perm filename AI.TXT[BB,DOC]3 blob
sn#829844 filedate 1986-12-09 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00281 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00035 00002 This file (AI.TXT[BB,DOC]) is currently volume 4 of the AI-LIST digest.
C00038 00003 ∂06-Jan-86 1349 LAWS@SRI-AI.ARPA AIList Digest V4 #1
C00063 00004 ∂08-Jan-86 1228 LAWS@SRI-AI.ARPA AIList Digest V4 #2
C00088 00005 ∂12-Jan-86 0022 LAWS@SRI-AI.ARPA AIList Digest V4 #3
C00107 00006 ∂12-Jan-86 0225 LAWS@SRI-AI.ARPA AIList Digest V4 #4
C00126 00007 ∂12-Jan-86 0425 LAWS@SRI-AI.ARPA AIList Digest V4 #5
C00150 00008 ∂15-Jan-86 1300 LAWS@SRI-AI.ARPA AIList Digest V4 #6
C00170 00009 ∂15-Jan-86 1525 LAWS@SRI-AI.ARPA AIList Digest V4 #7
C00185 00010 ∂15-Jan-86 1819 LAWS@SRI-AI.ARPA AIList Digest V4 #8
C00203 00011 ∂20-Jan-86 1619 LAWS@SRI-AI.ARPA AIList Digest V4 #10
C00219 00012 ∂20-Jan-86 1828 LAWS@SRI-AI.ARPA AIList Digest V4 #9
C00232 00013 ∂22-Jan-86 1323 LAWS@SRI-AI.ARPA AIList Digest V4 #11
C00253 00014 ∂22-Jan-86 1604 LAWS@SRI-AI.ARPA AIList Digest V4 #12
C00272 00015 ∂22-Jan-86 1833 LAWS@SRI-AI.ARPA AIList Digest V4 #13
C00291 00016 ∂24-Jan-86 1537 LAWS@SRI-AI.ARPA AIList Digest V4 #14
C00317 00017 ∂24-Jan-86 2029 LAWS@SRI-AI.ARPA AIList Digest V4 #15
C00335 00018 ∂29-Jan-86 2343 LAWS@SRI-AI.ARPA AIList Digest V4 #16
C00363 00019 ∂30-Jan-86 0155 LAWS@SRI-AI.ARPA AIList Digest V4 #17
C00387 00020 ∂30-Jan-86 0336 LAWS@SRI-AI.ARPA AIList Digest V4 #18
C00410 00021 ∂03-Feb-86 1355 LAWS@SRI-AI.ARPA AIList Digest V4 #19
C00427 00022 ∂07-Feb-86 1353 LAWS@SRI-AI.ARPA AIList Digest V4 #20
C00441 00023 ∂07-Feb-86 1707 LAWS@SRI-AI.ARPA AIList Digest V4 #21
C00463 00024 ∂10-Feb-86 0059 LAWS@SRI-AI.ARPA AIList Digest V4 #22
C00485 00025 ∂12-Feb-86 1615 LAWS@SRI-AI.ARPA AIList Digest V4 #23
C00502 00026 ∂12-Feb-86 2041 LAWS@SRI-AI.ARPA AIList Digest V4 #25
C00527 00027 ∂12-Feb-86 2316 LAWS@SRI-AI.ARPA AIList Digest V4 #24
C00543 00028 ∂14-Feb-86 0024 LAWS@SRI-AI.ARPA AIList Digest V4 #26
C00559 00029 ∂14-Feb-86 0240 LAWS@SRI-AI.ARPA AIList Digest V4 #27
C00587 00030 ∂16-Feb-86 2310 LAWS@SRI-AI.ARPA AIList Digest V4 #28
C00607 00031 ∂17-Feb-86 0055 LAWS@SRI-AI.ARPA AIList Digest V4 #29
C00624 00032 ∂17-Feb-86 0234 LAWS@SRI-AI.ARPA AIList Digest V4 #30
C00640 00033 ∂20-Feb-86 1558 LAWS@SRI-AI.ARPA AIList Digest V4 #31
C00676 00034 ∂21-Feb-86 0101 LAWS@SRI-AI.ARPA AIList Digest V4 #32
C00703 00035 ∂21-Feb-86 1323 LAWS@SRI-AI.ARPA AIList Digest V4 #33
C00732 00036 ∂23-Feb-86 1525 LAWS@SRI-AI.ARPA AIList Digest V4 #35
C00757 00037 ∂23-Feb-86 1748 LAWS@SRI-AI.ARPA AIList Digest V4 #34
C00780 00038 ∂26-Feb-86 1512 LAWS@SRI-AI.ARPA AIList Digest V4 #36
C00798 00039 ∂27-Feb-86 0523 LAWS@SRI-AI.ARPA AIList Digest V4 #37
C00817 00040 ∂27-Feb-86 0923 LAWS@SRI-AI.ARPA AIList Digest V4 #38
C00840 00041 ∂27-Feb-86 1407 LAWS@SRI-AI.ARPA AIList Digest V4 #39
C00859 00042 ∂28-Feb-86 0102 LAWS@SRI-AI.ARPA AIList Digest V4 #40
C00878 00043 ∂28-Feb-86 1313 LAWS@SRI-AI.ARPA AIList Digest V4 #41
C00895 00044 ∂04-Mar-86 0222 LAWS@SRI-AI.ARPA AIList Digest V4 #42
C00920 00045 ∂04-Mar-86 0435 LAWS@SRI-AI.ARPA AIList Digest V4 #43
C00944 00046 ∂06-Mar-86 1244 LAWS@SRI-AI.ARPA AIList Digest V4 #44
C00971 00047 ∂06-Mar-86 1616 LAWS@SRI-AI.ARPA AIList Digest V4 #45
C00983 00048 ∂06-Mar-86 1919 LAWS@SRI-AI.ARPA AIList Digest V4 #46
C01005 00049 ∂10-Mar-86 1450 LAWS@SRI-AI.ARPA AIList Digest V4 #47
C01035 00050 ∂10-Mar-86 1800 LAWS@SRI-AI.ARPA AIList Digest V4 #48
C01061 00051 ∂10-Mar-86 2039 LAWS@SRI-AI.ARPA AIList Digest V4 #49
C01081 00052 ∂11-Mar-86 2017 LAWS@SRI-AI.ARPA AIList Digest V4 #50
C01093 00053 ∂12-Mar-86 1530 LAWS@SRI-AI.ARPA AIList Digest V4 #51
C01114 00054 ∂13-Mar-86 1446 LAWS@SRI-AI.ARPA AIList Digest V4 #52
C01133 00055 ∂13-Mar-86 1828 LAWS@SRI-AI.ARPA AIList Digest V4 #53
C01150 00056 ∂14-Mar-86 1410 LAWS@SRI-AI.ARPA AIList Digest V4 #54
C01170 00057 ∂17-Mar-86 0124 LAWS@SRI-AI.ARPA AIList Digest V4 #55
C01194 00058 ∂17-Mar-86 0304 LAWS@SRI-AI.ARPA AIList Digest V4 #56
C01207 00059 ∂17-Mar-86 0509 LAWS@SRI-AI.ARPA AIList Digest V4 #57
C01233 00060 ∂17-Mar-86 0830 LAWS@SRI-AI.ARPA AIList Digest V4 #58
C01260 00061 ∂19-Mar-86 1558 LAWS@SRI-AI.ARPA AIList Digest V4 #59
C01285 00062 ∂19-Mar-86 1932 LAWS@SRI-AI.ARPA AIList Digest V4 #60
C01309 00063 ∂20-Mar-86 2011 LAWS@SRI-AI.ARPA AIList Digest V4 #61
C01342 00064 ∂20-Mar-86 2255 LAWS@SRI-AI.ARPA AIList Digest V4 #62
C01356 00065 ∂26-Mar-86 0128 LAWS@SRI-AI.ARPA AIList Digest V4 #63
C01383 00066 ∂26-Mar-86 1427 LAWS@SRI-AI.ARPA AIList Digest V4 #64
C01396 00067 ∂02-Apr-86 0307 LAWS@SRI-AI.ARPA AIList Digest V4 #65
C01422 00068 ∂02-Apr-86 0625 LAWS@SRI-AI.ARPA AIList Digest V4 #66
C01440 00069 ∂08-Apr-86 0207 LAWS@SRI-AI.ARPA AIList Digest V4 #67
C01463 00070 ∂08-Apr-86 0410 LAWS@SRI-AI.ARPA AIList Digest V4 #68
C01482 00071 ∂08-Apr-86 0713 LAWS@SRI-AI.ARPA AIList Digest V4 #69
C01506 00072 ∂09-Apr-86 0104 LAWS@SRI-AI.ARPA AIList Digest V4 #70
C01533 00073 ∂09-Apr-86 0328 LAWS@SRI-AI.ARPA AIList Digest V4 #71
C01564 00074 ∂09-Apr-86 0550 LAWS@SRI-AI.ARPA AIList Digest V4 #72
C01581 00075 ∂09-Apr-86 0826 LAWS@SRI-AI.ARPA AIList Digest V4 #73
C01603 00076 ∂10-Apr-86 0211 LAWS@SRI-AI.ARPA AIList Digest V4 #74
C01634 00077 ∂10-Apr-86 0441 LAWS@SRI-AI.ARPA AIList Digest V4 #75
C01662 00078 ∂10-Apr-86 2132 LAWS@SRI-AI.ARPA AIList Digest V4 #76
C01689 00079 ∂11-Apr-86 0355 LAWS@SRI-AI.ARPA AIList Digest V4 #77
C01722 00080 ∂11-Apr-86 0611 LAWS@SRI-AI.ARPA AIList Digest V4 #78
C01748 00081 ∂11-Apr-86 1031 LAWS@SRI-AI.ARPA AIList Digest V4 #79
C01774 00082 ∂12-Apr-86 0109 LAWS@SRI-AI.ARPA AIList Digest V4 #80
C01801 00083 ∂12-Apr-86 0312 LAWS@SRI-AI.ARPA AIList Digest V4 #81
C01828 00084 ∂12-Apr-86 0536 LAWS@SRI-AI.ARPA AIList Digest V4 #82
C01855 00085 ∂13-Apr-86 0153 LAWS@SRI-AI.ARPA AIList Digest V4 #83
C01883 00086 ∂13-Apr-86 0350 LAWS@SRI-AI.ARPA AIList Digest V4 #84
C01910 00087 ∂13-Apr-86 0519 LAWS@SRI-AI.ARPA AIList Digest V4 #85
C01922 00088 ∂13-Apr-86 2304 LAWS@SRI-AI.ARPA AIList Digest V4 #86
C01951 00089 ∂14-Apr-86 0117 LAWS@SRI-AI.ARPA AIList Digest V4 #87
C01975 00090 ∂14-Apr-86 0330 LAWS@SRI-AI.ARPA AIList Digest V4 #88
C02007 00091 ∂14-Apr-86 2331 LAWS@SRI-AI.ARPA AIList Digest V4 #89
C02031 00092 ∂15-Apr-86 0257 LAWS@SRI-AI.ARPA AIList Digest V4 #90
C02064 00093 ∂15-Apr-86 0907 LAWS@SRI-AI.ARPA AIList Digest V4 #91
C02097 00094 ∂15-Apr-86 2313 LAWS@SRI-AI.ARPA AIList Digest V4 #92
C02118 00095 ∂18-Apr-86 0117 LAWS@SRI-AI.ARPA AIList Digest V4 #93
C02141 00096 ∂18-Apr-86 0430 LAWS@SRI-AI.ARPA AIList Digest V4 #94
C02161 00097 ∂21-Apr-86 0157 LAWS@SRI-AI.ARPA AIList Digest V4 #95
C02194 00098 ∂22-Apr-86 0111 LAWS@SRI-AI.ARPA AIList Digest V4 #96
C02219 00099 ∂22-Apr-86 0324 LAWS@SRI-AI.ARPA AIList Digest V4 #97
C02253 00100 ∂22-Apr-86 0605 LAWS@SRI-AI.ARPA AIList Digest V4 #98
C02267 00101 ∂24-Apr-86 0049 LAWS@SRI-AI.ARPA AIList Digest V4 #99
C02297 00102 ∂24-Apr-86 0310 LAWS@SRI-AI.ARPA AIList Digest V4 #100
C02324 00103 ∂26-Apr-86 0132 LAWS@SRI-AI.ARPA AIList Digest V4 #101
C02360 00104 ∂26-Apr-86 0343 LAWS@SRI-AI.ARPA AIList Digest V4 #102
C02396 00105 ∂26-Apr-86 0542 LAWS@SRI-AI.ARPA AIList Digest V4 #103
C02430 00106 ∂28-Apr-86 1309 LAWS@SRI-AI.ARPA AIList Digest V4 #104
C02456 00107 ∂29-Apr-86 0115 LAWS@SRI-AI.ARPA AIList Digest V4 #105
C02487 00108 ∂29-Apr-86 0357 LAWS@SRI-AI.ARPA AIList Digest V4 #106
C02517 00109 ∂01-May-86 0320 LAWS@SRI-AI.ARPA AIList Digest V4 #107
C02542 00110 ∂01-May-86 0513 LAWS@SRI-AI.ARPA AIList Digest V4 #108
C02558 00111 ∂02-May-86 0214 LAWS@SRI-AI.ARPA AIList Digest V4 #109
C02598 00112 ∂02-May-86 0446 LAWS@SRI-AI.ARPA AIList Digest V4 #110
C02623 00113 ∂04-May-86 0035 LAWS@SRI-AI.ARPA AIList Digest V4 #111
C02657 00114 ∂04-May-86 0219 LAWS@SRI-AI.ARPA AIList Digest V4 #112
C02682 00115 ∂05-May-86 0008 LAWS@SRI-AI.ARPA AIList Digest V4 #113
C02710 00116 ∂05-May-86 0226 LAWS@SRI-AI.ARPA AIList Digest V4 #114
C02731 00117 ∂07-May-86 0151 LAWS@SRI-AI.ARPA AIList Digest V4 #115
C02757 00118 ∂08-May-86 1417 LAWS@SRI-AI.ARPA AIList Digest V4 #116
C02788 00119 ∂08-May-86 2356 LAWS@SRI-AI.ARPA AIList Digest V4 #117
C02822 00120 ∂09-May-86 0251 LAWS@SRI-AI.ARPA AIList Digest V4 #118
C02837 00121 ∂09-May-86 0506 LAWS@SRI-AI.ARPA AIList Digest V4 #119
C02857 00122 ∂14-May-86 1451 LAWS@SRI-AI.ARPA AIList Digest V4 #122
C02886 00123 ∂14-May-86 1755 LAWS@SRI-AI.ARPA AIList Digest V4 #123
C02911 00124 ∂15-May-86 1435 LAWS@SRI-AI.ARPA AIList Digest V4 #120
C02929 00125 ∂15-May-86 1827 LAWS@SRI-AI.ARPA AIList Digest V4 #121
C02952 00126 ∂20-May-86 0132 LAWS@SRI-AI.ARPA AIList Digest V4 #124
C02973 00127 ∂20-May-86 0405 LAWS@SRI-AI.ARPA AIList Digest V4 #125
C03001 00128 ∂23-May-86 1741 LAWS@SRI-AI.ARPA AIList Digest V4 #126
C03024 00129 ∂23-May-86 2122 LAWS@SRI-AI.ARPA AIList Digest V4 #127
C03050 00130 ∂27-May-86 0212 LAWS@SRI-AI.ARPA AIList Digest V4 #128
C03073 00131 ∂27-May-86 0439 LAWS@SRI-AI.ARPA AIList Digest V4 #129
C03093 00132 ∂27-May-86 1346 LAWS@SRI-AI.ARPA AIList Digest V4 #130
C03116 00133 ∂27-May-86 1719 LAWS@SRI-AI.ARPA AIList Digest V4 #131
C03142 00134 ∂28-May-86 1319 LAWS@SRI-AI.ARPA AIList Digest V4 #132
C03166 00135 ∂28-May-86 1641 LAWS@SRI-AI.ARPA AIList Digest V4 #133
C03193 00136 ∂30-May-86 1209 LAWS@SRI-AI.ARPA AIList Digest V4 #134
C03217 00137 ∂03-Jun-86 0111 LAWS@SRI-AI.ARPA AIList Digest V4 #135
C03237 00138 ∂03-Jun-86 0325 LAWS@SRI-AI.ARPA AIList Digest V4 #136
C03266 00139 ∂03-Jun-86 0543 LAWS@SRI-AI.ARPA AIList Digest V4 #137
C03293 00140 ∂04-Jun-86 0034 LAWS@SRI-AI.ARPA AIList Digest V4 #138
C03318 00141 ∂04-Jun-86 0313 LAWS@SRI-AI.ARPA AIList Digest V4 #139
C03358 00142 ∂04-Jun-86 0548 LAWS@SRI-AI.ARPA AIList Digest V4 #140
C03391 00143 ∂04-Jun-86 2330 LAWS@SRI-AI.ARPA AIList Digest V4 #141
C03424 00144 ∂05-Jun-86 0157 LAWS@SRI-AI.ARPA AIList Digest V4 #142
C03441 00145 ∂06-Jun-86 1321 LAWS@SRI-AI.ARPA AIList Digest V4 #143
C03466 00146 ∂10-Jun-86 0025 LAWS@SRI-AI.ARPA AIList Digest V4 #144
C03502 00147 ∂10-Jun-86 0313 LAWS@SRI-AI.ARPA AIList Digest V4 #145
C03536 00148 ∂10-Jun-86 0547 LAWS@SRI-AI.ARPA AIList Digest V4 #146
C03569 00149 ∂10-Jun-86 0910 LAWS@SRI-AI.ARPA AIList Digest V4 #147
C03602 00150 ∂12-Jun-86 0201 LAWS@SRI-AI.ARPA AIList Digest V4 #148
C03619 00151 ∂16-Jun-86 0108 LAWS@SRI-AI.ARPA AIList Digest V4 #149
C03642 00152 ∂16-Jun-86 0315 LAWS@SRI-AI.ARPA AIList Digest V4 #150
C03674 00153 ∂17-Jun-86 1821 LAWS@SRI-AI.ARPA AIList Digest V4 #151
C03701 00154 ∂17-Jun-86 2129 LAWS@SRI-AI.ARPA AIList Digest V4 #152
C03732 00155 ∂18-Jun-86 0006 LAWS@SRI-AI.ARPA AIList Digest V4 #153
C03757 00156 ∂23-Jun-86 0128 LAWS@SRI-AI.ARPA AIList Digest V4 #154
C03773 00157 ∂23-Jun-86 0312 LAWS@SRI-AI.ARPA AIList Digest V4 #155
C03799 00158 ∂25-Jun-86 0116 LAWS@SRI-AI.ARPA AIList Digest V4 #156
C03823 00159 ∂25-Jun-86 0329 LAWS@SRI-AI.ARPA AIList Digest V4 #157
C03854 00160 ∂26-Jun-86 1657 LAWS@SRI-AI.ARPA AIList Digest V4 #158
C03877 00161 ∂01-Jul-86 1240 LAWS@SRI-AI.ARPA AIList Digest V4 #159
C03892 00162 ∂01-Jul-86 1558 LAWS@SRI-AI.ARPA AIList Digest V4 #160
C03919 00163 ∂07-Jul-86 1258 LAWS@SRI-AI.ARPA AIList Digest V4 #161
C03948 00164 ∂07-Jul-86 1531 LAWS@SRI-AI.ARPA AIList Digest V4 #162
C03972 00165 ∂07-Jul-86 1951 LAWS@SRI-AI.ARPA AIList Digest V4 #163
C03996 00166 ∂10-Jul-86 0152 LAWS@SRI-AI.ARPA AIList Digest V4 #165
C04027 00167 ∂10-Jul-86 0249 LAWS@SRI-AI.ARPA AIList Digest V4 #164
C04051 00168 ∂14-Jul-86 1428 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #166
C04081 00169 ∂16-Jul-86 1551 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #167
C04100 00170 ∂18-Jul-86 1531 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #168
C04124 00171 ∂18-Jul-86 2216 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #169
C04149 00172 ∂19-Jul-86 0036 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #170
C04178 00173 ∂22-Jul-86 1340 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #171
C04201 00174 ∂24-Jul-86 1402 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #172
C04224 00175 ∂24-Jul-86 1721 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #173
C04246 00176 ∂31-Jul-86 2159 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #174
C04263 00177 ∂01-Aug-86 0034 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #175
C04289 00178 ∂04-Aug-86 0059 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #176
C04312 00179 ∂09-Aug-86 0237 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #177
C04338 00180 ∂09-Aug-86 0431 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #178
C04355 00181 ∂12-Aug-86 1821 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #179
C04362 00182 ∂16-Sep-86 0515 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #180
C04390 00183 ∂17-Sep-86 1608 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #181
C04419 00184 ∂17-Sep-86 2046 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #182
C04445 00185 ∂18-Sep-86 0321 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #183
C04465 00186 ∂18-Sep-86 1518 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #184
C04486 00187 ∂18-Sep-86 1931 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #185
C04513 00188 ∂18-Sep-86 2245 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #186
C04536 00189 ∂19-Sep-86 1549 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #187
C04550 00190 ∂19-Sep-86 1909 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #188
C04579 00191 ∂19-Sep-86 2115 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #189
C04598 00192 ∂19-Sep-86 2321 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #190
C04623 00193 ∂20-Sep-86 0139 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #191
C04652 00194 ∂21-Sep-86 0022 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #192
C04674 00195 ∂21-Sep-86 0150 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #193
C04701 00196 ∂21-Sep-86 0317 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #194
C04728 00197 ∂25-Sep-86 0011 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #195
C04756 00198 ∂25-Sep-86 0243 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #196
C04787 00199 ∂25-Sep-86 0528 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #197
C04816 00200 ∂26-Sep-86 1659 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #198
C04844 00201 ∂26-Sep-86 2251 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #199
C04876 00202 ∂29-Sep-86 0011 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #200
C04894 00203 ∂29-Sep-86 0153 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #201
C04926 00204 ∂29-Sep-86 0351 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #202
C04963 00205 ∂06-Oct-86 0020 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #203
C04989 00206 ∂06-Oct-86 0210 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #204
C05019 00207 ∂06-Oct-86 0348 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #205
C05045 00208 ∂06-Oct-86 0551 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #206
C05075 00209 ∂07-Oct-86 1248 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #207
C05100 00210 ∂09-Oct-86 0301 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #208
C05134 00211 ∂09-Oct-86 0449 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #209
C05158 00212 ∂09-Oct-86 0739 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #210
C05195 00213 ∂09-Oct-86 2304 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #211
C05219 00214 ∂10-Oct-86 1438 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #212
C05236 00215 ∂14-Oct-86 0015 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #213
C05253 00216 ∂14-Oct-86 1234 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #214
C05275 00217 ∂14-Oct-86 1615 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #215
C05299 00218 ∂16-Oct-86 0008 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #216
C05315 00219 ∂16-Oct-86 0248 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #217
C05339 00220 ∂16-Oct-86 0507 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #218
C05356 00221 ∂16-Oct-86 0807 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #219
C05386 00222 ∂17-Oct-86 0045 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #220
C05421 00223 ∂17-Oct-86 0308 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #221
C05443 00224 ∂17-Oct-86 0526 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #222
C05475 00225 ∂17-Oct-86 0840 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #223
C05509 00226 ∂18-Oct-86 2246 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #224
C05534 00227 ∂19-Oct-86 0043 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #225
C05552 00228 ∂19-Oct-86 0252 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #226
C05582 00229 ∂19-Oct-86 0434 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #227
C05605 00230 ∂19-Oct-86 0624 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #228
C05629 00231 ∂23-Oct-86 0121 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #229
C05648 00232 ∂23-Oct-86 0423 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #230
C05667 00233 ∂23-Oct-86 0713 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #231
C05685 00234 ∂24-Oct-86 0205 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #232
C05708 00235 ∂24-Oct-86 0652 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #233
C05744 00236 ∂24-Oct-86 1125 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #234
C05763 00237 ∂26-Oct-86 2349 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #235
C05778 00238 ∂27-Oct-86 0145 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #236
C05815 00239 ∂27-Oct-86 0331 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #237
C05838 00240 ∂27-Oct-86 0524 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #238
C05857 00241 ∂30-Oct-86 0200 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #239
C05879 00242 ∂30-Oct-86 0420 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #240
C05898 00243 ∂30-Oct-86 0724 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #241
C05933 00244 ∂30-Oct-86 1229 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #242
C05966 00245 ∂03-Nov-86 0232 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #243
C05988 00246 ∂03-Nov-86 0424 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #244
C06012 00247 ∂05-Nov-86 0202 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #245
C06040 00248 ∂05-Nov-86 0405 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #246
C06062 00249 ∂05-Nov-86 0710 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #247
C06087 00250 ∂05-Nov-86 1055 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #248
C06123 00251 ∂05-Nov-86 1423 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #249
C06149 00252 ∂07-Nov-86 1725 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #250
C06184 00253 ∂07-Nov-86 1940 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #251
C06203 00254 ∂07-Nov-86 2215 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #252
C06224 00255 ∂08-Nov-86 0130 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #253
C06247 00256 ∂08-Nov-86 0306 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #254
C06268 00257 ∂08-Nov-86 0433 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #255
C06290 00258 ∂08-Nov-86 0624 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #256
C06320 00259 ∂08-Nov-86 0803 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #257
C06342 00260 ∂12-Nov-86 0126 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #258
C06374 00261 ∂12-Nov-86 0350 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #259
C06398 00262 ∂19-Nov-86 0039 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #260
C06423 00263 ∂19-Nov-86 0234 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #261
C06445 00264 ∂20-Nov-86 0143 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #262
C06471 00265 ∂20-Nov-86 0346 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #263
C06506 00266 ∂20-Nov-86 0527 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #264
C06524 00267 ∂24-Nov-86 0236 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #265
C06550 00268 ∂24-Nov-86 0441 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #266
C06586 00269 ∂25-Nov-86 2314 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #267
C06614 00270 ∂26-Nov-86 0131 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #268
C06649 00271 ∂26-Nov-86 0358 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #269
C06682 00272 ∂30-Nov-86 1623 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #270
C06703 00273 ∂30-Nov-86 1803 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #271
C06722 00274 ∂30-Nov-86 1954 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #272
C06756 00275 ∂01-Dec-86 2313 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #273
C06789 00276 ∂02-Dec-86 0114 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #274
C06809 00277 ∂02-Dec-86 0308 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #275
C06839 00278 ∂02-Dec-86 0450 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #276
C06860 00279 ∂04-Dec-86 0041 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #277
C06881 00280 ∂04-Dec-86 0234 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #278
C06897 00281 ∂04-Dec-86 0437 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #279
C06925 ENDMK
C⊗;
This file (AI.TXT[BB,DOC]) is currently volume 4 of the AI-LIST digest.
The digests are edited by Ken Laws from SRI. To get added to the list
send mail to AIList-REQUEST@SRI-AI; better yet use CKSUM to read this
file.
Mail your submissions to AIList@SRI-AI.
Pointers to previous volumes:
Volume 1 (#1 to #117) of AI-LIST has been archived in file AI.V1[BB,DOC].
Volume 2 (#1 to #184) of AI-LIST has been archived in file AI.V2[BB,DOC].
Volume 3 (#1 to #193) of AI-LIST has been archived in file AI.V3[BB,DOC].
The old volumes will not be kept on the disk, although they'll be available
from backup tape if necessary. Archive files are probably available online
at SRI-AI.
∂06-Jan-86 1349 LAWS@SRI-AI.ARPA AIList Digest V4 #1
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Jan 86 13:49:04 PST
Date: Mon 6 Jan 1986 11:10-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #1
To: AIList@SRI-AI
AIList Digest Monday, 6 Jan 1986 Volume 4 : Issue 1
Today's Topics:
Policy - Welcome & Technology Export Policy,
Games - Wargamers List & Othello Tournament & Computer Chess Tutor
----------------------------------------------------------------------
Date: Sun 5 Jan 86 23:21:57-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Introduction to Volume 4
Welcome to AIList'86. We went through 193 issues last year, and
a high proportion of that was interesting and perhaps even useful.
For those who haven't seen the official welcome message in the
last 2 1/2 years, I've been telling the new arrivals that the list
topics are:
Expert Systems AI Techniques
Knowledge Representation Knowledge Acquisition
Problem Solving Hierarchical Inference
Machine Learning Pattern Recognition
Analogical Reasoning Data Analysis
Cognitive Psychology Human Perception
AI Languages and Systems Machine Translation
Theorem Proving Decision Theory
Logic Programming Computer Science
Automatic Programming Information Science
I like to think of AIList as the forum for AI and pattern recognition,
although we've had precious little of the latter.
There are a number of related lists, some sparked by the success of
AIList. Prolog-Digest@SU-SCORE was here first, of course, and I thank
Chuck Restivo for the help he gave me in getting started.
Human-Nets@RUTGERS also served as a template for AIList. Recently
created lists are Soft-Eng@MIT-XX for programming languages and
man-machine interfaces; Vision-List@AIDS-UNIX for vision algorithms;
AI-Ed@SUMEX-AIM for AI in education (CAI, tutoring systems, user
modeling, cognitive learning, etc.); PARSYM@SUMEX for parallel
symbolic computing; IRList%VPI.CSNet@CSNet-Relay for information
retrieval; MetaPhilosophers%MIT-OZ@MIT-MC list for philosophy
discussions; and the Usenet net.math.symbolic and computers-and
society discussions. Discussions of workstations and individual
languages are carried on WorkS@Rutgers, SLUG@UTexas (Symbolics), and
Scheme@MIT-MC. (If someone wants to spin off other topics, such as
linguistics, seminar announcements, etc., I'll be glad to help.)
The digest goes out to a great many readers via bboards, redistribution
nodes, and Usenet forwarding. I know that places like MIT and Xerox
have hundreds of readers, but I don't have even a rough estimate of
the total readership. My direct distribution (after 738 revisions) is to
Arpanet Hosts:
ACC(BB+1), AEROSPACE(8), AFSC-SD, AIDS-UNIX, ALLEGRA@BTL, AMES-NAS,
AMES-VMSB(4), AMSAA(3), ANL-MCS, APG-3(2), ARDC(3), ARE-PN@UCL-CS,
ARI-HQ1(10), BBN(1), BBNA(BB+1), BBNCC4, BBNCCH(2), BBNCCS, BBNCCT(3),
BBNCCX, BBNCCY(2), BBN-CLXX, BBNF, BBNG(14), BBN-LABS-B, BBN-MENTOR,
BBN-META, BBN-SPCA, BBN-UNIX(9), BBN-VAX(6), B.CC@BERKELEY,
D@BERKELEY.EDU, UCBVAX.BERKELEY.EDU, UCBCAD@BERKELEY(2),
UCBCORY@BERKELEY, UCBDALI@BERKELEY(2), UCBERNIE@BERKELEY(4),
UCBESVAX@BERKELEY, UCBIC@BERKELEY, UCBLAPIS@BERKELEY, BNL44,
BRL(BB+1), BRL-VOC, C.MFENET@LLL-MFE, CECOM-1, CECOM-2,
PCO@CISL-SERVICE-MULTICS, CIT-20, CIT-HAMLET, CIT-VAX, CMU-CS-A(BB+5),
CMU-CS-G(2), CMU-RI-ISL1, COLUMBIA-20, CORNELL(BB+1),
CRYS.WISC.EDU, CRDC-VAX2, CSNet-SH, DCA-EMS(2), DCT%DDXA@UCL-CS, DDN,
HUDSON.DEC.COM(2), DEC-MARLBORO(2), Other.DEC@DECWRL(23), DMC-CRC,
DOCKMASTER(2), DREA-XX, EDN-UNIX, EDN-VAX(2), EDWARDS-2060, EGLIN-VAX,
ETL-AI, FORD-COS1, FORD-SCF1(2), FSU.MFENET@LLL-MFE(3), GE-CRD(2),
SCH-GODZILLA@SCRC-STONY-BROOK(2), GSWD-VMS(BB+1), GUNTER-ADAM,
GWUVM@MIT-MULTICS(3), HARV-10, HAWAII-EMH, HI-MULTICS(BB+1),
CSCKNP@HI-MULTICS, HOPKINS, IBM-SJ, ISIA, ISI-VAXA(10), JPL-VAX,
JPL-VLSI(7), KESTREL, LANL, LBL-CSAM, LLLASD.DECNET@LLL-CRG, LLL-CRG,
LLL-MFE(6), CMA@LLL-MFE, DMA@LLL-MFE, ORN@LLL-MFE, PPL@LLL-MFE(2),
SAI@LLL-MFE, LL-VLSI, LL-XN, LOGICON, MARYLAND, MCC-DB@MCC(2),
AI@MCC(2), CAD@MCC, PP@MCC, MIT-MC, MIT-MULTICS, ADL@MIT-MULTICS,
MIT-OZ@MIT-MC, MITRE(14), MITRE-BEDFORD, MITRE-GATEWAY(3), MOUTON,
MWCAMIS@MITRE, MWVM@MITRE, NADC(9), NBS-VMS, NCSC(3), NLM-MCS,
NOSC(BB+4), CCVAX@NOSC, NOSC-F4(BB+5), COD@NOSC(5), TETRA@NOSC,
NPRDC(BB+3), NRL-AIC, NRL-CSS, NSWC-WO(2), NTA-VAX(BB+2), NTSC-74,
NUSC, NYU, NYU-CSD2, OAK.SAINET.MFENET@LLL-MFE, MDC@OFFICE-1, OMNILAX,
ORNL-MSR(BB+1), OSLO-VAX, PAXRV-NES, PURDUE, RADC-MULTICS,
RADC-TOPS20, RAND-UNIX(BB+1), RDG.AM.UTS@UCL-CS, RIACS, RICE,
ROCHESTER(3), RUTGERS(BB+1), SAIL(BB+3), SAN.SAINET.MFENET@LLL-MFE,
SANDIA-CAD, SCRC-STONY-BROOK(5), SECKENHEIM-EMH, SIMTEL20,
SRI-AI(BB+6), SRI-CSL, SRI-KL(19), SRI-NIC(BB+1), SRI-SPAM,
SRI-TSC(3), SRN-VAX, STL-HOST1, SU-AMADEUS@SU-SCORE, SU-CSLI(BB+1),
SU-GSB-HOW(2), SUMEX(BB+3), SU-PSYCH(3), SU-SCORE(BB+8),
SU-SIERRA(BB+2), SU-SUSHI(4), SYMBOLICS(2), TKOV02.DEC@DECWRL, UCBKIM,
UCL-CS(BB+1), CAMJENNY@UCL-CS, UK.AC.EDINBURGH@UCL-CS(2),
RLGM@UCL-CS(3), UCLA-LOCUS(BB+2), UCSD, UDEL, A.CS.UIUC.EDU,
MIMSY.UMD.EDU, VAX.NR.UNINETT@NTA-VAX, VAX.RUNIT.UNIT.UNINETT@NTA-VAX(3),
USC-ECL, USC-ISI(8), USC-ISIB(BB+5), USC-ISIF(6), UTAH-20(BB+2),
UTEXAS-20, WASHINGTON(4), WHARTON-10(2), WHITNEY, WISC-AI,
WISC-CRYS(5), WISC-GUMBY, WISC-PIPE, WISC-RSCH(2), WISCVM,
WPAFB-INFO1, WPAFB-AFITA, WSMR04, WSMR06, XEROX, YALE
CSNet:
BGSU, BOSTONU(3), BRANDEIS, BROWN, BUFFALO, CLEMSON(3), COLGATE,
COLOSTATE, DEPAUL, GATECH, GERMANY, GMR(12), GTE-LABS(2), HP-BRONZE,
HP-LABS, SJRLVM1%IBM-SJ, WLVM1%IBM-SJ, IRO.UDEM.CDN%UBC,
CSKAIST%KAIST(2), LOSANGEL%IBM-SJ, LSU, NMSU(2), NORTHEASTERN(11),
OKSTATE, PITT, RPICS, SCAROLINA(3), SMU(BB+1), SPERRY-RESTON, TAMU,
SPY%TEKTRONIX, TEKCHIPS%TEKTRONIX, TEKIG5%TEKTRONIX, TEKGVS%TEKTRONIX,
TEKLDS%TEKTRONIX, TENNESSEE, CSL60%TI-CSL(BB+1), TI-EG, UTAI%TORONTO,
TUFTS, UBC, UCF-CS, UCI, UCSC, UIOWA(BB+1), ARCHEBIO%UIUC,
UIUCDCSB%UIUC, ULOWELL(2), UMASS-CS, UMN-CS, UNC, UPENN, VIRGINIA,
VPI, WWU(2), YKTVMV@IBM-SJ(7)
BITNET@WISCVM:
BOSTONU(2), BROWNVM, BUCASA, BUCKNELL, CARLETON, CBEBDA3T, CGEUGE51,
CUNYVM(2), CZHRZU1A, DB0TUI11, DBNGMD21, DBNRHRZ1(2), DBSTU1,
DDATHD21, DDATHD21, DHDURZ2(2), HNYKUN52, HNYKUN53, HWALHW5, ICNUCEVM,
IDUI1, IPACUC, ISRAEARN, NJECNVM, NSNCCVM, RYERSON(2), SBBIOVM,
SLACVM.WISC.EDU, SUCASE, UCF1VM(2), UCONNVM, UHUPVM1, UKCC(2), ULKYVX,
UMCVMB, VTVM1, WISDOM, WSUVM1
BITNET@BERKELEY:
CORNELLA(2), HLERUL5, UTCVM(2), VPIVM2, WESLYN
Mailnet@MIT-MULTICS:
Grinnell, NJIT-EIES, RPI-MTS, UMich-MTS, VANDERBILT
Usenet Paths:
bellcore@BERKELEY,
franz@BERKELEY,
ucscc@BERKELEY,
sdcsvax!sdamos!crash@NOSC,
mcvax!inria!imag!csinn@SEISMO.CSS.GOV,
dec-rhea!dec-gvaic3@DECWRL,
mcvax!enea!erix@SEISMO,
mcvax!cernvax!ethz@SEISMO.CSS.GOV,
packard!ihesa@SEISMO.CSS.GOV,
mcvax!ircam@SEISMO.CSS.GOV,
mcvax!inria!lasso@SEISMO.CSS.GOV,
mcvax!cernvax!unizh@SEISMO,
tflop@SU-SHASTA,
vitesse!vec←j@S1-C
This includes something like 35 government and military sites, 15
national laboratories and research institutes, 55 companies and
nonprofit corporations, and 100 universities around the world.
That's just the direct mailings; my thanks to the many people who
have established and maintained local bboards and remailers.
About a year ago I began to worry that the international nature of the
list might violate President Reagan's directives concerning
unclassified technical (export restricted) and unclassified national
security-related (UNS-R) information. (The list goes to Canada,
Britain, Australia, West Germany, Norway, the Netherlands,
Switzerland, Japan, South Korea, etc. Readers in these countries have
also contributed to the list, of course.) I sent out some queries
and received a great deal of informed discussion, but there were no
firm precedents for determining whether we were headed for trouble.
The whole file is available for those who want it; just write to
AIList-Request@SRI-AI.ARPA. I have attempted, at least four times, to
summarize the material, but have been unable to do so without losing
the critical context of each opinion. The policy I have settled on
(subject to revision) is the following:
AIList is a public information service provided to the Arpanet
community and others through my own efforts, indirect support from
my company, and the help of numerous individuals and organizations
at other sites. Readers are advised not to submit any material
that is export controlled or classified. As moderator, I must
assume that individuals have obtained all required clearances for
their submissions to the list and for the university bboard messages
that AIList occasionally reprints. The export control laws are both
broad and vague, but material that could be published in news magazines
or publicly available scientific journals is probably safe. Scientific
information "without engineering or military significance" is always
permissible, but technical details of specific military or government-
controlled systems should not be discussed in this forum.
I would also like to point out that, in my own opinion, technology transfer
via informed discussion and incremental question/answer exchanges can be
far more effective than by flooding a channel with printed technical material.
Indeed, that is the very reason for AIList's existence -- to put people in
touch with those who can help them the most. Readers at government-supported
sites should keep in mind that any exchanges of reports or technical data
resulting from "friendly contacts" on the AIList are their own responsibility,
and that care should be taken when communicating over insecure channels or
with unknown individuals.
For those participants who regard the above as paranoid, I apologize for
any offense. The critical decisions concerning U.S. policy and network
policy are not mine to make; I merely interpret them as best I can. I am
comfortable with the level of exchange that AIList has promoted, and
grateful for the broad participation that has made the list such a success.
-- Dr. Kenneth I. Laws
Computer Scientist
SRI International
------------------------------
Date: Fri 3 Jan 86 23:39:12-EST
From: "Daniel F. Lane" <GZT.LANE@OZ.AI.MIT.EDU>
Subject: Wargamers!
[Forwarded from the MIT bboard by Laws@SRI-AI.]
To anyone interested in wargames, or strategy games in general: SWF at OZ
and I are starting a wargamers mailing list. Discuss the latest games
out on the market, etc. Also, as soon as we get it all organized (if?) we
will be running a game over the net called "Battle for North America" or,
"The Second Battle-Between-the-States" (whichever you prefer). But, more
about that after we get the group rolling. Send any addition requests to:
WAR-GAMES-REQUEST%MIT-OZ@MIT-MC. Thanks, Daniel Lane (GZT.LANE@OZ)
------------------------------
Date: Thu, 2 Jan 86 17:10 EST
From: Kurt Godden <godden%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Computer Othello Tournament
I received an announcement in the mail and am passing it along to ailist
out of the goodness of my heart. I am not connected in any way with the
tournament (other than as an entrant):
1986 North American Computer Othello Championship Tournament
Host: CS Association at California State University, Northridge
When: February 15-16, 1986.
Where: Cal State campus in Northridge (LA area)
Sanctioned by: U.S. Othello Association
From flier: "...an eight-round, Swiss-style event with awards for the winners,
and is open to computers of all makes, models, and sizes.
Participation from programmers anywhere in the world is welcome;
entrants need not be present, as they may play via phone or submit
software and/or hardware to be run by volunteer representatives."
For detailed info you are requested to contact
1) North American Computer Othello Championship
CSUN Computer Science Association
School of Engineering, Box 31
18111 Nordhoff Street
Northridge, CA 91330
2) Brian Swift or Marc Furon (apparently pronounced ['fju ren] -KG) at
213-852-5096
Please don't contact me.
-Kurt Godden
p.s. Presumably it's necessary or at least polite to note that 'Othello'
is a registered trademark of CBS Toys.
------------------------------
Date: Mon 6 Jan 86 07:17:05-EST
From: "Fred Hapgood" <SIDNEY.G.HAPGOOD@OZ.AI.MIT.EDU>
Subject: computer chess tutor
Would anybody know who might be thinking about
tutor/annotator functions in chess computers?
The simplest imaginable computer chess tutor might work like
this: After one had played a game against it one would indicate
the moves one wished to see annotated. The machine would retrieve
the positions in that range on which you had the move. For each
it would run its evaluation routine to see what move it would
have made had it been playing. It would then score both (a) the
position resulting from your move and (b) that resulting from the
move generated by its own routine. This done, it would move on to
the next move in the series and repeat the procedure. One could
of course enter an entire score, perhaps from a newspaper, and
have the computer perform this function for the moves of both
sides.
When the list was exhausted the machine would find all the
cases in which the evaluator routine scored a difference between
(a) and (b) of more than a defined amount. It would then display
these cases either by replaying the game and stopping at the
points found, or in order of greatest disparity, i.e., biggest
blunder first. In either case display would consist of: (i) the
original position, (ii) the move actually made, and (iii) the
improvement claimed by the machine, together with a short list of
the best subsequent moves for both sides.
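The comparison-and-sorting pass described above can be sketched in a few lines. This is a hypothetical shape only: the `Annotation` record, the pawn-unit scores, and the threshold are all invented for illustration, and a real tutor would obtain both scores from the machine's own evaluation routine.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    move_no: int
    played_score: float  # (a) evaluation of the position after the move made
    best_score: float    # (b) evaluation after the machine's preferred move

    @property
    def disparity(self) -> float:
        return self.best_score - self.played_score

def find_blunders(game, threshold):
    """Return positions whose score difference exceeds the threshold,
    biggest blunder first."""
    flagged = [a for a in game if a.disparity > threshold]
    return sorted(flagged, key=lambda a: a.disparity, reverse=True)

# Invented scores, in pawn units.
game = [
    Annotation(12, played_score=-0.3, best_score=0.2),  # small slip
    Annotation(23, played_score=-3.1, best_score=0.4),  # big blunder
    Annotation(31, played_score=0.1, best_score=0.1),   # machine agrees
]
print([a.move_no for a in find_blunders(game, threshold=0.25)])  # → [23, 12]
```

Sorting by disparity gives the "biggest blunder first" display order; replaying the game and stopping at the flagged move numbers gives the other.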
This is only the simplest instance of how a machine might
comment on a position or 'explain' itself.
From a marketing point of view, one virtue of these devices
is that tutors can never get too strong. A person buying a chess
computer as an opponent is likely to drop out of the market for
new versions once the machines have gotten strong enough to be a
challenge. What is being sold in an annotator is authority, and
one can never get enough of that. In fact, it is possible that as
chess computers improve, the forces driving that market will
shift from using chess computers as calculators to using them as
annotators.
------------------------------
End of AIList Digest
********************
∂08-Jan-86 1228 LAWS@SRI-AI.ARPA AIList Digest V4 #2
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Jan 86 12:28:44 PST
Date: Wed 8 Jan 1986 08:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #2
To: AIList@SRI-AI
AIList Digest Wednesday, 8 Jan 1986 Volume 4 : Issue 2
Today's Topics:
Corrections - Feigenbaum's Comments & Xerox Reader Count,
Query - AI Paradigm,
Review - Stanford SDI Debate (12/19)
----------------------------------------------------------------------
Date: Sun 29 Dec 85 22:18:33-PST
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM.ARPA>
Subject: re the news report on my speech in the Netherlands
Saw my name in the Fri 27 Dec. 1985 AIList Digest V3 #192.
Since it's best not to let silly things propagate, let me say
here what I said (I actually said many many things; I don't understand
why those few things were picked out).
I said that among the most commercially important applications of
expert systems in the next ten years would be factory management
applications and financial service applications. (I didn't even
mention factory automation).
I said that speech understanding applications would become economically
very important. (I never mentioned speech generation.)
Best wishes for a journalistically accurate New Year (fat chance),
Ed Feigenbaum
------------------------------
Date: 6 Jan 86 16:22 PST
From: Newman.pasa@Xerox.ARPA
Subject: Reader Count
I don't know if you want to post this to the net or not, but in the
interest of accuracy, Xerox has approximately 248 readers of the AIList.
>>Dave
------------------------------
Date: Tuesday, 7 January 1986 02:19:31 EST
From: Duvvuru.Sriram@cive.ri.cmu.edu
Subject: AI Paradigm
I have seen the word "AI Paradigm" in several papers/reports. My dictionary
[Random House] says that a Paradigm is either an example or a model. Is
there any other meaning to it or is it just a better word for "example"?
Sriram
------------------------------
Date: Tue, 31 Dec 85 11:55:37 pst
From: jrisberg@aids-unix (Jeff Risberg)
Subject: Stanford SDI debate (12/19) summary
The following is a somewhat long summary of the technical debate on SDI
entitled "SDI: How Feasible, How Useful, How Robust?" that was held at
Stanford on December 19th. Since this debate was announced on AILIST,
we felt that readers would be interested in this summary.
[I was reluctant to permit the initial announcement and I am reluctant
to permit the summary. I have decided to forward them because SDI
may well involve major funding in the area of AI. Please restrict any
discussion in AIList to the areas of AI, pattern recognition, or the
feasibility of distributed decision making. Political discussions
would be more appropriate on Arms-D@MIT-MC, Risks@SRI-NIC, or
perhaps Space@MIT-MC. -- KIL]
The panelists at the debate were:
Advocates:
Professor Richard Lipton, Professor of Computer Science at Princeton
University, current member of SDIO's Panel on Computing in Support of Battle
Management.
Major Simon Peter Worden, the Special Assistant to the Director of the SDIO
and Technical Advisor to the Nuclear and Space Arms Talks with the USSR
in Geneva.
Opponents:
Dr. Richard L. Garwin, IBM Fellow and Adjunct Professor of Physics at
Columbia University, Physicist and Defense Consultant.
Professor David Parnas, Lansdown Professor of Computer Science at the
University of Victoria, former member of the SDI Organization's
Panel on Computing in Support of Battle Management.
Dr. Goldberger, President of CalTech, served as the moderator of the
discussion. He presented a bit of history relating to the subject of
defensive warfare and then allowed the panelists to speak. There are
certainly historical precedents for defensive systems, in fact, each US
leader since the 1950's has sought a defense. SDI is simply the largest
scale and most visible concept to date.
Because of the complexity of the issue, a question like "can it work?"
can only be answered by determining 'what does "can" mean?', 'what does
"it" mean?', and 'what does "work" mean?'. There have been various
justifications proposed for SDI, and the technical and political
community has raised numerous questions.
The format of the debate consisted of three sections: during the first
section, each speaker was allowed 20 minutes with which to present his
case; following that, each speaker had a 5-minute rebuttal period;
finally, the audience was allowed to ask questions via screened 3 x 5
index cards.
Major Worden spoke first and discussed some positive aspects of SDI. In
his view, the principal justification is arms control: a major goal is
that the SDI system have a lower marginal cost
than that of building additional offensive systems. Survivability is
another goal.
He said that the object of SDIO is to establish the feasibility of the
system, but not to build it. Similarly, it may not involve space
weapons, although most of the current concepts include a space segment.
We would also like to get the Soviets to admit their own work in such
systems. They regularly deny such work, but when we show them aerial
photos of their high-power lasers, they say, "Oh, we do have laser
research for medical purposes".
The numerical aspect of a defense-reliant deterrence is that each
layer of defense drives the number of offensive warheads needed up
further. There are a series of layers, with each layer consisting of
sensors, weapons, and battle management systems. He showed some of the
standard slides of this design. The concept is "Proliferated and
Distributed". They are planning for the late 1990's.
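That numerical claim can be illustrated with back-of-envelope arithmetic; the leakage figures below are assumptions for illustration, not numbers from the talk, and the layers are assumed independent and equally effective.

```python
from math import ceil

def penetration(leakage_per_layer: float, layers: int) -> float:
    """Fraction of warheads expected to get through all layers."""
    return leakage_per_layer ** layers

def warheads_needed(target_hits: int, leakage: float, layers: int) -> int:
    """Offensive warheads required to land a given number of hits."""
    return ceil(target_hits / penetration(leakage, layers))

# Three layers that each stop 90% of incoming warheads force a
# thousand-fold offensive build-up to land the same 100 warheads.
print(warheads_needed(100, 0.1, 3))  # → 100000
```

Each added layer multiplies the attacker's required force by the reciprocal of that layer's leakage, which is the sense in which defense raises the marginal cost of offense.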
The key issue as he sees it is countermeasures, such as the fast burn
booster. There are three types of threats and five types of
countermeasures. He expected that countermeasures will develop much as
aircraft gained shielding, speed, and proliferation countermeasures
after WW I.
He gave a cost context of the project in comparison to the cost of
insurance in the private sector. The cost of insurance is over $300
billion/year, while the SDI work is currently costing $1.5 billion.
Major Worden admitted that President Reagan caught everyone off guard
with his speech about SDI two years ago.
Dr. Garwin spoke next. He recommended that we think carefully about
just what are the goals, costs, and likely Soviet Response to SDI. He
said that the Scowcroft Commission reported that U.S. security could be
maintained without SDI. His opinion is that while SDI has been proposed
to replace deterrence, it is really simply another form of deterrence.
He is concerned with the layered approach of the system in that there is
catastrophic failure if one layer does not do its job. For example, the
design of each layer assumes that the prior one does its job in reducing
the number of incoming objects.
Examples of potential areas of failure were given: space mines could
easily knock out any space segment units and midcourse intercept could
be overwhelmed by large numbers of decoys. Dr. Garwin feels that the
systems needed for SDI can not be built under the ABM treaty.
There has been a progression of goals, and in effect "replace deterrence"
has become "strengthen deterrence".
He closed by describing his view of a viable strategic balance, which
would be to limit each side to 1000 warheads, deployed on small
missiles, small subs, and cruise missiles, with no counterforce threats
against strategic targets. Preserve the ABM treaty. BMD research may
be continued, in order to confirm that there is no threat to the system.
Dr. Lipton was the next speaker. He joined the technical panel of SDIO
last summer. (An interesting point is that Dr. Lipton is a former
student of Dr. Parnas.) His major focus was in the importance of the
de-centralization of software. The Fletcher panel design was centralized,
with software in charge of everything. "Is it possible to build a system
without these problems?" Discussion of feasibility must encompass all
design possibilities, and Lipton stressed the merits of a decentralized
design.
He led into this by an analogy with the banking industry. Banking works
because it is a large collection of loosely organized components.
In the SDI example, he referred to large numbers of satellite groups
handling independent battle management functions. Fault tolerance would
provide reliability, like the concept of the strategic weapons triad.
He argued that these separate groups would be testable, by putting a few
into orbit and shooting missiles at them.
The false-alarm problem could be controlled by activating different
numbers of systems. The Fletcher panel had raised coordination problems
in connection with the goal of conserving "bullets"; Dr. Lipton's studies
indicated that the shot overhead of low coordination is not that high.
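The fault-tolerance argument rests on the reliability of many independent units, which a small binomial calculation illustrates; the station count and the 90% per-unit reliability below are invented for illustration.

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(at least k of n independent units work), each with reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A single station works 90% of the time, but the chance that at least
# 80 of 100 independent stations survive is far closer to certainty.
print(prob_at_least(80, 100, 0.9) > 0.99)  # → True
```

The calculation, of course, assumes independent failures, which is exactly the assumption Dr. Parnas disputes later in the debate.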
Dr. Parnas spoke last. He made good points in his speech, but became
quite caustic in his remarks about the SDIO
members. He said that (loosely quoted) "I used to feel that arms
control people are guilty of wishful thinking, but I have now seen a
whole new standard." His major complaint against SDI is that SDI forces
us to trust the system; if SDI need not be perfect, it must at least be
trustworthy, and he feels that this is not possible.
Conditions for validation include: mathematical analysis, exhaustive
case analysis, or prolonged, realistic testing.
Even after one or more of these conditions have been met, the system
must still be operated under controlled conditions.
The validation of software is inherently different from standard
engineering problems such as bridge design. Software is different in
that it is made up of discrete, rather than continuous, functions.
Thus design principles such as building for twice the weight do not
truly apply; instead, the number of discrete cases must be
examined, along with thorough testing. Even after a number of years of
use, bugs may still be found. True testing would require thousands of years.
For most software, we can allow unreliability, because we do not have to
trust it fully. For SDI, we cannot.
He doesn't believe that de-centralization provides added trustworthiness
to the system. He stated that he never took the Fletcher design
seriously in the first place, feeling that it was no more than a rough
sizing of the problem. There are a series of myths around
de-centralization.
Dr. Parnas' final point is that SDI is not a limit of computers, but of
human beings.
The rebuttals were then held. Major Worden questioned the meaning of
deterrence and then mentioned some possible alternatives to SDI:
automatic launch under attack, preventive attacks, and bombs under U.S.
cities. He indicated that Dr. Garwin had shown only that he could design
a system that SDIO wouldn't buy. His final comment re-iterated the
linkage of SDI to the arms control process.
Dr. Garwin (and the others) kept mentioning the Scowcroft report which
produced possible defensive measures other than SDI. Dr. Garwin pointed
out that Dr. Lipton had only found that the system proposed by Fletcher
might not work, but that Lipton believed others might. In any case, so
long as Soviets can deliver by other means (cruise missiles), we will
continue to need deterrence.
Dr. Lipton restated his belief in the need for independent
systems. He recognized that nothing is perfect, that even computers
are not reliable, but they are used on a daily basis. The use of
independent battle stations would stress the sensors, but he argued
that teraflops would alleviate the need for independent views.
Dr. Parnas again made a couple of inappropriate shots..."I've
been to a lot of Mickey Mouse meetings, but the ones sponsored by SDI
had the biggest ears and biggest nose I've ever seen." He thinks that
the idea of separate systems does not remove the size or complexity of
what is needed; dividing 10 million lines of code into small modules of
1000 lines still does not ensure error-free code. People do not write
independent code.
Questions raised from the floor asked about different types of lasers,
the time to phase-in to SDI, and about the non-ICBM threat. Worden
replied that cruise missiles are not strategic weapons because of their
flight time, and that smuggling bombs into the US would not be a realistic
approach for a Soviet leader to take.
The speakers then made closing comments:
Dr. Garwin said that we currently have a real opportunity for arms
reduction. This would be much more survivable than continued escalation
and research into defensive weaponry. He feels that both sides should
abandon defense efforts. Control of nuclear proliferation is essential.
Major Worden agreed with these points of Dr. Garwin, but said that it is
necessary and vital to carry forward a defensive program within the ABM
treaty to provide a different kind of security.
Dr. Parnas said that in software, the engineering term of "tolerance"
depends on continuity. "Almost right" does not make sense in the
context of SDI. He fears espionage that would result in someone getting
a copy of the software. Reasons for not going ahead with SDI anyway
include the lost opportunity for other projects, low quality of results,
and weakening of the strategic position.
Dr. Lipton said that if the independent segments of the SDI system do
not interact, the code is not vulnerable. He pointed out that there
are simple systems, such as elevators, that we do trust.
Dr. Goldberger then made some closing comments. He said that strategic
defense and arms control must be approached seriously. The laws of
physics are immune to political views and we are currently at a critical
political point. A decision to push forward defensively without a
reduction in offense would be a mistake. SDI has been proposed as part
of current moves toward lowering threat of destruction, yet it is
difficult, with verification problems, and major risks. He hopes that
the human spirit will prevail in the decisions that must be made.
In summary, the debate was quite interesting, although inconclusive if
judged in a strict manner. We were most surprised that software
technical details were hardly mentioned, and that political and
non-computer-technology issues were the focus of the discussion. Dr. Parnas
and Dr. Lipton made several comments against each other, which detracted
from the technical discussion. It didn't appear that Dr. Lipton was
overly familiar with the SDI problems; he continually talked in
generalities, with few facts to back up his statements. Dr.
Garwin and Major Worden were much more prepared in their talks and
didn't take any cheap shots with which to score points with the
audience.
The comments above are strictly our personal opinions and not
representative of any organization.
Jeff Risberg (jrisberg@aids-unix)
Susan Rosenbaum (susan@aids-unix)
------------------------------
End of AIList Digest
********************
∂12-Jan-86 0022 LAWS@SRI-AI.ARPA AIList Digest V4 #3
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Jan 86 00:22:44 PST
Date: Sat 11 Jan 1986 22:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #3
To: AIList@SRI-AI
AIList Digest Sunday, 12 Jan 1986 Volume 4 : Issue 3
Today's Topics:
Bindings - AI-Related Lists,
Definition - Paradigm,
Logic - New CSLI Reports,
Reviews - Spang Robinson Report 2/1 &
Rational Agency Seminars (CSLI)
----------------------------------------------------------------------
Date: Fri 10 Jan 86 12:18:09-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: AI-Related Lists
[...]
To add to your list of AIList-related lists, Info-1100@SUMEX and
Bug-1100@SUMEX are DL's concerning the Xerox 1100 series lisp machines
and Interlisp, and Info-TI-Explorer@SUMEX and Bug-TI-Explorer@SUMEX are
DL's concerning the TI Explorers and associated software.
--Christopher
------------------------------
Date: Wed, 8 Jan 86 16:38:34 EST
From: Bruce Nevin <bnevin@bbncch.ARPA>
Subject: paradigm
The term paradigm was specialized in philosophy of science by Thomas Kuhn in
his 1962 book _The Structure of Scientific Revolutions_ and subsequent
works. I would question whether AI is a mature enough
field to have a paradigm in the sense that Kuhn intends for a mature science.
Instead, there appears to be a fair selection of more or less divergent
examples/models/agendas for each area of investigation. Many of these are
associated with the more prominent investigators in AI.
Bruce Nevin
bn@bbncch.arpa
BBN Communications
33 Moulton Street
Cambridge, MA 02238
(617) 497-3992
[Disclaimer: my opinions may reflect those of many, but no one else
need take responsibility for them, including my employer.]
------------------------------
Date: Wed, 8 Jan 1986 19:37 EST
From: MINSKY%OZ.AI.MIT.EDU@MC.LCS.MIT.EDU
Subject: AIList Digest V4 #2
about "paradigm" -- the dictionary is out of date because this word
now almost universally refers to the notion in Thomas Kuhn's
"Structure of Scientific Revolutions." It seems to mean powerful and
influential idea, or something.
------------------------------
Date: Thu, 9 Jan 86 11:04:40 GMT
From: Mmaccall%cs.ucl.ac.uk@cs.ucl.ac.uk
Subject: Re: AI Paradigm
An approximate meaning for the word `paradigm' is `template'.
Gordon Joly
gcj%qmc-ori@ucl-cs.arpa
------------------------------
Date: Fri, 10 Jan 86 16:53:46 GMT
From: Mmaccall%cs.ucl.ac.uk@cs.ucl.ac.uk
Subject: Re: AI Paradigm
As an afterthought. The place where I first saw the term "paradigm"
was in "Games People Play" by Eric Berne. Here, he has a model of the
(transactional) relationship between two people, with three states of
parent-adult-child. They are then put side by side with the parent above
adult and the adult above child, each being represented by a circle. Lines
are drawn to indicate which relationships are active in a given "game".
The Chambers 20th Century Dictionary, as well as the Random House, gives
the notion of "side by side". I hope this has a meaning for the "AI Paradigm"!
Gordon Joly,
gcj%qmc-ori@ucl-cs.arpa
------------------------------
Date: Thu 9 Jan 86 12:09:33-PST
From: Wilkins <WILKINS@SRI-AI.ARPA>
Subject: Paradigm
Your dictionary is correct about "paradigm". This word has been used
extensively in the AI literature in an incorrect way. People incorrectly
use it to mean "methodology" or "school of thought" or some such.
David
------------------------------
Date: Thu 9 Jan 86 15:29:34-PST
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: ai paradigm?
If you have a paradigm, there's always a chance that you'll get a
paradigm shift, in which case people will fund your research for the next
20 years. On the other hand, if you say your example shifts, they'll think
you're fudging your data.
Mike
------------------------------
Date: Wed 8 Jan 86 16:53:32-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: New CSLI Reports on Logic
NEW CSLI REPORTS
Report No. CSLI-85-41, ``Possible-world Semantics for Autoepistemic
Logic'' by Robert C. Moore and Report No. CSLI-85-42, ``Deduction
with Many-Sorted Rewrite'' by Jose Meseguer and Joseph A. Goguen, have
just been published. These reports may be obtained by writing to
Trudy Vizmanos, CSLI, Ventura Hall, Stanford, CA 94305 or
Trudy@SU-CSLI.
------------------------------
Date: Fri, 10 Jan 86 17:28:42 cst
From: Laurence Leff <leff%smu.csnet@CSNET-RELAY.ARPA>
Subject: Spang Robinson Report, Volume 2 No 1
Summary of Spang Robinson Report, Volume 2 Number 1, January 1986
featuring AI Hardware
Vendors state that the biggest problem in marketing AI hardware
is educating both internal people and the marketplace.
An interview with a gentleman who evaluated AI type machines for use
in developing software for silicon compilation research at Philips
Labs.
Discussion of various ways to enhance IBM PC's for AI (or other
development needs) and the use of the Macintosh and Commodore's Amiga
for AI research.
C. J. Petrie of MCC described a system to parse text from a "how to"
book into rules.
Interview with Dag Tellefsen of Glenwood Management, a venture
capitalist. They have funded Natural Language Products and AION.
Kurzweil Applied Intelligence, that develops voice recognition hardware,
has signed a joint marketing agreement with FutureNet which supplies
electronic engineering work stations.
Reasoning Systems has signed an agreement with Lockheed Missiles and
Space Corporation to develop knowledge based systems for
communications. (Reasoning Systems is involved with the commercialization
of some of the techniques from the University of Southern California work
in automating software development. See the IEEE Transactions on
Software Engineering November 1985 Special Issue on AI and Software
Engineering for more info.)
"Logicware Inc. and Releations Ltd., both in Canada, have signed a
long-term agreement to design an Artificial Intelligence language
leading to a computer system which will emulate the thinking process
of the human brain. It will be the first AI language designed for
vector-processing by a super computer."
Composition Systems has released two Artificial Intelligence kits that
link VAX Lisp with such DEC products as FMS, RDB, GKS, and DECNET.
Review of the IEEE Computer Society Second Conference on Artificial
Intelligence.
------------------------------
Date: Wed 8 Jan 86 16:53:32-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Review - Rational Agency Seminars (CSLI)
[Excerpted from the CSLI Newsletter by Laws@SRI-AI.]
RATIONAL AGENCY GROUP
Summary of Fall 1985 Work
The fall-quarter meetings of the Rational Agency Group (alias
RatAg) have focused on the question: what must the architecture of a
rational agent with serious resource limitations look like? Our
attempts to get at answers to this question have been of two kinds.
One approach has been to consider problems in providing a coherent
account of human rationality. Specifically, we have discussed a
number of philosophically motivated puzzles, such as the case of the
Double Pinball Machine, and the problem of the Strategic Bomber,
presented in a series of papers by Michael Bratman. The second
approach we have taken has been to do so-called robot psychology.
Here, we have examined existing AI planning systems, such as the PRS
system of Mike Georgeff and Amy Lansky, in an attempt to determine
whether, and, if so, how these systems embody principles of rationality.
Both approaches have led to the consideration of similar issues:
1) What primitive components must there be in an account of
rationality? From a philosophical perspective, this is
equivalent to asking what the set of primitive mental states
must be to describe human rationality; from an AI perspective,
this is equivalent to asking what the set of primitive mental
operators must be to build an artificial agent who behaves
rationally. We have agreed that the philosopher's traditional
2-parameter model, containing just ``beliefs'' and ``desires'',
is insufficient; we have further agreed that adding just a third
parameter, say ``intentions'', is still not enough. We are
still considering whether a 4-parameter model, which includes a
parameter we have sometimes called ``operant desires'', is
sufficient. These so-called operant desires are medial between
intentions and desires in that, like the former (but not the
latter) they control behavior in a rational agent, but like the
latter (and not the former) they need not be mutually consistent
to satisfy the demands of rationality. The term ``goal'', we
discovered in passing, has been used at times to mean
intentions, at times desires, at times operant desires, and at
times other things; we have consequently banished it from our
collective lexicon.
2) What are ``plans'', and how do they fit into a theory of
rationality? Can they be reduced to some configuration of
other, primitive mental states, or must they also be introduced
as a primitive?
3) What are the combinatorial properties of these primitive
components within a theory of rationality, i.e., how are they
interrelated and how do they affect or control action? We have
considered, e.g., whether a rational agent can intend something
without believing it will happen, or not intend something she
believes will inevitably happen. One set of answers to these
questions that we have considered has come from the theory of
plans and action being developed by Michael Bratman. Another
set has come from work that Phil Cohen has been doing with
Hector Levesque, which involves explaining speech acts as a
consequence of rationality. These two theories diverge on many
points: Cohen and Levesque, for instance, are committed to the
view that if a rational agent believes something to be inevitable,
he also intends it; Bratman takes the opposite view. In recent
meetings, interesting questions have arisen about whether there
can be beliefs about the future that are `not' beliefs that
something will inevitably happen, and, if so, whether
concomitant intentions are guaranteed in a rational agent.
The RatAg group intends to begin the new quarter by considering how
Cohen and Levesque's theory can handle the philosophical problems
discussed in Bratman's work. We will also be discussing the work of
Hector-Neri Castaneda in part to explore the utility of Castaneda's
distinction between propositions and practitions for our work on
intention, belief and practical rationality. Professor Castaneda will
be giving a CSLI colloquium in the spring.
RatAg participants this quarter have been Michael Bratman (project
leader), Phil Cohen, Todd Davies, Mike Georgeff, David Israel, Kurt
Konolige, Amy Lansky, and Martha Pollack. --Martha Pollack
------------------------------
End of AIList Digest
********************
∂12-Jan-86 0225 LAWS@SRI-AI.ARPA AIList Digest V4 #4
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Jan 86 02:25:37 PST
Date: Sat 11 Jan 1986 22:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #4
To: AIList@SRI-AI
AIList Digest Sunday, 12 Jan 1986 Volume 4 : Issue 4
Today's Topics:
Seminars - Organization of Semantic Knowledge Systems (MIT) &
LISP architectures (NASA Ames) &
Computational Networks in Silicon and Biology (PARC),
Course - Values, Technology, and Society (SU) &
Highly Parallel Architectures for AI (UPenn),
Conference - 3rd Symposium on Logic Programming
----------------------------------------------------------------------
Date: Sun, 5 Jan 86 03:44:31 EST
From: "Steven A. Swernofsky" <SASW@MC.LCS.MIT.EDU>
Subject: Seminar - Categorical Organization of Semantic Knowledge Systems (MIT)
Monday, December 2, 4:00-6:00pm, Room: E25-117
HARVARD UNIVERSITY-MIT DIVISION OF HEALTH SCIENCES AND TECHNOLOGY
"The Categorical Organization of Semantic Knowledge Systems"
Elizabeth K. Warrington
Professor of Neurology
The National Hospitals for Nervous Diseases
Queen Square, London
Patients with cerebral lesions provide an important source of evidence
about the organization of semantic systems. Striking instances of the
selective preservation and selective impairment in the comprehension
of particular categories of verbal and visual stimuli have long been
reported in the neurological literature and more recently such
dissociations have been investigated and assessed using experimental
methods. The issue of modality specificity will be discussed and it
will be argued that there are at least partially independent systems
that subserve verbal and visual semantics. Evidence for both broad
category specific impairments, such as knowledge of concrete and
abstract concepts, and more fine grain category impairments such as
knowledge of animate and inanimate objects will be reviewed. It will
be argued that there are modality specific semantic systems and that
these are categorised in their organization.
Host: Lucia Vaina
------------------------------
Date: Fri, 10 Jan 86 07:58:25 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - LISP architectures (NASA Ames)
National Aeronautics and Space Administration
Ames Research Center
SEMINAR ANNOUNCEMENT
Computational Research Branch
SPEAKER: Raymond S. Lim
Computational Research Branch
TOPIC: LISP Machine Architectures of MIT CADR, Symbolics 3600, & TI Explorer
ABSTRACT: Common LISP is becoming a standard, and Multilisp is being
considered for parallel LISP processing. A modern LISP machine is a
conventional virtual-memory, von Neumann machine with added hardware to
support runtime data-type checking and incremental garbage collection. This
presentation will discuss the architectural issues of LISP machines, starting
from the MIT CADR.
DATE: 23 Jan 1986 TIME: 9:30-11:00 BLDG: 233 ROOM: 172
POINT OF CONTACT: Becky Getz PHONE NUMBER: (415)-694-5197
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. See map
below. Do not use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
Date: 10 Jan 86 14:27:59 PST (Friday)
From: Kluger.osbunorth@Xerox.ARPA
Reply-to: Kluger.osbunorth@Xerox.ARPA
Subject: Seminar - Computational Networks in Silicon and Biology (PARC)
Xerox Palo Alto Research Center Forum
Thursday, January 16, 1986
4:00 pm, PARC Auditorium
J.J. Hopfield
Divisions of Chemistry and Biology
Caltech
and
AT&T Bell Laboratories
will speak on
Computational Networks in Silicon and Biology
The brain as a piece of computer hardware violates most of the sensible
design criteria for good computers, yet manages to be extremely
effective. We investigate the kinds of behavior that circuits built in
a neuronal fashion--emphasizing large connectivity, large size, analog
response, and self-timed operation--naturally exhibit.
The collective properties of such systems lead naturally to the
behaviors needed for associative memory, or pattern recognition, error
decoding, visual information processing and many complex optimization
problems.
At the same time, the circuits are relatively robust (fail soft), like
their biological relatives. Such circuits may be of use as high density
associative memories and as signal processors. The effectiveness of
biological computation may in part result from the use of the collective
decision capabilities of neural networks.
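[The associative-memory behavior described in the abstract can be sketched
in a few lines. The Python fragment below is illustrative only, not from the
talk; the pattern, network size, and function names are invented for the
example. It stores a pattern with a Hebbian outer-product rule and recovers
it from a corrupted probe by repeated threshold updates, the collective
"decision" the abstract refers to.]

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product storage rule; self-connections zeroed."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, probe, sweeps=10):
    """Asynchronous +1/-1 threshold updates until the state settles."""
    s = probe.copy()
    for _ in range(sweeps):
        prev = s.copy()
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):
            break
    return s

stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]            # corrupt one bit
result = recall(W, noisy)       # converges back to the stored pattern
```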
This Forum is OPEN. All are invited.
Host: Larry Kluger (Information Systems Division, 496-6575)
Refreshments will be served at 3:45 pm
Visitors: Welcome! The PARC Auditorium is located at 3333 Coyote Hill
Road. The street is between Page Mill Road (west of Foothill) and
Hillview Avenue, in the Stanford Research Park, Palo Alto. Enter the
building through the *auditorium's* entrance, at the upper level of the
building.
------------------------------
Date: 03 Jan 86 1404 PST
From: John McCarthy <JMC@SU-AI.ARPA>
Subject: Course - Values, Technology, and Society (SU)
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
The following course will be given by John McCarthy in Winter 1986 in
the Values, Technology and Society program. As will be noticed from
the description, it will emphasize opportunities rather than problems.
It will meet 14:15-15:30 Tuesdays and Thursdays
in room 202 History corner (bldg 200).
Technological Possibilities for enhancing man
This course surveys the technological possibilities for increasing
human capability and real wealth. It is oriented toward what people will
want rather than toward what we might think is good for them. Some of the
improvements discussed are in the direction of (1) making housework
trivial (2) making government responsive (3) increasing the ability of one
person to build an object like a car, airplane or house to suit him
without organizing others (4) allowing groups to live as they prefer less
hindered by general social laws and customs. We will emphasize computer
and information technology and ask what will be genuinely useful about
computers in the home and not just faddish or flashy. To what extent are
futurists and science fiction writers given to systematic error? Can we
envisage advances as important as electricity, telephones, running water,
inside toilets?
The second topic concerns the social factors that determine the
rate of scientific and technological progress. Why was scientific
advance a rare event until Galileo? Why didn't non-Western cultures
break through into the era of organized scientific and technological
progress and why did it take Western culture so long? Why isn't the
rate of progress faster today? As examples, we shall inquire into
the obstacles that made cellular telephone systems and electronic
funds transfer take so long.
------------------------------
Date: Wed, 8 Jan 86 16:33 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Course - Highly Parallel Architectures for AI (UPenn)
From: Lokendra Shastri <Shastri@UPenn> on Wed 8 Jan 1986 at 15:44, 45 lines
COURSE ANNOUNCEMENT
CIS704 Highly parallel architectures for Artificial Intelligence
PREREQUISITES: This is an advanced course in artificial intelligence. It
will be assumed that the participants are familiar with basic issues in AI.
DESCRIPTION: There is a growing interest in highly interconnected networks
of very simple processing elements. These networks are referred to as
Connectionist Networks and are playing an increasingly important role in
artificial intelligence and cognitive science.
This course is intended to discuss the motivation behind pursuing
"connectionism" and to survey the state of current research in this area. We
will review connectionist models of language understanding, parsing,
knowledge representation, limited inference, and learning, and compare the
connectionist approach to traditional AI approaches.
TEXTS: None. A reading list will be provided.
ASSIGNMENTS: Students will be expected to prepare a presentation of (or lead
a discussion on) a paper on the reading list. There will be two or three
assignments and a term paper.
PLACE: TB 309. M, W 4:30-6:00
------------------------------
Date: Mon, 6 Jan 86 20:33:58 MST
From: keller@utah-cs.arpa (Bob Keller)
Subject: Conference - 3rd Symposium on Logic Programming
[Forwarded from the Prolog Digest by Laws@SRI-AI.]
'86 SLP
Call for Papers
Third Symposium on Logic Programming
Sponsored by the IEEE Computer Society
September 21-25, 1986
Westin Hotel Utah
Salt Lake City, UT
The conference solicits papers on all areas of logic programming, including,
but not confined to:
Applications of logic programming
Computer architectures for logic programming
Databases and logic programming
Logic programming and other language forms
New language features
Logic programming systems and implementation
Parallel logic programming models
Performance
Theory
Please submit full papers, indicating accomplishments of substance and novelty,
and including appropriate citations of related work. The suggested page limit
is 25 double-spaced pages. Send eight copies of your manuscript no later than
15 March 1986 to:
Robert M. Keller
SLP '86 Program Chairperson
Department of Computer Science
University of Utah
Salt Lake City, UT 84112
Acceptances will be mailed by 30 April 1986. Camera-ready copy will be due by
30 June 1986.
Conference Chairperson Exhibits Chairperson
Gary Lindstrom, University of Utah Ross Overbeek, Argonne National Lab.
Tutorials Chairperson Local Arrangements Chairperson
George Luger, University of New Mexico Thomas C. Henderson, University of Utah
Program Committee
Francois Bancilhon, MCC William Kornfeld, Quintus Systems
John Conery, University of Oregon Gary Lindstrom, University of Utah
Al Despain, U.C. Berkeley George Luger, University of New Mexico
Herve Gallaire, ECRC, Munich Rikio Onai, ICOT/NTT, Tokyo
Seif Haridi, SICS, Sweden Ross Overbeek, Argonne National Lab.
Lynette Hirschman, SDC, Paoli Mark Stickel, SRI International
Peter Kogge, IBM, Owego Sten Ake Tarnlund, Uppsala University
------------------------------
End of AIList Digest
********************
∂12-Jan-86 0425 LAWS@SRI-AI.ARPA AIList Digest V4 #5
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Jan 86 04:25:29 PST
Date: Sat 11 Jan 1986 22:36-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #5
To: AIList@SRI-AI
AIList Digest Sunday, 12 Jan 1986 Volume 4 : Issue 5
Today's Topics:
Conferences - Intelligent Systems Symposium &
Workshop on AI for Generic Avionics &
Uncertainty and AI Workshop &
User-System Interfaces Workshop &
IFIP Conference on Knowledge and Data &
2nd Expert Systems in Government, Re-Revised Version
----------------------------------------------------------------------
Date: 6 January 1986 1412-EST
From: Peter Andrews@A.CS.CMU.EDU
Subject: Conference - Intelligent Systems Symposium
An International Symposium on Methodologies for Intelligent
Systems (ISMIS'86) will be held October 23-25, 1986 in Knoxville,
Tennessee. Papers are solicited in the following areas:
(1) Expert Systems
(2) Knowledge Representation
(3) Learning and Adaptive Systems
(4) Intelligent Databases
(5) Approximate Reasoning
(6) Logics for Artificial Intelligence
Papers will be due on March 1, 1986, and papers which are
accepted will be published in the proceedings of the symposium. A
copy of the Call for Papers is posted on my office door (WEH 7216),
and you can get a personal copy by sending a message to Zbigniew Ras
(ras%tennessee.csnet@CSNET-RELAY.ARPA).
------------------------------
Date: Tue, 7 Jan 86 17:52:17 est
From: Scott C McKay <scm%gitpyr%gatech.csnet@CSNET-RELAY.ARPA>
Subject: Conference - Workshop on AI for Generic Avionics
AVIONICS LABORATORY WORKSHOP
ON
ARTIFICIAL INTELLIGENCE FOR GENERIC AVIONICS
Georgia Tech Research Institute
Atlanta, Georgia
March 26-28, 1986
The Avionics Laboratory, located within the Air Force Wright
Aeronautical Laboratories at Wright Patterson Air Force Base, is
the primary organization responsible for planning and executing
the Air Force basic research, exploratory and advanced
development programs for aerospace avionics. A current major
focus of that program is to explore the applicability of
artificial intelligence to many functional avionics domains. The
results have been very encouraging and we are convinced that AI
will have significant future utility in aerospace vehicles.
In order to plan for orderly, timely and expanded developments in
AI, the Avionics Laboratory will be conducting a Workshop on
Artificial Intelligence for Generic Avionics. The overall
objective is to identify the "key basic research issues" that
constrain the future expanded applicability of artificial
intelligence technology to avionics applications and to outline
what research should be pursued to remove the constraints. The
workshop will be held at Georgia Tech Research Institute, Georgia
Institute of Technology, Atlanta, Georgia on March 26-28, 1986.
The workshop is planned to be an intensive 3-day work session
involving 35-40 (maximum) of the best researchers in the field.
Attendance will be by invitation only.
If you feel you could contribute significantly to the objectives
of the workshop and are interested in attending, please contact
either of the following by 20 Jan 86: Lawrence E. Porter (513)
255-4415, AFWAL/GLXRA, Wright Patterson AFB, OH 45433-6543 or
Michael Noviskey (513) 255-2713, same address.
The mission of the Avionics Laboratory is broad and includes the
primary areas of navigation, surveillance, reconnaissance,
electromagnetic warfare, fire control, weapon delivery,
communications, system architecture, information and signal
processing and control, subsystem integration and supporting
electronics, and software and electromagnetic device research and
development. This mission spans the spectrum from basic research
to advanced development. The emphasis of this workshop is being
placed on the former. I encourage you to plan to attend the
workshop and participate in a stimulating AI basic research
exchange with your peers. I can assure you that the results will
have a direct impact on future investment in AI basic research.
Lawrence E. Porter
Chairperson
Artificial Intelligence Planning
Avionics Laboratory
------------------------------
Date: Tue, 7 Jan 86 18:29:34 pst
From: gluck@SU-PSYCH (Mark Gluck)
Subject: Conference - Uncertainty and AI Workshop
CALL FOR PARTICIPATION
Second Workshop on: "Uncertainty in Artificial Intelligence"
Philadelphia, PA. August 9-11, 1986 (preceding AAAI conf.)
Sponsored by: AAAI and RCA
This workshop is a follow-up to the successful workshop in L.A.,
August 1985. Its subject is reasoning under uncertainty and
representing uncertain information. The emphasis this year is on real
applications, although papers on theory are also welcome. The
workshop provides an opportunity for those interested in uncertainty
in AI to present their ideas and participate in the discussions. Also
panel discussions will provide a lively cross-section of views.
Papers are invited on the following topics:
*Applications--Descriptions of novel approaches; interesting results;
important implementation difficulties; experimental comparison of
alternatives etc.
*Comparison and Evaluation of different uncertainty formalisms.
*Induction (Theory discovery) under uncertainty.
*Alternative uncertainty approaches.
*Relationship between uncertainty and logic.
*Uncertainty about uncertainty (Higher order approaches).
*Other uncertainty in AI issues.
Preference will be given to papers that have demonstrated their approach
in real applications. Some papers may be accepted for publication but not
presentation (except at a poster session).
Four copies of the paper (or an extended abstract) should be sent to the
arrangements chairman before 23 May 1986. Acceptances will be sent by
20 June, and final (camera-ready) papers must be received by 11 July.
Proceedings will be available at the workshop.
General Chair: Program Chair: Arrangements Chair:
John Lemmer Peter Cheeseman Lawrence Carnuccio
KSC Inc. NASA-Ames Research Center RCA-Adv. Tech. Labs.
255 N. Washington St. Mail Stop 244-7 Mooretown Corp. Cntr.
Rome, NY 13440 Moffett Field, CA 94035 Route 38, Mooretown,
(315)336-0500 (415)694-6526 NJ 08057
(609)866-6428
Program Committee:
P. Cheeseman, J. Lemmer, T. Levitt, J. Pearl, M. Yousry, L. Zadeh.
------------------------------
Date: Thu 9 Jan 86 15:12:41-CST
From: CMP.LADAI@R20.UTEXAS.EDU
Subject: Conference - User-System Interfaces Workshop
USER-SYSTEM INTERFACES WORKSHOP
When: January 31 - February 1, 1986
Where: Austin South Plaza Hotel, Austin, Texas
I-35 and Woodward
What: A multidisciplinary conference addressing the problem of implementing
effective communication between human and machine. The contributions
of various fields such as Artificial Intelligence and Cognitive
Psychology are considered.
Participants:
Brooks AFB SAM
Burroughs
IBM
Lockheed
MCC
Rice University
Southwest Research Institute
Texas A&M University
Texas Instruments
University of Texas
Registration: Mail to:
Before Jan. 24 - $30.00 M. Sury
Students - $15.00 Dept. T2-32, Bldg. 30E
After Jan. 24 - $40.00 Lockheed Austin Division
Students - $20.00 P.O. Box 17100
Includes lunch on Jan. 31. Austin, TX 78760
For additional info:
Ron Grissell - [512]448-5154
Manda Sury - [512]448-5314
Diana Webster - [512]448-9186
------------------------------
Date: Sun, 22 Dec 85 23:12:53 EST
From: "John F. Sowa" <sowa.yktvmv%ibm-sj.csnet@CSNET-SH.ARPA>
Subject: Conference - IFIP Conference on Knowledge and Data
IFIP
INTERNATIONAL FEDERATION FOR INFORMATION PROCESSING
ANNOUNCEMENT
TC2 WORKING CONFERENCE organized by Working Group 2.6
Knowledge and Data (DS-2)
November 3-7, 1986 in Albufeira (Algarve), Portugal
Scope: Questions of meaning are more important for the design
of a knowledge base than methods of encoding data in bits and bytes.
As database designers add more semantic information to their systems,
their conceptual schemata begin to look like AI systems of
knowledge representation. In recognizing this convergence on issues of
semantics, IFIP Working Group 2.6 is organizing a working conference
on Knowledge and Data. It will address the issues and problems
of knowledge representation from an interdisciplinary point of view.
Topics:
Design of a conceptual schema
Knowledge and data modeling
Database semantics
Natural language semantics
Expert database systems
Logic, databases, and AI
Methods of knowledge engineering
Tools and aids for knowledge acquisition
Invited speakers:
Herve Gallaire, Germany
Robert Meersman, Belgium
J. Alan Robinson, USA
Roger Schank, USA
Dana Scott, USA
An IFIP working conference is oriented towards detailed discussion of
the topics presented. Participation is by invitation, with optional
contribution of a paper that is refereed by the program committee.
Anyone who is interested in participating should send an abstract
of current research or a prospective paper to either of the
program cochairmen. Abstracts are due March 14, 1986. Complete
papers are due May 16, 1986.
General Chairman: Amilcar Sernadas, Portugal
Program cochairmen:
John F. Sowa Robert Meersman
IBM Systems Research Institute L.U.C. -- Dept. WNIF
500 Columbus Avenue Universitaire Campus
Thornwood, NY 10594 B-3610 Diepenbeek
U.S.A. Belgium
CSNET: sowa.yktvmt@ibm
------------------------------
Date: 30 Dec 85 16:17:11 EST (Mon)
From: Duke Briscoe <duke@mitre.ARPA>
Subject: Conference - 2nd Expert Systems in Government, Re-Revised Version
This is yet another revision of the notice sent out several weeks ago,
and is a revision of the revision sent out earlier today. I am sorry
for the repetition, but there have been several foul-ups in the
information being fed to me for the production of this announcement.
CALL FOR PAPERS
THE SECOND ANNUAL CONFERENCE
ON
EXPERT SYSTEMS IN GOVERNMENT
Tyson's Westpark Hotel, McLean, VA in suburban Washington, D.C.
October 20 - 24, 1986
The conference is sponsored by the IEEE Computer Society and
the Mitre Corporation in cooperation with AIAA/NCS.
The objective of the conference is to explore the following:
- knowledge based applications and supporting technologies
- implementation and impact of emerging application areas
- future trends in available systems and required research
Classified and unclassified papers which relate to the use of
knowledge based systems are solicited. The topics of interest
include, but are not limited to, the following applications:
Professional: engineering, finance, law, management, medicine
Office Automation: text understanding, intelligent DBMS, intelli-
gent systems
Command & Control: intelligence analysis, planning, targeting,
communications, air traffic control, battle management
Exploration: outer space, prospecting, archaeology
Weapon Systems: adaptive control, electronic warfare, Star Wars,
target identification
Equipment: CAD/CAM, design monitoring, maintenance, repair
Software: automatic programming, maintenance, verification and
validation
Architecture: distributed knowledge based systems, parallel com-
puting
Project Management: planning, scheduling, control
Education: concept formation, tutoring, testing, diagnosis
Imagery: photo interpretation, mapping
Systems Engineering: requirements, preliminary design, critical
design, testing, quality assurance
Tools and Techniques: PROLOG, knowledge acquisition and represen-
tation, uncertainty management
Plant and Factory Automation
Space Station Systems
Human-Machine Interface
Speech and Natural Language
The program will consist of submitted and invited papers, which
will provide an overview of selected areas. Contributed papers
should be consistent with the following outline:
1. Introduction- state clearly the purpose of the work
2. Description of the actual work- must be new and significant
3. Results- discuss their significance
4. References
Completed papers are to be no longer than 20 pages, including
graphics. For classified papers, please submit a one page un-
classified abstract. All classified papers must be releasable at
the Secret level or below, and must be pre-approved by the
author's cognizant security release authority. Papers to be
presented by non-US citizens must be cleared through proper
government to government channels. Four copies of the complete
paper are to be submitted to:
Dr. Kamal Karna, Conference Chairman
IEEE Computer Society
1730 Massachusetts Ave., NW
Washington, D.C. 20036-1903
Author's Schedule:
Four copies of manuscript May 1, 1986
Acceptance letter June 15, 1986
Camera-ready copy July 15, 1986
Conference Chairman:
Dr. Kamal Karna
Washington AI Center
Mitre Corporation
Program Committee:
Co-chairman: Classified
Mr. Richard Martin
Associate Director, Government Programs
Software Engineering Institute
Carnegie Mellon University
Co-chairman: Unclassified
Dr. Kamran Parsaye
President
Intelliware, Inc.
------------------------------
End of AIList Digest
********************
∂15-Jan-86 1300 LAWS@SRI-AI.ARPA AIList Digest V4 #6
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Jan 86 13:00:29 PST
Date: Wed 15 Jan 1986 10:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #6
To: AIList@SRI-AI
AIList Digest Wednesday, 15 Jan 1986 Volume 4 : Issue 6
Today's Topics:
Description - European Association for Theoretical Computer Science
----------------------------------------------------------------------
Date: 6 JAN 86 10:55-N
From: ROZENBER%HLERUL5.BITNET@WISCVM.WISC.EDU
Subject: Description - European Assoc. for Theoretical Computer Science
[Forwarded from the SRI bboard by Laws@SRI-AI.]
Dear colleague,
I am taking advantage of this excellent communication
medium, the "Theory Net", to send you information (actually
the information leaflet) about the EUROPEAN ASSOCIATION FOR
THEORETICAL COMPUTER SCIENCE (EATCS). Although our association
is based in Europe, its membership is "intercontinental" -
about 40% of our members come from outside Europe.
In our experience the only reason that a computer
scientist who is either actively engaged or interested in
theoretical computer science is not a member of EATCS is
that she/he does not know about our organisation - just
see how much we offer for so little!!! [...]
If you have any questions do not hesitate to contact
either myself (electronic address: ROZENBER@HLERUL5.BITNET)
or the secretary of the association Th. Ottmann (electronic
address: OTTMANN@GERMANY.CSNET).
I take this opportunity to wish you the very best
New Year.
G. Rozenberg
EATCS President
===============================================================================
EUROPEAN ASSOCIATION FOR THEORETICAL COMPUTER SCIENCE (EATCS)
COUNCIL OF EATCS
BOARD
President: G. Rozenberg, Leiden
Vice President: W. Brauer, Munich
Treasurer: J. Paredaens, Antwerp
Secretary: Th. Ottmann, Karlsruhe
Bulletin Editor: G. Rozenberg, Leiden
TCS Editor: M. Nivat, Paris
Past Presidents: M. Nivat, Paris (1972-1977)
M. Paterson, Warwick (1977-1979)
A. Salomaa, Turku (1979-1985)
FURTHER COUNCIL MEMBERS
G. Ausiello Rome
J. De Bakker Amsterdam
J. Diaz Barcelona
F. Gecseg Szeged
J. Gruska Bratislava
Z. Manna Rehovot & Stanford
H. Maurer Graz
Ch.H. Papadimitriou Athens & Stanford
A. Paz Haifa
D. Perrin Paris
E. Schmidt Aarhus
D. Wood Waterloo
EATCS
HISTORY AND ORGANISATION
EATCS is an international organisation founded in 1972. Its aim is to
facilitate the exchange of ideas and results among theoretical computer
scientists as well as to stimulate cooperation between the theoretical
and the practical community in computer science.
Its activities are coordinated by the Council of EATCS, out of which a
President, a Vice President, a Treasurer and a Secretary are elected.
Policy guidelines are determined by the Council and the General Assembly
of EATCS. This assembly is scheduled to take place during the annual
International Colloquium on Automata, Languages and Programming (ICALP),
the conference of EATCS.
MAJOR ACTIVITIES OF EATCS
- Organization of ICALP's
- Publication of the "Bulletin of the EATCS"
- Publication of the "EATCS Monographs in Theoretical Computer Science"
- Publication of the journal "Theoretical Computer Science"
- Other activities of EATCS include the sponsorship of various more
specialized meetings in theoretical computer science. Among such
meetings are: CAAP (Colloquium on Trees in Algebra and Programming),
TAPSOFT (Conference on Theory and Practice of Software Development),
STACS (Symposium on Theoretical Aspects of Computer Science),
Workshop on Graph Theoretic Concepts in Computer Science, European
Workshop on Applications and Theory of Petri Nets, Workshop on Graph
Grammars and their Applications in Computer Science.
BENEFITS
Benefits offered by EATCS include:
- Receiving the "Bulletin of the EATCS" (about 600 pages per year)
- Reduced registration fees at various conferences
- Reciprocity agreements with other organisations
- 25% discount in purchasing ICALP proceedings
- 25% discount in purchasing books from "EATCS Monographs on Theoretical
Computer Science"
- About 70% discount (roughly 1000 Dutch guilders) per annual
subscription to "Theoretical Computer Science".
(1) THE ICALP CONFERENCE
ICALP is an international conference covering all aspects of theoretical
computer science and now customarily taking place during the third week of
July.
Typical topics discussed during recent ICALP conferences are: computability,
automata theory, formal language theory, analysis of algorithms, computa-
tional complexity, mathematical aspects of programming language definition,
logic and semantics of programming languages, foundations of logic programming,
theorem proving, software specification, computational geometry, data types and
data structures, theory of data bases and knowledge based systems, cryptography,
VLSI structures, parallel and distributed computing, models of concurrency
and robotics.
Sites of ICALP meetings:
- Paris, France (1972) - Haifa, Israel (1981)
- Saarbrucken, Germany (1974) - Aarhus, Denmark (1982)
- Edinburgh, Great Britain (1976) - Barcelona, Spain (1983)
- Turku, Finland (1977) - Antwerp, Belgium (1984)
- Udine, Italy (1978) - Nafplion, Greece (1985)
- Graz, Austria (1979) - Rennes, France (1986)
- Noordwijkerhout, Holland (1980) - Karlsruhe, Germany (1987)
(2) THE BULLETIN OF THE EATCS
Three issues of the Bulletin are published annually, appearing in
February, June, and October. The Bulletin is a medium for
rapid publication and wide distribution of material such as:
- EATCS matters
- Information about the current ICALP
- Technical contributions
- Surveys and tutorials
- Reports on conferences
- Calendar of events
- Reports on computer science departments and institutes
- Listings of technical reports and publications
- Book reviews
- Open problems and solutions
- Abstracts of Ph.D. Theses
- Information on visitors at various institutions
- Entertaining contributions and pictures related to computer science.
Contributions to any of the above areas are solicited. All written
contributions should be sent to the Bulletin Editor:
Prof.dr. G. Rozenberg
Dept. of Mathematics and Computer Science
University of Leiden
P.O. Box 9512
2300 RA Leiden, The Netherlands
Deadlines for submissions to reach the Bulletin Editor are: January 15,
May 15 and September 15 for the February, June and October issue respec-
tively.
All pictures (preferably black and white), together with text describing
what they show, should be sent to the Picture Editor:
Dr. P. van Emde-Boas
University of Amsterdam
Roeterstraat 15
1018 WB Amsterdam, The Netherlands
Deadlines are 2 weeks before those for written contributions, indicated
above.
(3) EATCS MONOGRAPHS ON THEORETICAL COMPUTER SCIENCE
This is a series of monographs published by Springer-Verlag and launched
during ICALP 1984; within the first year six volumes appeared. The series
includes monographs as well as innovative textbooks in all areas of theo-
retical computer science, such as the areas listed above in connection
with the ICALP conference. The volumes are hard-cover and ordinarily
produced by typesetting; to keep prices attractive, other production
methods are also possible.
The editors of the series are W. Brauer (Munich), G. Rozenberg (Leiden),
and A. Salomaa (Turku). Potential authors should contact one of the editors.
The advisory board consists of G. Ausiello (Rome), S. Even (Haifa), M. Nivat
(Paris), C. Papadimitriou (Athens & Stanford), A. Rosenberg (Durham), and
D. Scott (Pittsburgh).
Updated information about the series can be obtained from the publisher,
Springer-Verlag.
(4) THEORETICAL COMPUTER SCIENCE
The aim of the journal "Theoretical Computer Science" is to publish
papers in the fast-evolving field of theoretical computer science.
The volume of research on theoretical aspects of computer science
has increased enormously in recent years. The classical theories of
automata and formal languages still offer problems and results,
while considerable attention is now being given to newer areas, such
as the formal semantics of programming languages and the study of algorithms
and their complexity. Behind all this lie the major problems of under-
standing the nature of computation and its relation to computing
methodology. While "Theoretical Computer Science" remains mathematical
and abstract in spirit, it derives its motivation from the problems of
practical computation. The editors intend that the domain covered
by "Theoretical Computer Science" will increase and evolve with the
growth of the science itself. The editor-in-chief of "Theoretical
Computer Science" is:
Prof. M. Nivat
162, Boulevard Malesherbes
75017 Paris, France.
ADDITIONAL INFORMATION
Please contact the Secretary of EATCS:
Prof.dr. Th. Ottmann
Institut fur Angewandte Informatik und Formale
Beschreibungsverfahren
Universitat Karlsruhe
Postfach 6380
D-7500 Karlsruhe 1
West Germany
DUES
The dues are US $ 10.- for a period of one year. If the initial
membership payment is received in the period December 21 - April 20,
April 21 - August 20, August 21 - December 20, then the first
membership year will start on June 1, October 1, February 1,
respectively. Every continuation payment continues the membership
for the same time period.
An additional fee is required for ensuring the air mail delivery
of the EATCS Bulletin outside Europe. The amounts are $ 7.- for USA,
Canada, Israel, $ 10.- for Japan and $ 12.- for Australia per year.
For information on additional fees for other destinations, contact either
the Secretary or the Treasurer.
HOW TO JOIN EATCS
To join send the annual dues, or a multiple thereof (to cover a
number of years), to the Treasurer of EATCS:
Prof.dr. J. Paredaens
University of Antwerp, U.I.A.
Department of Mathematics
Universiteitsplein 1
B-2610 Wilrijk, Belgium
The dues can be paid (in order of preference) by US $ bank cheques,
other currency bank cheques, US $ cash, or other currency cash. They cannot
be paid by International Post Money Order. When submitting payment,
please make sure to indicate complete name and address. For this purpose
you may want to use the form below. You may also pay the membership fee
via the following account:
General Bank Antwerp
Antwerp, Belgium
Account number: 220-0596350-30
If a transfer is in US $ then the annual membership payment equals
US $ 10.-. If a transfer (covering the membership for any number of years
and/or additional air mail delivery for any number of years) is in a
currency other than US $, then an additional US $ 2.- for the transfer must
be paid (the difference is used to cover the bank charges). Please remember
to indicate your address clearly (since the Bulletin is sent to the address
you give).
------------------------------
End of AIList Digest
********************
∂15-Jan-86 1525 LAWS@SRI-AI.ARPA AIList Digest V4 #7
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Jan 86 15:25:04 PST
Date: Wed 15 Jan 1986 10:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #7
To: AIList@SRI-AI
AIList Digest Wednesday, 15 Jan 1986 Volume 4 : Issue 7
Today's Topics:
Seminars - Reasoning About Hard Objects (BBN) &
LOGIN: A Logic Programming Language with Inheritance (MIT) &
Temporal Reasoning and Default Logics (SU) &
LISP/Prolog Memory Performance (Ames)
----------------------------------------------------------------------
Date: 9 Dec 1985 12:10-EST
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Reasoning About Hard Objects (BBN)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
BBN Laboratories
Science Development Program
AI Seminars
Speaker: Ernest Davis
NYU
Title: Issues in Reasoning about Hard Objects
Date: Monday, December 16th, 10:30a.m.
Place: BBN Labs, 10 Moulton Street, 3rd floor large conference room
Abstract
The physics of rigid solid objects raises two serious problems which have not
been addressed in previous spatial and physical reasoning programs. Firstly,
the physical properties of solid objects are sensitive to very slight
variations in shapes. Therefore, when an ideal shape is used to
approximate a real shape, the accuracy of the approximation must be
tightly bounded. Secondly, the method of reasoning used by both Forbus
and DeKleer of going from one critical point to the next is not, in
general, appropriate. Frequently, as in reasoning about a ball going
down a funnel, one is interested only in the final outcome (the ball
goes out the funnel) and not in any of the intermediate critical points
(collisions between the ball and the funnel). However, it is difficult
to state axioms that assert global relationships of this sort in a way
that allows them to be used in cases where additional objects enter the
picture.
------------------------------
Date: Thu 9 Jan 86 13:37:32-EST
From: Susan Hardy <SH@XX.LCS.MIT.EDU>
Subject: Seminar - LOGIN: A Logic Programming Language with Inheritance (MIT)
[Forwarded from the MIT bboard by SASW@MC.LCS.MIT.EDU.]
DATE: Thursday, January 16, 1986
TIME: 3:00 p.m. - Refreshments
3:15 p.m. - Lecture
PLACE: NE43-512A
LOGIN:
A LOGIC PROGRAMMING LANGUAGE
WITH BUILT-IN INHERITANCE
Hassan Ait-Kaci
A.I. Program
MCC, Austin, Texas
Since the early days of research in Automated Deduction, inheritance
has been proposed as a means to capture a special kind of information;
viz., taxonomic information. For example, when it is asserted that
"whales are mammals", we understand that whatever properties mammals
possess should also hold for whales. Naturally, this meaning of
inheritance can be well captured in logic by the semantics of logical
implication. However, this is not operationally satisfactory.
Indeed, in a first-order logic deduction system realizing inheritance
as implication, inheritance from "mammal" to "whale" is achieved by an
inference step. But this special kind of information somehow does not
seem to be meant as a deduction step---thus lengthening proofs.
Rather, its purpose seems to be to accelerate, or focus, a deduction
process---thus shortening proofs.
In this talk, I shall argue that the syntax and operational
interpretation of first-order terms can be extended to accommodate
taxonomic ordering relations between constructor symbols. As a
result, I shall propose a simple and efficient paradigm of unification
which allows the separation of (multiple) inheritance from the logical
inference machinery of Prolog. This yields more efficient
computations and enhanced language expressiveness. The language thus
obtained, called LOGIN, subsumes Prolog, in the sense that
conventional Prolog programs are equally well executed by LOGIN.
I shall start with motivational examples, introducing the flavor of
what I believe to be a more expressive and efficient way of using
taxonomic information, as opposed to straight Prolog. Then, I shall
give a quick formal summary of how first-order terms may be extended
to embody taxonomic information as record-like type structures,
together with an efficient type unification algorithm. This will lead
to a technical proposal for integrating this notion of terms into the
SLD-resolution mechanism of Prolog. With examples, I shall illustrate
a LOGIN interpreter.
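The key operational idea, unification that consults a taxonomic ordering
instead of failing on two distinct constructor symbols, can be sketched
roughly in Python. This is an illustrative toy over a tree-shaped taxonomy
with invented sorts, not LOGIN's actual type-unification algorithm:

```python
class Taxonomy:
    """A toy sort hierarchy: each sort maps to its direct supersorts."""

    def __init__(self, parents):
        self.parents = parents  # e.g. {"whale": {"mammal"}}

    def ancestors(self, s):
        """All sorts reachable upward from s, including s itself."""
        seen, stack = {s}, [s]
        while stack:
            for p in self.parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    def meet(self, a, b):
        """Unify two sorts: in this tree-shaped toy, succeed iff one is
        a subsort of the other, returning the more specific sort."""
        if b in self.ancestors(a):
            return a
        if a in self.ancestors(b):
            return b
        return None  # incompatible sorts: unification fails

# Invented example taxonomy.
tax = Taxonomy({"whale": {"mammal"}, "mammal": {"animal"},
                "fish": {"animal"}})
```

Unifying the sorts "whale" and "mammal" succeeds in a single unification
step, yielding the more specific sort "whale" rather than requiring a
separate inference step, while "whale" and "fish" fail outright.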
Host: Rishiyur Nikhil
(617)253-0237
Nikhil@mit-xx.arpa
------------------------------
Date: 13 Jan 86 1659 PST
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Temporal Reasoning and Default Logics (SU)
Next nonmonotonic reasoning meeting:
A Review and Critique of:
"Temporal Reasoning and Default Logics"
by Steve Hanks and Drew McDermott
Yale/CSD/RR #430
October 1985
by Benjamin Grosof, inquisitioner
Thursday, January 16, 4pm
MJH 252
Hanks and McDermott in their recent Yale Tech Report pose an example
problem in temporal reasoning and claim that none of the leading
formalisms for default reasoning (namely Reiter's Default Logic,
McDermott and Doyle's modal Non-Monotonic Logic, and Circumscription)
adequately captures the type of non-monotonic reasoning that is (what
they claim is) desirable in the example. They give an algorithm which
does. They go on to conclude rather pessimistically that there seems
to be some inherent problem in the semantics of all three default
formalisms.
In this talk, I review their paper, including their temporal logic. I
argue that their example in particular is interesting and suggestive,
but that the semantical difficulty that they emphasize arises from an
underspecification of the problem. I will go on to suggest how indeed
to represent the additional CRITERION satisfied by their algorithm
(but not by their formulations in default formalisms). I show how
Vladimir's new circumscription presented in our fall sessions of the
non-monotonic reasoning seminar can solve the representational problem
they pose. I argue that circumscription, because it can incorporate
certain kinds of preferences among competing extensions via
prioritization, has an advantage over the other two default
formalisms, and promises to be able to represent the CRITERION more
generally than their algorithm does. I also discuss how their
temporal formalism occupies an intermediate place between STRIPS and
situation calculus.
------------------------------
Date: Tue, 14 Jan 86 21:47:09 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - LISP/Prolog Memory Performance (Ames)
National Aeronautics and Space Administration
Ames Research Center
SEMINAR ANNOUNCEMENT
Joint Ames AI Forum/RCR Branch
SPEAKER: Evan Tick
Computer Systems Laboratory
Stanford University
TOPIC: Memory Performance of Lisp and Prolog Programs
ABSTRACT: This talk presents a comparison between Lisp and Prolog
architectures based on memory performance. A subset of the Gabriel
benchmarks was translated into Prolog, compiled into the Warren Abstract
Machine instruction set and emulated. The programs were also measured with
an instrumented Common Lisp targeted to a Series 9000/HP237. Memory usage
statistics indicate how the two languages perform fundamental computations
in different ways with varying efficiency.
DATE: 28 January 1986 TIME: 1030 AM BLDG: 172 ROOM: 233
Tuesday
POINT OF CONTACT: E. Miya PHONE NUMBER: (415)-694-6453
emiya@ames-vmsb
I am currently attending a conference; please send mail or contact my office
mate.
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. See map
below. Do not use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
End of AIList Digest
********************
∂15-Jan-86 1819 LAWS@SRI-AI.ARPA AIList Digest V4 #8
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Jan 86 18:18:54 PST
Date: Wed 15 Jan 1986 10:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #8
To: AIList@SRI-AI
AIList Digest Wednesday, 15 Jan 1986 Volume 4 : Issue 8
Today's Topics:
Queries - Macsyma & Symbolics Prolog & Speech Learning Machine,
Definition - Paradigm,
Intelligence - Computer IQ Tests,
AI Tools & Applications - Expert Systems and Computer Graphics &
Common Lisp for Xerox & Real-Time Process Control
----------------------------------------------------------------------
Date: 13 Jan 86 23:16 GMT
From: dkb-amos @ HAWAII-EMH.ARPA
Subject: Macsyma
I would appreciate any help that could be supplied in locating a
source for Macsyma.
I'm looking for a version that will run under Franzlisp Opus 38.91.
We do contract work for the Air Force, but I have no immediate
contract application for this package; I would just like to get
familiar with it and have it around for possible future applications.
Thanks.
-- Dennis Biringer
------------------------------
Date: Mon 13 Jan 86 16:07:10-PST
From: Luis Jenkins <lej@SRI-KL>
Subject: Symbolics Prolog
[Sorry if this topic has been beaten to death many times before ...]
Here at Schlumberger Palo Alto Research (SPAR) we have been working
for some time on large Prolog programs for Hardware Verification,
first in Dec-20 Prolog and then in Quintus Prolog for Suns.
Recently we have been interested in the possibility of using Symbolics
Prolog for further R&D work, as the lab has a bunch of LispMs.
Does anyone out there have first-hand (or n-hand, please specify)
experience with the Prolog that Symbolics offers? Specifically, we
want to hear praises/complaints about :-
o DEC-10/Quintus Compatibility
o Speed
o Bugs
o Extensions
o Interface with the LispM environment
o Mixing Prolog & Lisp code
o Random User Comments
Thanks,
Luis Jenkins
Schlumberger Palo Alto Research
lej@sri-kl
...decwrl!spar!lej
------------------------------
Date: 13 Jan 86 11:22:01 EST
From: kyle.wbst@Xerox.ARPA
Subject: Johns Hopkins Learning Machine
Does anyone have any more info on the following:
I caught the tail end of a news item on the NBC Today Show this morning
about someone at Johns Hopkins who has built a "Networked" computer
consisting of 300 "elements" that has a speech synthesizer attached to
it. The investigator claims that the thing learns to speak English the
same way a human baby does. They played a tape recording which
represented a condensation of several hours of "learning" by the device.
The investigator claims he does not know how the thing works. I
didn't catch his name.
Who is this person, and what is the system configuration of the machine
(which seemed to fit into one large rack of equipment)?
Earle Kyle
------------------------------
Date: Tue, 14 Jan 86 09:34:53 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: Today Show Segment
A friend of mine saw the Today Show this Monday morning,
and said there was a particularly breathless segment that
left the impression that somebody has solved `the AI
problem'. It seems to have been a rather vague story
about someone at Johns Hopkins who has built some sort of
massively parallel machine that learns language.
Sorry the details are so sketchy. Did anybody else
see this segment or know the story behind the story?
------------------------------
Date: 14 Jan 86 22:05:47 EST
From: Mike Tanner @ Ohio State <TANNER@RED.RUTGERS.EDU>
Subject: Paradigm
I've seen some discussion of paradigm in recent AILists and since I
just audited a grad course in philosophy of science where we read Kuhn
I thought I'd summarize what I remember of Kuhn's notion of paradigm.
(Auditing a course certainly does not make me an expert, but it does
mean that I've read Kuhn recently and carefully.)
Several people have pointed out that the dictionary definition (e.g.,
Webster's 3rd New International) of `paradigm' is `example',
`pattern', or `model'. But they further claim that this is not what
Kuhn meant. However, I think that the way `paradigm' is used by Kuhn
is (most of the time) perfectly compatible with the dictionary.
In ←The←Structure←of←Scientific←Revolutions← Kuhn normally uses
`paradigm' to mean `example of theory applied' or `example of how to
do science'. (Sometimes he uses it to mean `theory', which is
confusing and I think he later admits that it is just sloppiness on
his part.) Ron Laymon, our prof in the philosophy of science course,
suggested that it might be best to think of paradigm as `an
uninterpreted book'. Everybody working in some field points to a book
when asked what they do and says, "There, read that book and you'll
know." Of course, once the book is opened there's likely to be a lot
of disagreement about what it means.
Another important characteristic of paradigms is that they suggest a
lot of further research. If I were a cynical person I would say that
the success of a paradigm depends on people's perceptions of funding
prospects for research of the sort that it defines.
I'm not sure that AI is mature enough to rate any paradigms. But I
think that a case could be made for some things as "mini-paradigms",
such as GPS, MYCIN, Minsky's frame paper, etc. That is, they defined
some sub-discipline within AI where a lot of people did, and are
doing, fruitful work. (I don't mean "mini" to be pejorative. I just
think that a paradigm has to be a candidate for unifying research in
the field, or maybe even defining the field, and these probably don't
qualify. But then, I might be expecting too much of paradigms.)
-- mike
ARPA: tanner@Rutgers
CSNet: tanner@Ohio-State
Physically, I am at Ohio State but I have a virtual existence at
Rutgers and can receive mail either place.
------------------------------
Date: Wed 15 Jan 86 09:42:58-CST
From: David Throop <AI.THROOP@R20.UTEXAS.EDU>
Subject: Computers & IQ Tests
There have been recent inquiries about how well computer programs can do on
IQ tests.
An article in the journal ←Telicom← (1) mentions a computer program for
taking IQ tests. It seems to be aimed entirely at the kinds of math
puzzles that fill in missing numbers in series.
"The program (is) called HIQ-SOLVER 160 ... BASIC, less than 10 Kbytes...
in July/August Dutch computer magazine ←Sinclair←Gebruiker← has the
listing... The program has been tried on the numerical test in Hans
Eysenck's ←Check←Your←Own←IQ← and it solved 36 out of 50 problems,
corresponding with an IQ of about 160 (hence its name); as some items in
the Eysenck test were of a type that had not been implemented one might
argue that the program's raw score corresponds with an even higher IQ ..."
He goes on to give the algorithm.
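The listing itself is not reproduced here, but a common trick behind such
series-completion items is to take successive differences until they become
constant and then extend the table back upward. A minimal Python sketch of
that idea (an assumption about the general approach, not the HIQ-SOLVER
listing):

```python
def next_in_series(seq):
    """Guess the next term of a number series by taking successive
    differences until a row becomes constant, then extending each row
    back up by adding that constant difference. A sketch of one common
    technique, not the published HIQ-SOLVER algorithm."""
    rows = [list(seq)]
    # Difference repeatedly; each row is one shorter, so this stops.
    while len(rows[-1]) > 1 and len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # Extend upward: append one new term to each row, bottom to top.
    for i in range(len(rows) - 2, -1, -1):
        rows[i].append(rows[i][-1] + rows[i + 1][-1])
    return rows[0][-1]
```

For the series 1, 4, 9, 16, 25 the differences are 3, 5, 7, 9 and then the
constant row 2, 2, 2, so the sketch predicts 36. On series whose differences
never settle (geometric ones, say) it still terminates but simply guesses
wrong, which is exactly the kind of brittleness discussed below.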
I think this example highlights an example of the difficulty of applying
human IQ tests to machines - the program scores very high on certain IQ
tests because it does a very limited kind of pattern recognition very well.
But it is completely brittle - it's helpless to recognize patterns that are
only slightly off what it expects.
Human intelligence tests do not measure human intelligence directly.
They measure characteristics associated with intelligence. The underlying
assumption is that this association is good enough that it will predict how
well humans will do on tasks that cannot be given as standard tests, but
evince intelligence.
This is a dubious proposition for humans, but it breaks down completely
on machines. Nonetheless, it shouldn't be too hard to CONS up some
programs that do terribly well on some not too terribly well designed IQ
tests.
(1) Feenstra, Marcel, "Numerical IQ - Tests and Intelligence", Telicom,
Aug 85, Bx 141 San Francisco 94101
------------------------------
Date: Sun 12 Jan 86 18:05:49-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems and Computer Graphics
IEEE Computer Graphics and Applications, December 1985, pp. 58-59,
has a review by Ware Myers of the 6th Eurographics conference.
The key theme was integrating expert systems and computer graphics.
Several of the papers discussed binding Prolog and the GKS
graphical kernel standard.
------------------------------
Date: Sun 12 Jan 86 17:28:34-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Common Lisp for Xerox
Expert Systems, Vol. 2, No. 4, October 1985, p. 252, reports that
Xerox will be implementing Common Lisp on its Lisp workstations.
The first copies may be available in the second quarter of 1986.
Xerox will continue to support Interlisp-D, and will be adding
extensions and compatible features to both languages. A package
for converting Interlisp-D programs to Common Lisp is being
developed.
Guy Steele said (Common Lisp, p. 3) that it is expected that user-
level packages such as InterLisp would be built on top of the Common
Lisp core. Perhaps that is now happening. Xerox is also offering
CommonLoops as a proposed standard for object-oriented programming.
------------------------------
Date: Sun 12 Jan 86 18:00:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Real-Time Process Control
IEEE Spectrum, January 1986, p. 64, reports the following:
The building of engineering expertise into single-loop controllers
is beginning to bear fruit in the form of a self-tuning process
controller. The Foxboro Co. in Foxboro, Mass., included self-tuning
features in its Model 760 single-loop controller as well as in
three other controller-based products. Common PID (proportional,
integral, and derivative) controllers made by Foxboro now have a
built-in microprocessor with some 200 production rules; the loop-tuning
rules have evolved over the last 40 years both at Foxboro and
elsewhere. The Foxboro self-tuning method is a pattern recognition
approach that allows the user to specify desirable temporal
response to disturbances in the controlled parameter or in the
controlled set point. The controller then observes the actual
shape of these disturbances and adjusts its PID values to restore
the desirable response.
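For context, a plain (non-self-tuning) PID update, the computation that
those loop-tuning rules adjust, can be sketched as follows. The class name,
gains, and time step in this Python fragment are illustrative, not
Foxboro's:

```python
class PID:
    """Textbook PID (proportional-integral-derivative) controller.
    The gains are fixed here; a self-tuning controller would watch the
    response shape and adjust them. Illustrative sketch only."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the control output for one sampling interval."""
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulated error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
```

Driving a simple first-order plant with this loop brings the measurement to
the set point; the self-tuning controllers described above additionally
observe the temporal response to disturbances and adjust kp, ki, and kd.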
Asea also makes a self-tuning controller, Novatune, but the current
version requires substantial knowledge of stochastic control theory
to install.
Lisp Machine Inc. has now installed PICON, its expert system for
real-time process control, at about a half-dozen sites. It has also
announced support for GM's MAP communication protocol for factory
automation.
------------------------------
End of AIList Digest
********************
∂20-Jan-86 1619 LAWS@SRI-AI.ARPA AIList Digest V4 #10
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Jan 86 16:18:56 PST
Date: Mon 20 Jan 1986 13:36-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #10
To: AIList@SRI-AI
AIList Digest Monday, 20 Jan 1986 Volume 4 : Issue 10
Today's Topics:
Machine Learning - Connectionist Speech Machine
----------------------------------------------------------------------
Date: Wed, 15 Jan 86 23:06 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: nettalk
Several people inquired about the work of Terrence Sejnowski (of Johns
Hopkins) which was reported on the Today show recently. This abstract is to
a talk given by Sejnowski here at Penn in October '85:
NETTALK: TEACHING A MASSIVELY-PARALLEL NETWORK TO TALK
TERRENCE J. SEJNOWSKI
BIOPHYSICS DEPARTMENT
JOHNS HOPKINS UNIVERSITY
BALTIMORE, MARYLAND
Text to speech is a difficult problem for rule-based systems because English
pronunciation is highly context dependent and there are many exceptions to
phonological rules. A more suitable knowledge representation for
correspondences between letters and phonemes will be described in which rules
and exceptions are treated uniformly and can be determined with a learning
algorithm. The architecture is a layered network of several hundred simple
processing units with several thousand weights on the connections between the
units. The training corpus is continuous informal speech transcribed from tape
recordings. Following training on 1000 words from this corpus the network can
generalize to novel text. Even though this network was not designed to mimic
human learning, the development of the network in some respects resembles the
early stages in human language acquisition. It is conjectured that the
parallel architecture and learning algorithm will also be effective on other
problems which depend on evidential reasoning from previous experience.
(No - I don't have his net address. Tim.)
------------------------------
Date: 16 Jan 86 1225 PST
From: Richard Vistnes <RV@SU-AI.ARPA>
Subject: Johns Hopkins learning machine: info
See AIList Digest V3 #183 (10 Dec 1985) for a talk given at Stanford
a little while ago that sounds very similar. The person is:
Terrence J. Sejnowski
Biophysics Department
Johns Hopkins University
Baltimore, MD 21218
(I didn't attend the talk).
-Richard Vistnes
------------------------------
Date: Sun, 19 Jan 86 0:19:10 EST
From: Terry Sejnowski <terry@hopkins-eecs-bravo.ARPA>
Subject: Reply to Inquiries
NBC ran a short segment last Monday, January 13, on the
Today Show about my research on a connectionist model of text-to-speech.
The segment was meant for a general audience (waking up)
and all the details were left out, so here is an abstract for
those who have asked for more information. A technical report is
available (Johns Hopkins Electrical Engineering and Computer Science
Technical Report EECS-8601) upon request.
NETtalk: A Parallel Network that Learns to Read Aloud
Terrence Sejnowski
Department of Biophysics
Johns Hopkins University
Baltimore, MD 21218
Charles R. Rosenberg
Department of Psychology
Princeton University
Princeton, NJ 08540
Unrestricted English text can be converted to speech by applying
phonological rules and handling exceptions with a look-up table.
However, this approach is highly labor intensive since each entry
and rule must be hand-crafted. NETtalk is an alternative approach
that is based on an automated learning procedure for a parallel
network of deterministic processing units. After training on a
corpus of informal continuous speech, it achieves good performance
and generalizes to novel words. The distributed representations
discovered by the network are damage resistant and recovery from
damage is about ten times faster than the original learning
starting from the same level of performance.
Terry Sejnowski
------------------------------
Date: Thu, 16 Jan 86 12:53 EST
From: Mark Beutnagel <Beutnagel%upenn.csnet@CSNET-RELAY.ARPA>
Subject: speech learning machine
The speech learning machine referred to in a recent AIList is almost
certainly a connection machine built by Terry Sejnowski. The system
consists of maybe 200 processing elements (or simulations of such)
and weighted connections between them. Input is a small window of
text (5 letters?) and output is phonemes. The system learns (i.e.
modifies weights) based on a comparison of the predicted phoneme with
the "correct" phoneme. After running overnight the output was
recognizable speech--good but still slightly mechanical. Neat stuff
but nothing mystical.
-- Mark Beutnagel (The above is my recollection of Terry's talk here
at UPenn last fall so don't quote me.)
------------------------------
Date: Sun 19 Jan 86 12:31:31-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Speech Learning
I'll have a try at summarizing Terry's talk at Stanford/CSLI:
The speech learning machine is a three-layer "perceptron-like"
network. The bottom layer of 189 "processing units" simply encodes a
7-character window of input text: each character (or space) activates
one of 27 output lines and suppresses 26 other lines.
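That bottom-layer encoding can be sketched directly in Python; the
particular 27-symbol alphabet (letters plus space) and the function name
below are assumptions for illustration:

```python
import string

# 27 symbols: the 26 lowercase letters plus space (an assumed alphabet).
ALPHABET = string.ascii_lowercase + " "

def encode_window(window):
    """One-hot encode a 7-character text window as 7 * 27 = 189 input
    lines: each character activates exactly one of its 27 lines and
    leaves the other 26 at zero. A sketch of the encoding described
    above, not the original implementation."""
    assert len(window) == 7
    units = []
    for ch in window:
        line = [0] * len(ALPHABET)
        line[ALPHABET.index(ch)] = 1  # activate one of 27 lines
        units.extend(line)
    return units
```

A 7-character window thus activates exactly 7 of the 189 input lines.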
The top, or output, layer represents a "coarse coding" of the phoneme
(or silence) which should be output for the character at the center
of the 7-character window. Each bit, or output line, of the top layer
represents some phoneme characteristic: vowel/consonant, voiced,
fricative, etc. Each legal phoneme is thus represented by a particular
output pattern, but some output patterns might not correspond to legal
phonemes. (I think they were mapped to silence in the recording.)
The output was used for two purposes: to compute a feedback error signal
used in training the machine, and to feed the output stage of a DecTalk
speech synthesizer so that the output could be judged subjectively.
The heart of the system is a "hidden layer" of about 200 processing
units, together with several thousand interconnections and their weights.
These connect the 189 first-level outputs to the small number of output
processing units. It is the setting of the weight coefficients for this
network that is the central problem.
Input to the system was a page of a child's speech that had been transcribed
in phonetic notation by a professional. Correspondence had been established
between each input letter and the corresponding phoneme (or silence), and
the coarse coding of the phonemes was known. For any possible output of the
machine it was thus possible to determine which bits were correct and which
were incorrect. This provided the error signal.
Unlike the Boltzmann Machine or the Hopfield networks, Sejnowski's algorithm
does not require symmetric excitatory/inhibitory connections between the
processing units -- the output computation is strictly feed-forward.
Neither did this project require simulated annealing, although some form
of stochastic training or of "inverse training" on wrong inputs might be
helpful in avoiding local minima in the weight space.
What makes this algorithm work, and what makes it different from multilayer
perceptrons, is that the processing nodes do not perform a threshold
binarization. Instead, the output of each unit is a sigmoid function of
the weighted sum of its inputs. The sigmoid function, an inverse
exponential, is essentially the same one used in the Boltzmann Machine's
stochastic annealing; it also resembles the response curve of neurons.
Its advantage over a threshold function is that it is differentiable.
This permits the error signal to be propagated back through each
processing unit so that appropriate "blame" can be attributed to each
of the hidden units and to each of the connections feeding the hidden
units. The back-propagated error signals are exactly the partial
derivatives needed for steepest-descent optimization of the network.
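A minimal sketch of one such back-propagation step, for a toy network with
two inputs, two hidden units, and one output (all sizes, initial weights,
and the learning rate are invented for the example; this shows the general
technique, not Sejnowski's code):

```python
import math

def sigmoid(x):
    """The differentiable squashing function used in place of a
    threshold; its derivative at output s is s * (1 - s)."""
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(w_hid, w_out, x, target, lr=0.5):
    """One gradient step for a tiny 2-2-1 network (no bias terms).
    Returns updated weights and the output computed this step."""
    # Feed-forward pass through hidden and output layers.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hid]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    # Output-unit "blame": error times the sigmoid derivative.
    delta_y = (y - target) * y * (1 - y)
    # Hidden-unit blame, propagated back through the old output weights.
    delta_h = [delta_y * w_out[j] * h[j] * (1 - h[j])
               for j in range(len(h))]
    # Steepest-descent weight updates.
    w_out = [w - lr * delta_y * h[j] for j, w in enumerate(w_out)]
    w_hid = [[w - lr * delta_h[j] * x[i] for i, w in enumerate(ws)]
             for j, ws in enumerate(w_hid)]
    return w_hid, w_out, y

w_hid = [[0.1, 0.2], [0.3, -0.1]]  # invented initial weights
w_out = [0.2, -0.3]
```

Repeating backprop_step on a fixed input/target pair drives the output
toward the target; the h[j] * (1 - h[j]) factors are the sigmoid
derivatives that make this assignment of blame to the hidden units
possible, exactly the property a hard threshold lacks.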
Subjective results: The output of the system for the page of text was
originally just a few random phonemes with no information content. After
sufficient training on the correct outputs the machine learned to "babble"
with alternating vowels or vowel/consonants. After further training it
discovered word divisions and then began to be intelligible. It could
eventually read the page quite well, with a distinctly childish accent
but with mechanical pacing of the phonemes. It was then presented with
a second page of text and was able to read that quite well also.
I have seen some papers by Sejnowski, Kienker, Hinton, Schumacher,
Rumelhart, and Williams exploring variations of this machine learning
architecture. Most of the work has concerned very simple, but
difficult, problems, such as learning to compute exclusive OR or the
sum of two two-bit numbers. More complex tasks involved detecting
symmetries in binary matrices and computing figure/ground (or
segmentation) relationships in noisy images with an associated focus
of attention. I find the work promising and even exciting.
-- Ken Laws
------------------------------
End of AIList Digest
********************
∂20-Jan-86 1828 LAWS@SRI-AI.ARPA AIList Digest V4 #9
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Jan 86 18:28:33 PST
Date: Mon 20 Jan 1986 13:26-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #9
To: AIList@SRI-AI
AIList Digest Monday, 20 Jan 1986 Volume 4 : Issue 9
Today's Topics:
Queries - System V Franz & OPS5 & Address for Prof. Bouille &
Knowledge-Engineering Software & Supercomputers and AI &
AI and Process Control & What is a Symbol?
----------------------------------------------------------------------
Date: Wed, 15 Jan 1986 18:50 PLT
From: George Cross <FACCROSS%WSUVM1.BITNET@WISCVM.WISC.EDU>
Subject: System V Franz?
Does anyone sell or distribute a version of FranzLisp that runs under
Unix System V on a VAX? or another machine?
---- George
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
George R. Cross cross@wsu.CSNET
Computer Science Department cross%wsu@csnet-relay.ARPA
Washington State University faccross@wsuvm1.BITNET
Pullman, WA 99164-1210 (509)-335-6319/6636
Acknowledge-To: George Cross <FACCROSS@WSUVM1>
------------------------------
Date: Thu 16 Jan 86 10:42:00-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: OPS5 query
I'd like to try a version of OPS5 on an IBM-PC for exploration (not
necessarily system delivery) and would like some opinions of the various
flavors I've seen advertised. A few I've noticed are TOPSI and OPS83.
Any thoughts on price, speed, portability, etc. would be welcome. I can
digest the responses and post them back to the list.
Thanx muchly.
--ted
------------------------------
Date: Fri, 17 Jan 86 11:06 IST
From: Amir Toister <J65%TAUNIVM.BITNET@WISCVM.WISC.EDU>
Subject: help
CAN ANYONE HELP ME LOCATE:
PROF. F. BOUILLE
LABORATOIRE D'INFORMATIQUE
DES SCIENCE DE LA TERRE,
UNIV. PIERRE ET MARIE CURIE.
PARIS
------------------------------
Date: Fri, 17 Jan 86 15:04:24 est
From: Tom Scott <scott%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: Two questions on knowledge-engineering software
1. Rick Dukes from Symbolics recently gave an interesting talk on
AI/KE to the Northwest Ohio chapter of the ACM. He mentioned an
expert-system-building tool, MRS, from Stanford. I ran across another
reference to MRS in the Winter 1986 issue of "AI Magazine" (p. 107).
Can anyone tell me about the system? What does it do? What
representation and search techniques are available through it? Can it
handle frames? Semantic networks? Certainty factors? How does it
work as an expert-system development environment?
Most importantly, how does a university acquire MRS? I think
Rick told us that it was available to universities essentially for
free. If that is true, then where can we send a tape?
2. Several good works have been published on Prolog, e.g., Clocksin &
Mellish's "Programming in Prolog" and Lloyd's "Foundations of Logic
Programming". It appears, however, that there is no book yet on
"advanced" AI/KE programming techniques in Prolog. The Clocksin &
Mellish text is good as an introduction, the Lloyd book as a
theoretical discussion of logical foundations. A number of us would
like to see a Prolog book that covers topics similar in scope to part
II of Charniak, Riesbeck, and McDermott's "Artificial Intelligence
Programming". Charniak et al. use Lisp; who does the same with
Prolog?
One hope along these lines is an MIT Press book, "The Art of
Prolog" by Sterling and Shapiro. I first saw a reference to it in an
advertisement on p. A-22 of "Communications of the ACM" (January
1986). Has the book been published yet or is it not supposed to come
out until May? Does anyone know about it? What does it cover?
------------------------------
Date: Mon, 20 Jan 86 09:47:20 cet
From: JOHND%IDUI1.BITNET@WISCVM.WISC.EDU
Subject: Supercomputers and AI
I would like to know if anyone has any references to AI projects
being done on supercomputers. We have a class here on
supercomputers that will be using a Cray XMP/24, an Intel
Hypercube, and perhaps an MPP. I am interested in having a
student do an AI related project, and I'd like it to relate to
some current work. I am also interested in how much AI
software (languages and systems) has been ported to
these supercomputers. All references will be most appreciated.
John Dickinson
Univ. of Idaho
JOHND%IDUI1 (on BITNET)
------------------------------
Date: Mon, 20 Jan 86 9:21:58 MET
From: mcvax!delphi.UUCP!mdc@seismo.CSS.GOV
Subject: AI and process control
I am involved in an AI factory automation project.
Can you give me any reference or material on this subject?
Thanks
Maurizio De Cecco
DELPHI S.p.A.
Via Della Vetraia, 11
55049 Viareggio
Italy
[Two magazine articles are Expert Systems, Vol. 1, No. 1, July 1984, and
High Technology, May 1985. The first is a description of the CMU ISIS
scheduling system, the latter a report on factory automation. -- KIL]
------------------------------
Date: 19 Jan 86 17:12:15 EST
From: David.Plaut@K.CS.CMU.EDU
Subject: What is a symbol?
This is a request for help....
The idea of a symbol is found throughout AI and Cognitive Science, and seems
to bear considerable theoretical weight. Newell and Simon's Physical Symbol
System Hypothesis, that a machine that carries out processes operating on
symbol structures has the necessary and sufficient means for general
intelligent action, seems to be an expression of the underlying assumptions
of the majority of work in AI.
Yet it seems that no satisfactory definition/description (necessary and
sufficient characteristics) of what is meant by a symbol (sorry about the
pun) has ever been presented. The following rough description seems to be a
standard attempt:
A symbol is a formal entity whose internal structure
places no restrictions on what it may represent in the
domain of interest.
Unfortunately, when combined with the Physical Symbol System Hypothesis,
this notion of symbol creates a problem with regard to so-called
"connectionist" systems.
It is possible to design a connectionist system that exhibits, if not
"general intelligent action", certainly "knowledge-level" behavior, without
any processes operating on symbol structures. The formal, computational
processes of the system are operating below the symbol level, in terms of
the interaction of units representing non-symbolic "micro-features". A
symbol level description of the system only applies to emergent patterns of
micro-features. Unfortunately these patterns fail to qualify as symbols by
the above account, because it is precisely their internal
structure which determines what they represent. Thus we are left with a
system capable of knowledge-level behavior apparently without symbols.
It seems there are three ways out of this dilemma:
(1) deny that connectionist systems are capable, in
principle, of "true" general intelligent action;
(2) reject the Physical Symbol System Hypothesis; or
(3) refine our notion of a symbol to encompass the operation
and behavior of connectionist systems.
(1) seems difficult (but I suppose not impossible) to argue for, and since I
don't think AI is quite ready to agree to (2), I'm hoping for help with
(3). Any suggestions?
David Plaut
(dcp@k.cs.cmu.edu)
------------------------------
End of AIList Digest
********************
∂22-Jan-86 1323 LAWS@SRI-AI.ARPA AIList Digest V4 #11
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Jan 86 13:23:38 PST
Date: Wed 22 Jan 1986 10:07-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #11
To: AIList@SRI-AI
AIList Digest Wednesday, 22 Jan 1986 Volume 4 : Issue 11
Today's Topics:
Query - LISP Language Standard,
Correction - Spang Robinson Report on Reasoning Systems,
AI Tools - AI and Supercomputers & MRS,
Definitions - Paradigm & Symbol,
Expert Systems & AI in the Media - Connectionist Speech Learning &
Arthur Young's System for Financial Auditing
----------------------------------------------------------------------
Date: 21 Jan 86 01:18:00 PST
From: sea.wolfgang@ames-vmsb.ARPA
Reply-to: sea.wolfgang@ames-vmsb.ARPA
Subject: LISP Language Standard
I am currently involved in the definition of some loose LISP
programming standards [loose LISP sink ships]. Has anyone given any
thought to this, particularly as it applies to LISP environments?
Does anyone know of any articles on the topic?
I will be happy to collect responses and send them back out on the
List.
Thank you,
S. Engle, Informatics General Co.
NASA/Ames Research Center MS 242-4
Moffett Field, CA 94035
SEA.WOLFGANG@AMES-VMSB.ARPA
------------------------------
Date: Wed, 15 Jan 86 04:18:25 cst
From: Laurence Leff <leff%smu.csnet@CSNET-RELAY.ARPA>
Subject: Correction
[Joseph Rockmore, vice president of Reasoning Systems, says that the
Spang Robinson report on his company's agreement with Lockheed was
correct, but that the summary in AIList incorrectly identified his
company's work with "USC Kestrel Institute". He points out that
Reasoning Systems is associated with Kestrel, but that neither is
associated with USC-ISI. Laurence Leff has provided the following
additional summary in the course of resolving this matter. Contact
rockmore@kestrel.ARPA for further information. -- KIL]
In my abstracts of the Spang Robinson Report, I reported parenthetically
that Reasoning Systems is commercializing the work of [...] Kestrel Institute.
That parenthetical statement was based on my own analysis of the
situation and was not included in the Spang Robinson report. My apologies
for any confusion created.
It was based on what I perceived to be a similarity between the two
bodies of work, and on the fact that one person had moved from that
organization to Reasoning Systems (as indicated in the authors' address
section of IEEE Transactions on Software Engineering). Also, quoting from
"Software Environments at Kestrel Institute" in the November 1985 issue,
Vol. SE-11, No. 11: "One of the authors (G. B. Kotik) is currently with
Reasoning Systems, a company founded in 1984 in order to apply the body
of basic research in knowledge-based programming to commercial problems.
Reasoning Systems develops special-purpose knowledge-based program
generators and programming environments for various domains." and
later in the same article "Toward these ends, Reasoning Systems has
developed a system called REFINE," "Although REFINE derives its
inspiration from many sources, it utilizes the principles and system
structure laid out in the CHI project."
------------------------------
Date: Tue 21 Jan 86 13:52:27-CST
From: CMP.BARC@R20.UTEXAS.EDU
Subject: AI and Supercomputers
On January 17, UCSD offered a one-day program, called "Capabilities and
Applications of the San Diego Supercomputer Center", in conjunction with
the opening of their new center. One of the talks was "AI and Expert Systems
on Supercomputers" by Dr. Robert Leary, a Senior Staff Scientist at the
San Diego Supercomputer Center. I didn't attend the course but heard that
Leary's talk was preliminary and did not present any significant applica-
tions. Further information can probably be obtained from SDSC on the UCSD
campus or from UCSD Extension. The address of UCSD is La Jolla, CA 92093.
Dallas Webster
CMP.BARC@R20.UTexas.Edu
------------------------------
Date: Tue, 21 Jan 86 09:52:16 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: MRS
Date: Fri, 17 Jan 86 15:04:24 est
From: Tom Scott <scott%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: Two questions on knowledge-engineering software
1. Rick Dukes from Symbolics recently gave an interesting talk on
AI/KE to the Northwest Ohio chapter of the ACM. He mentioned an
expert-system-building tool, MRS, from Stanford.
* * *
Can anyone tell me about the system? What does it do?
What representation and search techniques are available through it?
It's a logic programming system written in Lisp. The principal
underlying inference engine is resolution, but you can also do forward
and backward chaining. The name means `Metalevel Reasoning System'
because you can write meta-level axioms, axioms about the base level
knowledge -- usually these meta axioms are used to guide the
search-based inference procedures. I hear the latest version lets one
write meta-meta-axioms, meta-meta-meta-axioms, etc ("Anything you can
do, I can do Meta," as Brachman says).
For background see "An Overview of Meta-Level Architecture" Genesereth
AAAI-83. Stanford Heuristic Programming Project probably has some
kind of MRS manual; there's also an `MRS Dictionary' but that's really
more of a reference tool.
Can it handle frames? Semantic networks? Certainty factors?
It can `handle' anything you can write in Lisp... but does it provide
any of these facilities itself? No, I don't think so.
How does it work as an expert-system development environment?
Good question. How does Lisp work as an expert-system environment?
For applications to troubleshooting & test generation see Genesereth,
AAAI-82; Yamada, IJCAI-83; Singh's PhD thesis from Stanford (1985);
Genesereth in AI Journal V 24 #1-3 or `Qualitative Reasoning about
Physical Systems', ed. Bobrow. It's NOT a traditional expert-system
environment a la KEE, ART, S1, DUCK, etc.
Most importantly, how does a university acquire MRS?
Jane Hsu (HSU@SCORE) should be able to tell you all about this. I
believe she's in charge of maintenance & distribution. She may refer you
on to Arthur Whitney, but try Jane first.
I think
Rick told us that it was available to universities essentially for
free. If that is true, then where can we send a tape?
For some reason the figure $500 sounds right, but don't quote me.
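[For readers unfamiliar with the style of inference described above,
here is a toy sketch of propositional backward chaining -- illustrative
Python, not MRS code, and all predicates are invented. -- Ed.]

```python
# A toy sketch of backward chaining, the style of inference MRS supports.
# Rules are propositional Horn clauses: (conclusion, [antecedents]).
RULES = [
    ("mortal", ["human"]),
    ("human", ["greek"]),
]
FACTS = {"greek"}

def prove(goal):
    """A goal holds if it is a known fact, or if some rule concludes it
    and every antecedent of that rule can itself be proved."""
    if goal in FACTS:
        return True
    return any(head == goal and all(prove(g) for g in body)
               for head, body in RULES)

print(prove("mortal"))   # -> True
```

In MRS the interesting part is that the strategy driving such a search
can itself be stated as meta-level axioms rather than fixed in code.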
------------------------------
Date: Fri, 17 Jan 86 10:43:33 PST
From: kube%cogsci@BERKELEY.EDU (Paul Kube)
Subject: What's a paradigm?
A classic attempt to figure out just what the devil Kuhn means by
`paradigm' is Margaret Masterman's `The nature of a paradigm' (in
←Criticism and the Growth of Knowledge←, I. Lakatos and A. Musgrave,
eds.). She finds 21 ("possibly more, not less") senses of the term
in the first edition of ←The Structure of Scientific Revolutions←;
take your pick.
------------------------------
Date: Wed, 22 Jan 86 02:16:17 PST
From: kube%cogsci@BERKELEY.EDU (Paul Kube)
Subject: Re: What is a symbol?
>.... Newell and Simon's Physical Symbol
>System Hypothesis, that a machine that carries out processes operating on
>symbol structures has the necessary and sufficient means for general
>intelligent action, seems to be an expression of the underlying assumptions
>of the majority of work in AI.
...
> A symbol is a formal entity whose internal structure
> places no restrictions on what it may represent in the
> domain of interest.
>
>Unfortunately, when combined with the Physical Symbol System Hypothesis,
>this notion of symbol creates a problem with regard to so-called
>"connectionist" systems.
I think at least two concepts, not just one, need some work here: it
would help to have a better idea not only of what symbols are, but
also of what operating on a symbol is.
Under what one might call the Turing conception of `operating on a
symbol'-- a strong, agentive interpretation: symbols are objects that
get manipulated by a processor, e.g. written on and erased from a
tape, or shuffled from location to location--I think that it's
probably true that connectionist systems do not `operate' on symbols
that have interesting external referents. But I doubt that the
majority of workers in AI believe that in this sense `operating on
symbols' is necessary for the production of intelligent action, and so
there is no conflict with connectionism; that construal of the PSSH is
easy enough to give up. (That `operating on symbols' in the Turing
sense be sufficient for the production of intelligent action is,
however, pretty clearly an underlying assumption of work in the field;
but of course this doesn't conflict with connectionism either.)
On the other hand, a weaker interpretation of what operating on
symbols amounts to gives a PSSH that is compatible with connectionism,
not to mention being more likely to be true. Certainly what's
important about symbols for theory construction in AI is that they
have formal properties which determine their interactions with other
symbols without regard to any semantic properties they might have,
while being susceptible of being assigned semantic properties in a way
that is dependent on these interactions. (Anyway I don't think it's
helpful to require of a symbol that its `internal structure places no
restrictions on what it may represent', at least without further
specification of what counts as internal structure. Take an English
word: `symbol', say. What's between the quotes is a symbol, I'd
think, but intuitively its internal structure places pretty strong
restrictions on what it represents: try composing it of six different
letters, for example.) But then they don't need to be objects;
symbols can be states, and the formal properties which determine their
interaction (`operations' on them) can be identified with certain of
their causal properties. Now, one way a system can be in symbolic
states is to operate on symbols in the strong, Turing sense; but this
is only one way. Symbolic states can also be emergent states of a
connectionist system.
Paul Kube
Computer Science Division
U.C. Berkeley
Berkeley, CA 94720
kube@cogsci.berkeley.edu
ucbvax!kube
------------------------------
Date: Mon, 20 Jan 86 16:57:43 mst
From: ted%nmsu.csnet@CSNET-RELAY.ARPA
Subject: today show segment
I think that the work that was mentioned recently in the digest
from the today show (which I didn't see) was the speech synthesis
work which was described earlier on the aidigest (sketchily). I
don't remember the contact (Sejnowski??), but the machine was a
neural analog network that modified its own weights when given a
training corpus of textual English with correct voice synthesizer
outputs. Then, when given more english (it wasn't clear that this
new text had not appeared in the original training corpus) the
machine produced coherent control inputs for the voice
synthesizer.
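[The weight-adjustment scheme described above can be illustrated, very
loosely, with the classic delta (LMS) rule on a single linear unit.
This is a toy stand-in, not the system in question; the data, rate, and
sizes are all invented. -- Ed.]

```python
# A single linear unit trained with the delta (LMS) rule: on each
# training pair, nudge the weights in proportion to the output error.
inputs  = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
targets = [1.0, 0.0, 1.0]
weights = [0.0, 0.0]
rate = 0.2

for _ in range(200):                                   # passes over the corpus
    for x, t in zip(inputs, targets):
        y = sum(w * xi for w, xi in zip(weights, x))   # unit's output
        err = t - y                                    # compare to target
        weights = [w + rate * err * xi for w, xi in zip(weights, x)]

print(weights)   # converges toward [0.0, 1.0], which fits all three pairs
```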
Claims that ``it learns to speak the way that human babies do''
and so on are obviously bunk, since people don't initially learn
to read text, and because people also have to derive the
correlation between their motor stimulation (essentially the
voice synthesizer control level), the sound thereby produced and
the percepts that are returned via their ears. A measure of the
comparative difficulty is that programs which do text to speech
conversion extremely well have been in existence for several years
(DECtalk is the current avatar), but no program can yet even
reproduce an infant's use of auditory language. Certainly, no-one
can be claiming that a program that can learn to do the former
must consequently be able to learn to do the latter,
much less that the acquisition method that would be used is the
one used by human children.
The most interesting thing is that in my original contact with the
author of the project in question (I think), he never
mentioned this sort of comparison.
sigh....the original work was interesting, possibly even
progressive. But then here come the Today Show interviewers
looking for a BREAKTHROUGH. So they find (make) one and we hear
about another case of ai-hype. Everybody get ready for another
wave of flames.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: News Flash
Source: January 16 Wall Street Journal FIRST Page
"CPA firm Arthur Young unveils a computer system today that uses expert
systems to help the auditor focus on areas where risk of error is greatest.
The system could mean average savings of 10% in time and money, says
Arthur Young's Robert Temkin"
------------------------------
End of AIList Digest
********************
∂22-Jan-86 1604 LAWS@SRI-AI.ARPA AIList Digest V4 #12
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Jan 86 16:04:41 PST
Date: Wed 22 Jan 1986 13:10-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #12
To: AIList@SRI-AI
AIList Digest Thursday, 23 Jan 1986 Volume 4 : Issue 12
Today's Topics:
Natural Language - Modulated Kitchens and Superior Borders,
Humor - Pseudoscience Jargon,
Logic & Humor - Proof that P = NP,
Games - Othello Tournament Information
Literature - New Text on Natural Language Processing
----------------------------------------------------------------------
Date: 22 Dec 1985 1822-PST (Sunday)
From: Steven Tepper <greep@camelot>
Subject: modulated kitchens and superior borders
From a recent issue of the Chronicle:
"When you mount the cooker hood on a modulated kitchen,
please care that the superior border of the caliber is
on the inferior border of the incorporated board. When
you fix the cooker hood to the incorporated board, please
set this border on the wall up on the bottom of the
incorporated board and use the unhooped holes."
Instructions for fitting a stove hood made in Italy by the Zanussi
company. The Plain English Campaign in London has awarded the
directions its annual prize for the worst example of bureaucratic
language, citing an "incompetent and baffling translation from an
unknown language into sub-English."
[This should give the machine translation people something to
shoot for. -- KIL]
------------------------------
Date: Wed, 15 Jan 86 09:59 EST
From: Sonny Crockett <weltyc%rpicie.csnet@CSNET-RELAY.ARPA>
Subject: A good one on HAL
I just got the videotape of 2010, and figured out what Dr. Chandra
said about the reason HAL screwed up in the first mission. The major
problem most SF authors have is trying to come up with ways to express
advanced scientific things in a way that sounds very scientific...this
is a great one:
(Dr. Chandra has just finished explaining that HAL was given
conflicting orders, and was only trying to interpret them
the best he could)
"...HAL was trapped, more precisely he got caught in an H. Mobius
Loop, which is possible in autonomous call-seeking computers."
I thought it was funny, anyway...
-Chris
PS If anyone (like me) enjoys laughing at these kinds of "pseudo-science"
phrases, I recommend watching Dr. Who (most famous for "Multi-dimensional
Time/Space Vortex"), and Star Trek ("Hodgkins Theory for Parallel Planet
Development" is one of my favorites). I'm sure there are many others
as well.
------------------------------
Date: Fri, 27 Dec 85 16:11:46 pst
From: Alain Fournier <fournier@su-navajo.ARPA>
Reply-to: fournier@Navajo.UUCP (Alain Fournier)
Subject: Logic & Humor - Proof that P = NP
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
> From: Len <Lattanzi@SUMEX-AIM.ARPA>
>
> $15 to anyone who can prove P = NP.
>
> #8↑)
> Len
This is an old one, but what the hell, it's $15.00:
-----------------------------------
| Exactly 2 of the statements |
| in these 3 boxes are false |
| |
-----------------------------------
-----------------------------------
| |
| P != NP |
| |
-----------------------------------
-----------------------------------
| The statement in the first |
| box is true. |
| |
-----------------------------------
It is left to the reader to show that assuming statement 1 is true leads
to a contradiction, so 1 is false, therefore 3 is false, and 2 has to be false.
The same conclusion is reached if the truth value of 3 is examined.
So 2 is false, and P=NP, QED.
The $15 can be sent in my name to my favourite charity, the Douglas Hofstadter
Home for the Terminally Self-Referential. An accompanying note should specify
that I requested that my gift should have no accompanying note.
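[The case analysis left to the reader can also be checked mechanically
by enumerating all eight truth assignments for the three boxes. -- Ed.]

```python
from itertools import product

# Exhaustively check the three-box puzzle: s1, s2, s3 are the truth
# values of the statements; an assignment is consistent when each
# self-referential statement's value matches what it asserts.
consistent = []
for s1, s2, s3 in product([True, False], repeat=3):
    box1 = ([s1, s2, s3].count(False) == 2)   # "exactly 2 ... are false"
    box3 = s1                                 # "the statement in box 1 is true"
    # box 2 ("P != NP") places no internal constraint; s2 is just its value
    if s1 == box1 and s3 == box3:
        consistent.append((s1, s2, s3))

print(consistent)   # only (False, False, False) survives: box 2 is false
```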
------------------------------
Date: January 17, 1986, 5:51 PM.
From: <1gtlmkf%calstate.BITNET@WISCVM.WISC.EDU>
Subject: Othello Tournament Information
For anyone who might be interested in the upcoming Computer Othello
Tournament at CSU, Northridge on February 15-16:
You may contact the tournament organizers over BITNET at the following
addresses --
Brian Swift (AGTLBJS@CALSTATE.BITNET)
Marc Furon (1GTLMKF@CALSTATE.BITNET)
Any questions or requests for information about the tournament may be
sent to either of us at the addresses above. We look forward to a
successful tournament and hope to hear from any and all interested Othello
programmers.
Thanks to Kurt Godden for sending the announcement to AILIST.
Marc Furon
Yes, Othello is a trademark of CBS Toys.
------------------------------
Date: Tue, 14 Jan 86 09:52:55 EST
From: "Richard E. Cullingford" <rec%gatech.csnet@CSNET-RELAY.ARPA>
Subject: new AI text
This note is an announcement of a new AI book which may be of interest
to the readers of this newsgroup. The book is "Natural Language
Processing: A Knowledge Engineering Approach," and it will be
available from Rowman & Allanheld, Publishers, of Totowa, NJ,
early in the spring of 1986. The work is intended as a practical
introduction to a theory and technology for building natural
language text-processing interfaces to database management
or expert reasoning systems. The text has been in use, in manuscript, in
courses at Princeton University and Georgia Tech for the past two years,
and extensive course materials have been developed. A software system,
the NLP Toolkit, which runs all of the text's examples and is suitable
for experimentation by teachers and programmers, is also available
through the publisher. The Toolkit contains representation design tools, a
conceptual analyzer, a conceptual generator, a large shared dictionary,
and a knowledge-base management support package.
Questions regarding the book and the programs can be addressed to
the author, Richard E. Cullingford, at the School of Information &
Computer Science, Georgia Tech, Atlanta, GA 30332; at (404) 894-3227;
or gatech!rec (uucp) or rec@gatech (csnet). The book's table of contents
follows:
Table Of Contents
Natural Language Processing: A Knowledge Engineering Approach
Preface
Notes on the Use of This Book
Acknowledgments
Table of Contents
Table of Diagrams
Table of Figures
Chapter 1: Natural Language Processing: An Overview
1.0 Introduction
1.1 Related Fields: An Overview
1.1.1 NLP, Artificial Intelligence, and Knowledge Engineering
1.1.2 NLP and the Sciences of Language
1.2 NLP Efforts in AI
1.2.1 Early Efforts
1.2.2 Second Generation Systems
1.2.3 Third Generation Systems: A Look into the Future
1.3 Outline of the Book
Part I: A General-Purpose Language Processing Interface
Chapter 2: An Introduction to Representation Design
2.0 The Representation Problem
2.1 The Need for a Formal Representational System
2.2 Requirements on a Representational System
2.3 Introduction to ERKS
2.3.1 The ISA-Hierarchy of the Core System
2.3.2 Criteria for Selection of the Primitive Types
2.4 ERKS in LISP
2.5 The Maximal Inference-Free Paraphrase
2.6 Building a Model Corpus
2.7 A Simple Corpus
2.8 Primitive Actionals and Statives
2.9 Conceptual Relationships
2.10 A Representational Case Study: CADHELP
2.10.1 The CADHELP Microworld
2.10.2 A Typical Command
2.10.3 Knowledge Representation Issues
2.11 Summary
Chapter 3: Software Tools for Representation Design
3.0 Introduction
3.1 Navigating in an ISA-Hierarchy
3.2 Defining ERKS Types
3.3 Access and Updating Machinery
3.4 The def-wordsense Record Macro
3.5 Summary
Chapter 4: Surface-Semantic Conceptual Analysis
4.0 Introduction: Lexicon-Driven Analysis
4.1 A Simple Model of Sentence Structure
4.2 Production Systems, Requests, and Processing Overview
4.3 Request Pool Consideration
4.3.1 Analysis Environment
4.3.2 Request Types
4.4 Requests in More Detail
4.5 Morphological Fragments and "to be"
4.6 A Processing Example
4.7 Summary
Chapter 5: Problems in Conceptual Analysis
5.0 Introduction
5.1 Tri-Constituent Forms and Imbedded Sentences
5.1.1 Handling Indirect Objects
5.1.2 Infinitives and Gerunds
5.1.3 Relative Clauses
5.2 Prepositions and "to be," Revisited
5.3 Word Meaning Disambiguation
5.3.1 Pronominal Reference
5.4 Coordinate Constructions
5.5 Ellipsis Expansion
5.6 A Concluding Example
5.7 Summary
Chapter 6: Generating Natural Language from a Conceptual Base
6.0 Introduction
6.1 Overview of Generation Process
6.2 Dictionary Entries
6.3 Morphology and the Verb Kernel
6.3.1 Plural and Possessive Morphology
6.3.2 Subject-Verb Agreement and Modals
6.3.3 Tensing
6.3.4 Subject-Auxiliary Inversion
6.4 "Advanced" English Syntax
6.4.1 The Infinitive Construction
6.4.2 The Possessive Sketchifier
6.4.3 The Entity-Reference Sketchifier
6.5 A Processing Example
6.6 Summary
Part II: Building a Conversationalist
Chapter 7: Summarizing Knowledge Bases
7.0 Introduction: What to Say versus How to Say It
7.1 Explanations as Summaries
7.2 Explanations in CADHELP
7.3 Representational Overview
7.4 Concept Selection
7.5 An Example
7.6 Summary
Chapter 8: Knowledge-Base Management
8.0 Introduction
8.1 KB Organization
8.1.1 The Slot-Filler Tree
8.1.2 Slot-Filler Tree Construction
8.1.3 Index Quality
8.1.4 Best-First Ordering of KB Items
8.2 KB Search
8.2.1 The Tree Search Mechanism
8.3 Performance
8.4 Summary
Chapter 9: Commonsense Reasoning
9.0 Introduction: The Need for Reasoning in Language Understanding
9.1 Deductive Retrieval
9.2 YADR, Yet Another Deductive Retriever
9.3 The YADR Interface
9.4 The YADR Top Level
9.5 Logical Connectives in Antecedent Forms
9.6 Summary
Chapter 10: Putting It All Together: A Goal-Directed Conversationalist
10.0 Introduction
10.1 The ACE Microworld
10.2 A Model of Purposive Conversation
10.3 The Conversational Strategist
10.4 The Conversational Tactician
10.5 The Academic Scheduling Expert
10.6 More Problems in Language Understanding
10.6.1 Coordinate Constructions and Ellipses
10.6.2 Defining "And" for the Analyzer
10.6.3 Using Expectations during Analysis
10.7 More Problems in Language Generation
10.7.1 Asking Questions
10.7.2 Producing Coordinate Constructions
10.7.3 Generating Attributes, Absolute Times, Locales, and Names
10.8 Putting It All Together: A Session with ACE
10.9 Parting Words
Appendix I: The ERKS Types
Appendix II: Source for YADR, Yet Another Deductive Retriever
Appendix III: Glossary of Terms
Rich Cullingford
------------------------------
End of AIList Digest
********************
∂22-Jan-86 1833 LAWS@SRI-AI.ARPA AIList Digest V4 #13
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Jan 86 18:32:49 PST
Date: Wed 22 Jan 1986 13:17-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #13
To: AIList@SRI-AI
AIList Digest Thursday, 23 Jan 1986 Volume 4 : Issue 13
Today's Topics:
Seminars - Controlling Backward Inference (SRI) &
Automata Approach to Program Verification (MIT) &
Problem Solving for Distributed Systems (MIT) &
Problem-Solving Languages (CSLI) &
Pointwise Circumscription (SU) &
Methodological Issues in Speech Recognition (Edinburgh) &
Intuitionistic Logic Programming (UPenn)
----------------------------------------------------------------------
Date: Wed 15 Jan 86 15:22:41-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Controlling Backward Inference (SRI)
CONTROLLING BACKWARD INFERENCE
Dave Smith (DE2SMITH@SUMEX-AIM)
Stanford University
11:00 AM, MONDAY, January 20
SRI International, Building E, Room EJ228 (new conference room)
Effective control of inference is a critical problem in Artificial
Intelligence. Expert systems have made use of powerful
domain-dependent control information to beat the combinatorics of
inference. However, it is not always feasible or convenient to
provide all of the domain-dependent control that may be needed,
especially for systems that must handle a wide variety of inference
problems, or must function in a changing environment. In this talk a
powerful domain-independent means of controlling inference is
proposed. The basic approach is to compute expected cost and
probability of success for different backward inference strategies.
This information is used to select between inference steps and to
compute the best order for processing conjuncts. The necessary
expected cost and probability calculations rely on simple information
about the contents of the problem solver's database, such as the number
of facts of each given form and the domain sizes for the predicates
and relations involved.
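[The conjunct-ordering idea can be illustrated with a standard
expected-cost sketch. This is not the talk's algorithm, and the costs
and probabilities are invented: for independent conjuncts with attempt
cost c and success probability p, evaluated left to right and abandoned
at the first failure, an adjacent-exchange argument shows that sorting
ascending by c/(1-p) minimizes expected total cost. -- Ed.]

```python
from itertools import permutations

# Conjuncts as (name, attempt cost, probability of success).
conjuncts = [("a", 4.0, 0.9), ("b", 1.0, 0.5), ("c", 2.0, 0.2)]

def expected_cost(order):
    total, p_reach = 0.0, 1.0
    for _name, cost, p in order:
        total += p_reach * cost   # pay this conjunct's cost only if reached
        p_reach *= p              # later conjuncts reached only on success
    return total

# Greedy rule from the exchange argument: sort ascending by c / (1 - p).
greedy = [n for n, c, p in sorted(conjuncts, key=lambda t: t[1] / (1.0 - t[2]))]
# Brute force over all orders agrees on this example.
best = [n for n, c, p in min(permutations(conjuncts), key=expected_cost)]
print(greedy, best)
```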
------------------------------
Date: 01/16/86 17:18:02
From: LISA at MC.LCS.MIT.EDU
Subject: Seminar - Automata Approach to Program Verification (MIT)
[Forwarded from the MIT bboard by SASW@MC.LCS.MIT.EDU.]
DATE: Thursday, January 23, 1986
TIME: 3:45 p.m......Refreshments
4:00 p.m......Lecture
PLACE: NE43 - 512A
"AN AUTOMATA-THEORETIC APPROACH TO
AUTOMATIC PROGRAM VERIFICATION"
MOSHE Y. VARDI
IBM Almaden Research Center
We describe an automata-theoretic approach to automatic verification of
concurrent finite-state programs by model checking. The basic idea underlying
the approach is that for any temporal formula PHI we can construct an automaton
A(PHI) that accepts precisely the computations that satisfy PHI. The
model-checking algorithm that results from this approach is much simpler and
cleaner than tableaux-based algorithms. We also show how the approach can be
extended to probabilistic concurrent finite-state programs.
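[The automata-theoretic reduction can be caricatured in a few lines:
build an automaton for the negated property, form its synchronous
product with the program, and test whether an accepting (bad) state is
reachable. Real temporal-logic model checking uses Buechi automata over
infinite computations; this sketch uses plain finite automata, and
every state and transition here is invented. -- Ed.]

```python
# Labeled transitions of a tiny two-state program (it alternates a, b).
program = {("s0", "a"): "s1", ("s1", "b"): "s0"}
# Automaton for the negated property: it accepts on seeing "a" twice in a row.
bad_automaton = {("q0", "a"): "q1", ("q1", "a"): "qbad"}
accepting = {"qbad"}

def product_reaches_accepting(p0, q0):
    """Depth-first search of the synchronous product for an accepting state."""
    stack, seen = [(p0, q0)], set()
    while stack:
        p, q = stack.pop()
        if q in accepting:
            return True
        if (p, q) in seen:
            continue
        seen.add((p, q))
        for (pp, sym), p2 in program.items():
            if pp == p and (q, sym) in bad_automaton:
                stack.append((p2, bad_automaton[(q, sym)]))
    return False

print(product_reaches_accepting("s0", "q0"))   # -> False: no "aa" possible
```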
Albert Meyer
Host
------------------------------
Date: Thu 16 Jan 86 11:49:34-EST
From: John J. Doherty <JOHN@XX.LCS.MIT.EDU>
Subject: Seminar - Problem Solving for Distributed Systems (MIT)
[Forwarded from the MIT bboard by SASW@MC.LCS.MIT.EDU.]
Date: January 24, 1986
Place: NE43-512A
Time: Refreshments 2:15 P.M.
Seminar 2:30 P.M.
Problem Solving for Distributed Systems:
An Uplifting Experiment in Progress
Herb Krasner
Member of the Technical Staff
MCC
Software Technology Project
This presentation describes the empirical studies efforts of the STP
Design Process Group focusing on models of the design process.
Preliminary findings of the "lift" experiment are reported, from which
a model of expert designer behavior and high leverage characteristics is
being derived. Goals of the pilot study, experimental setup, problem,
data analysis technique, hypotheses and subsequent activities are
discussed. The "lift" experiment was initiated to examine the early
stages of design problem solving behavior prototypical of users of
the futuristic software design environment LEONARDO. It addresses
the large effect of individual differences on productivity data,
and differs from previous studies in its focus on large-scale
design problems.
Host: Irene Greif
------------------------------
Date: Wed 15 Jan 86 16:52:56-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Problem-Solving Languages (CSLI)
[Excerpted from the CSLI Newsletter by Laws@SRI-AI.]
CSLI ACTIVITIES FOR NEXT THURSDAY, January 23, 1986
2:15 p.m. CSLI Seminar
Computer Problem Solving Languages, Programming
Languages and Mathematics
Curtis Abbott (Abbott@xerox)
Computer Problem Solving Languages,
Programming Languages and Mathematics
by the
Semantically Rational Computer Languages Group
Programming languages are constrained by the requirement that their
expressions must be capable of giving rise to behavior in an
effective, algorithmically specified way. Mathematical formalisms,
and in particular languages of logic, are not so constrained, but
their applicability is much broader than the class of problems anyone
would think of ``solving'' with computers. This suggests, and I
believe, that formal languages can be designed that are connected with
the concerns associated with solving problems with computers yet not
constrained by effectiveness in the way programming languages are. I
believe that such languages, which I call ``computer problem solving
languages,'' provide a more appropriate evolutionary path for
programming languages than the widely pursued strategy of designing
``very high level'' programming languages, and that they can be
integrated with legitimate programming concerns by use of a
transformation-oriented methodology. In this presentation, I will
give several examples of how this point of view impacts language
design, examples which arise in Membrane, a computer problem solving
language I am in the process of designing. --Curtis Abbott
------------------------------
Date: 17 Jan 86 1639 PST
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Pointwise Circumscription (SU)
Pointwise Circumscription
Vladimir Lifschitz
Thursday, January 23, 4pm, MJH 252
(I have a few copies of the paper in my office, MJH 362).
ABSTRACT
Circumscription is logical minimization, that is, the
minimization of extensions of predicates subject to restrictions
expressed by predicate formulas. When several predicates are to be
minimized, circumscription is usually thought of as minimization with
respect to an order defined on vectors of predicates, and different
ways of defining this order correspond to different kinds of
circumscription: parallel and prioritized.
The purpose of this paper is to discuss the following
principle regarding logical minimization:
Things should be minimized one at a time.
This means, first of all, that we propose to express the
circumscription of several predicates by the conjunction of several
minimality conditions, one condition for each predicate. The
difference between parallel and prioritized circumscription will
correspond to different selections of predicates allowed to vary in
each minimization.
This means, furthermore, that we propose to modify the
definition of circumscription so that it will become an "infinite
conjunction" of "local" minimality conditions; each of these
conditions expresses the impossibility of changing the value of the
predicate from True to False at one point. (Formally, this "infinite
conjunction" will be represented by means of a universal quantifier).
This is what we call "pointwise circumscription".
We argue that this approach to circumscription is conceptually
simpler than the traditional ``global'' approach and, at the same
time, leads to generalizations with the additional flexibility and
expressive power needed in applications to the theory of commonsense
reasoning. Its power is illustrated, in particular, on a problem posed
by Hanks and McDermott, which apparently cannot be solved using other
existing formalizations of non-monotonic reasoning.
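As a rough sketch of the contrast being drawn (my reconstruction from the abstract, not a quotation from the paper), ordinary circumscription imposes one global minimality condition, while the pointwise version is a universally quantified conjunction of local ones:

```latex
% Ordinary ("global") circumscription of predicate P in theory A(P):
% P is minimal among all predicates satisfying A.
\mathrm{Circ}(A;P) \;=\; A(P) \,\wedge\, \forall p\,[\,A(p) \rightarrow \neg(p < P)\,]

% Pointwise version: an "infinite conjunction" of local minimality
% conditions, one per point x, expressed by a universal quantifier.
% It says the value of P cannot be changed from True to False at any
% single point x:
A(P) \,\wedge\, \forall x\,[\,P(x) \rightarrow \neg A(\lambda y.\;P(y) \wedge y \neq x)\,]
```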
------------------------------
Date: Tue, 21 Jan 86 10:44:43 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - Methodological Issues in Speech Recognition (Edinburgh)
Department of Artificial Intelligence, University of Edinburgh,
and Artificial Intelligence Applications Institute
Edinburgh Artificial Intelligence Seminars
Speaker - Dr. Henry Thompson, Dept. of A.I., University of Edinburgh
Title - Methodological issues in Speech Recognition
Abstract - What methodological issues arise from the belief that fully
automatic high quality unrestricted speech recognition is impossible,
when one has overall technical responsibility for a multi-year
multi-million pound Alvey Large Scale Demonstrator? I will give a
brief overview of the overall structure of the project, and discuss at
more length two basic issues:
- Why top-down vs. bottom-up is the wrong question, and selectional vs.
instructional interaction is the right question, and what the right
answer is.
- How giving up on fully automatic ... changes the way you do things
in surprising ways.
------------------------------
Date: Tue, 21 Jan 86 23:27 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Intuitionistic Logic Programming (UPenn)
From: Dale Miller <Dale@UPenn>
UPenn Math-CS Logic Seminar
An Intuitionistic Basis for Extending Logic Programming
Dale Miller
Tuesday 28 Jan 86, 4:30 - 6:00, 4E17 DRL
There is a very natural extension to Horn clauses which involves extending
the use of implication. This extension has a natural operational semantics
which is not sound with respect to classical logic. We shall show that
intuitionistic logic, via possible worlds semantics, provides the necessary
framework to give a sound and complete justification of this operational
semantics. This will be done by providing a least fixed-point construction of
a Kripke model. We shall also show how this logic can be used to provide
logic programming languages with a logical foundation for each of the
following programming features: program modules, recursive call memoizing,
and local environments (permanent vs. temporary asserts). This extension to
logic programming can also simulate various features of negation - not
through logical incompleteness as in negation-by-failure, but through
constructing proofs of a certain kind.
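As a concrete (and deliberately simplified) illustration of the operational semantics described above: a goal D => G is proved by adding the clause D to the program only for the duration of the proof of G. The propositional Python sketch below is illustrative only; its data structures and names are assumptions of this note, not taken from Miller's paper.

```python
# Minimal propositional sketch of Horn clauses extended with embedded
# implication. Goals are ('atom', name), ('and', g1, g2, ...), or
# ('implies', clause, goal); a program is a list of (head, body) clauses,
# where body is a tuple of subgoals.

def solve(goal, program):
    """Return True if `goal` follows from `program`."""
    kind = goal[0]
    if kind == 'atom':
        # Backchain over clauses whose head matches the atom.
        name = goal[1]
        for head, body in program:
            if head == name and all(solve(g, program) for g in body):
                return True
        return False
    if kind == 'and':
        return all(solve(g, program) for g in goal[1:])
    if kind == 'implies':
        # (D => G): extend the program with clause D only while proving G.
        _, clause, subgoal = goal
        return solve(subgoal, program + [clause])
    raise ValueError(f"unknown goal form: {kind}")

# Example: from the program {q :- p}, the goal (p => q) succeeds even
# though q alone does not -- the "local environment" reading of asserts.
prog = [('q', (('atom', 'p'),))]
print(solve(('atom', 'q'), prog))                          # False
print(solve(('implies', ('p', ()), ('atom', 'q')), prog))  # True
```

Note that the assumption made for (p => q) is discarded when the subproof finishes, which is exactly why classical logic does not justify this operational reading.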
------------------------------
End of AIList Digest
********************
∂24-Jan-86 1537 LAWS@SRI-AI.ARPA AIList Digest V4 #14
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Jan 86 15:36:44 PST
Date: Fri 24 Jan 1986 10:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #14
To: AIList@SRI-AI
AIList Digest Friday, 24 Jan 1986 Volume 4 : Issue 14
Today's Topics:
Conferences - Knowledge Acquisition for KB Systems &
Symposium on Logic Programming &
Office Information Systems '86 &
Uncertainty and KBS,
Course - Object-Oriented Programming
----------------------------------------------------------------------
Date: Thu, 16 Jan 86 11:20:57 pst
From: bcsaic!john@uw-june.arpa
Subject: Workshop - Knowledge Acquisition for KB Systems
Call for Participation
KNOWLEDGE ACQUISITION FOR KNOWLEDGE-BASED SYSTEMS WORKSHOP
Sponsored by
AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE (AAAI)
Banff, CANADA
November 3-7, 1986
The bottleneck in the process of building knowledge-based systems is usually
acquiring the appropriate problem-solving knowledge. The objective of this
workshop is to assemble theoreticians and practitioners of AI who recognize
the need for developing systems that assist the knowledge acquisition process.
To encourage vigorous interaction and exchange of ideas the workshop will be
kept small - about 30 participants. There will be individual presentations
and ample time for technical discussions. An attempt will be made to define
the state-of-the-art and the future research needs.
Papers are invited for consideration in all aspects of knowledge acquisition
for knowledge-based systems, including (but not restricted to):
o Transfer of expertise - systems which interview experts to obtain and
structure knowledge.
o Transfer of expertise - manual knowledge engineering interviewing
methods and techniques.
o Induction of knowledge from examples.
o Knowledge acquisition methodology.
Four copies of an extended abstract (up to 8 pages, double-spaced) or a
full-length paper should be sent to the workshop chairman before May 1, 1986.
Acceptance notices will be mailed by July 1. Revised abstracts should be
returned to the chairman by October 1, 1986, so that they may be bound
together for distribution at the workshop. Potential attendees should also
indicate their interest in chairing or participating in special topic
discussion sessions.
Co-Chairmen:
John Boose (send papers here)
Boeing Artificial Intelligence Center
Boeing Computer Services
M/S 7A-03
PO Box 24346
Seattle, Washington, USA, 98124
(206) 763-5811
and
Brian Gaines
Department of Computer Science
University of Calgary
2500 University Dr. NW
Calgary, Alberta, Canada, T2N 1N4
(403) 220-6015
Program and local arrangements committee:
Jeff Bradshaw, Boeing Computer Services
William Clancey, Stanford University
Cathy Kitto, Boeing Computer Services
Janusz Kowalik, Boeing Computer Services
John McDermott, Carnegie-Mellon University
Ryszard Michalski, Univ. of Illinois (tentative)
Art Nagai, Boeing Computer Services
Mildred Shaw, University of Calgary
John Boose, Boeing Artificial Intelligence Center
7A-03, PO Box 24346,
Seattle, Wa., 98124, (206) 763-5811
------------------------------
Date: Thu 9 Jan 86 09:04:20-MST
From: "Robert M. Keller" <Keller@UTAH-20.ARPA>
Subject: Conference - Symposium on Logic Programming
'86 SLP
Call for Papers
Third Symposium on Logic Programming
Sponsored by the IEEE Computer Society
September 21-25, 1986
Westin Hotel Utah
Salt Lake City, UT
The conference solicits papers on all areas of logic programming, including,
but not confined to:
Applications of logic programming
Computer architectures for logic programming
Databases and logic programming
Logic programming and other language forms
New language features
Logic programming systems and implementation
Parallel logic programming models
Performance
Theory
Please submit full papers, indicating accomplishments of substance and novelty,
and including appropriate citations of related work. The suggested page limit
is 25 double-spaced pages. Send eight copies of your manuscript no later than
15 March 1986 to:
Robert M. Keller
SLP '86 Program Chairperson
Department of Computer Science
University of Utah
Salt Lake City, UT 84112
Acceptances will be mailed by 30 April 1986. Camera-ready copy will be due by
30 June 1986.
Conference Chairperson Exhibits Chairperson
Gary Lindstrom, University of Utah Ross Overbeek, Argonne National Lab.
Tutorials Chairperson Local Arrangements Chairperson
George Luger, University of New Mexico Thomas C. Henderson, University of Utah
Program Committee
Francois Bancilhon, MCC William Kornfeld, Quintus Systems
John Conery, University of Oregon Gary Lindstrom, University of Utah
Al Despain, U.C. Berkeley George Luger, University of New Mexico
Herve Gallaire, ECRC, Munich Rikio Onai, ICOT/NTT, Tokyo
Seif Haridi, SICS, Sweden Ross Overbeek, Argonne National Lab.
Lynette Hirschman, SDC, Paoli Mark Stickel, SRI International
Peter Kogge, IBM, Owego Sten Ake Tarnlund, Uppsala University
------------------------------
Date: Tue, 21 Jan 86 19:22 EST
From: Hewitt@MIT-MC.ARPA
Subject: Conference - OIS-86
Because of the delay in the distribution of the call for papers for OIS-86 in
the Newsletter, we have decided to postpone the deadline for paper submission
from February 1 to March 1, 1986 in order to satisfy the requirements for
broad distribution of the call.
Enclosed please find the updated call for papers which reflects this change:
******************* C A L L F O R P A P E R S
* * ----------------------------------------------
* * Third ACM Conference On
* * OFFICE INFORMATION SYSTEMS
* OIS-86 *
* * October 6-8, 1986
* * Biltmore Plaza Hotel
* * Providence, RI
******************* -------------------------------------------------
General Chair: Carl Hewitt, Topics appropriate for this
MIT conference include (but are not
restricted to) the following as they
Program Chair: Stanley Zdonik, relate to OIS:
Brown University
Technologies including Display, Voice,
Treasurer: Gerald Barber, Telecommunications, Print, etc.
Gold Hill Computers
Human Interfaces
Local Arrangements: Andrea Skarra,
Brown University Deployment and Evaluation
An interdisciplinary conference on System Design and Construction
issues relating to office
information systems (OIS) sponsored Goals and Values
by ACM/SIGOIS in cooperation with
Brown University and the MIT Distributed Services and Applications
Artificial Intelligence Laboratory.
Submissions from the following Knowledge Bases and Reasoning
fields are solicited:
Distributed Services and Applications
Anthropology
Artificial Intelligence Indicators and Models
Cognitive Science
Computer Science Needs and Organizational Factors
Economics
Management Science Impact of Computer Integrated
Psychology Manufacturing
Sociology
The program committee includes:
Bob Allen Ray Panko
Bellcore University of Hawaii
Giuseppe Attardi Robert Rosin
University of Pisa Syntrex
James Bair Erik Sandewall
Hewlett Packard Linkoping University
Gerald Barber Walt Scacchi
Gold Hill Computers USC
Peter de Jong Andrea Skarra
MIT Brown University
Irene Greif Susan Leigh Star
MIT Tremont Research Institute
Sidney Harris Luc Steels
Georgia State University University of Brussels
Carl Hewitt Sigfried Treu
MIT University of Pittsburgh
Heinz Klein Dionysis Tsichritzis
SUNY University of Geneva
Fred Lochovsky Eleanor Wynn
University of Toronto Brandon Interscience
Fanya Montalvo Aki Yonezawa
MIT Tokyo Institute of Technology
Naja Naffah Stanley Zdonik
Bull Transac Brown University
Margrethe Olson
NYU
Professor J.C.R. Licklider of MIT will be the keynote speaker.
Unpublished papers of up to 5000 words (20 double-spaced pages) are
sought. The first page of each paper must include the following
information: title, the author's name, affiliations, complete mailing
address, telephone number and electronic mail address where
applicable, a maximum 150-word abstract of the paper, and up to five
keywords (important for the correct classification of the paper). If
there are multiple authors, please indicate who will present the paper
at OIS-86 if the paper is accepted. Proceedings will be distributed
at the conference and will later be available from ACM. Selected
papers will be published in the ACM Transactions on Office Information
Systems.
Please send eight (8) copies of the paper (which must arrive by March
1, 1986) to:
Prof. Stan Zdonik
OIS-86 Program Chair
Computer Science Department
Brown University
P.O. Box 1910
Providence, RI 02912
DIRECT INQUIRIES TO: Margaret H. Franchi (401) 863-1839.
IMPORTANT DATES
Deadline for Paper Submission (postponed 1 mo.) March 1, 1986
Notification of Acceptance: April 30, 1986
Deadline for Final Camera-Ready Copy: July 1, 1986
Conference Dates: October 6-8, 1986
------------------------------
Date: Thu 23 Jan 86 16:25:50-PST
From: RUSPINI@SRI-AI.ARPA
Subject: Conference - International Conference on Uncertainty and KBS
ANNOUNCEMENT
AND
CALL FOR PAPERS
INTERNATIONAL CONFERENCE ON
INFORMATION PROCESSING
AND
MANAGEMENT OF UNCERTAINTY
IN KNOWLEDGE-BASED SYSTEMS
Paris, France
June 30 - July 4 1986
Supported by:
Ministere de la Recherche
et de la Technologie
AFCET
Centre National de la Recherche Scientifique
Societe Francaise de Theorie de l'Information
Chairpersons:
Bernadette Bouchon (France)
Ronald R. Yager (United States)
Purpose of the Conference:
The aim of this Conference is to bring together researchers working
on information, uncertain data processing and related topics. The
management of uncertainty is at the heart of many knowledge-based
systems and a number of approaches have been developed for
representing these types of information.
It is hoped that this Conference will provide a useful exchange
between practitioners and theoreticians using these methods.
INTERNATIONAL PROGRAM COMMITTEE:
J. Bezdek (U.S.A.) S.Ovchinnikov (U.S.A.)
C. Carlsson (Finland) J. Pearl (U.S.A.)
A. De Luca (Italy) B. Picinbono (France)
Deng Ju-Long (China) J. Pitrat (France)
H.J. Efstathiou (G.B.) D. Ralescu (U.S.A.)
C. Gueguen (France) E. Ruspini (U.S.A.)
S. Guiasu (Canada) A.P. Sage (U.S.A.)
M.M. Gupta (Canada) G. Shafer (U.S.A.)
J. Kacprzyk (Poland) J.C. Simon (France)
J.L. Lauriere (France) M. Sugeno (Japan)
G. Longo (Italy) E. Trillas (Spain)
J. Lowrance (U.S.A.) R. Valee (France)
H.T. Nguyen (U.S.A.) L.A. Zadeh (U.S.A.)
H.J. Zimmermann (Germany) H. Akdag (France)
M. Mugur-Schacter (France) G. Cohen (France)
H. Prade (France) D. Dubois (France)
E. Sanchez (France) P. Godlewski (France)
M. Terrenoire (France)
TOPICS:
Knowledge Representation
Uncertainty in Expert Systems
Decision Making with Uncertainty
Fuzzy Logic and Fuzzy Reasoning
Representation of Commonsense Knowledge
Possibility Measures
Mathematical Theory of Evidence
Combinatorial Information Theory
Shannon Theory
Questionnaire Theory
Pattern Recognition and Image Processing
Clustering and Classification
Information Security
Fuzzy Sets in Operations Research
SUBMISSION INFORMATION
Papers will be selected on the basis of a 500 word abstract.
Communications will be in FRENCH or ENGLISH.
All abstracts should be sent in triplicate to the Conference Secretary:
Professor G. Cohen
International Conference I.P.M.U.
E.N.S.T.
46, rue Barrault
75013 PARIS, FRANCE
SUBMISSION DEADLINE: FEBRUARY 15, 1986
------------------------------
Date: Wed, 22 Jan 86 15:58:24 PST
From: tektronix!mako.TEK!jans@ucbvax.berkeley.edu
Subject: Course - Object-Oriented Programming
Tektronix will be holding four-day Introductory Smalltalk-80 classes at three
locations in February, March and April, 1986. This class will introduce the
student to object-oriented programming, enable the student to extend the
Smalltalk language by adding new methods and classes, and prepare the student
to write simple applications using the model-view-controller paradigm. Class
notes, two textbooks, and four lunches are included. Participants should be
experienced in at least one high level programming language.
Also planned is an Advanced Smalltalk-80 class, which will enable the student
to utilize advanced techniques of Smalltalk, including advanced model-view-
controller concepts, project management and team programming, multi-process
programming, and external processes and language interfaces. Participants will
be expected to understand the major classes of Smalltalk and should have three
to six months of Smalltalk programming experience.
Schedule:
Introductory: Gaithersburg, Maryland 18-21 February 1986
Introductory: Dallas, Texas 17-20 March 1986
Advanced: Beaverton, Oregon 1- 4 April 1986
Introductory: Irvine, California 14-17 April 1986
Contact:
Sandi Unger, (503)685-2941 for registration information, or
Mary Wells, (503)685-2947 for information on course content.
:::::: Artificial Intelligence Machines --- Smalltalk Project ::::::
:::::: Jan Steinman Box 1000, MS 60-405 (w)503/685-2956 ::::::
:::::: tektronix!tekecs!jans Wilsonville, OR 97070 (h)503/657-7703 ::::::
------------------------------
End of AIList Digest
********************
∂24-Jan-86 2029 LAWS@SRI-AI.ARPA AIList Digest V4 #15
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Jan 86 20:29:46 PST
Date: Fri 24 Jan 1986 10:20-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #15
To: AIList@SRI-AI
AIList Digest Friday, 24 Jan 1986 Volume 4 : Issue 15
Today's Topics:
Query - RT/PC Common Lisp,
Binding - Robert Leary @ San Diego Supercomputer Center,
Corrections - "Meta" Quote & MRS,
AI Tools - Representation of Uncertainty in MRS,
Policy - Gatewaying of AIList from Usenet Net.AI &
Relevance of Theoretical Computer Science to AI
----------------------------------------------------------------------
Date: 22 Jan 1986 19:14-EST
From: NGALL@G.BBN.COM
Subject: RT/PC Common Lisp Query
Has anyone heard anything about a Common Lisp for the RT/PC (IBM's
new Risc Engineering Workstation)? (By Lucid perhaps?)
-- Nick
------------------------------
Date: 22 January 1986 1326-PST (Wednesday)
From: west@nprdc.arpa (Larry West)
Subject: Robert Leary @ San Diego Supercomputer Center
In re Dallas Webster's short note about Dr. Robert Leary [AIList V4 #11]:
First, a minor correction: UCSD's zip code is 92093, not 92903.
Dr. Leary is with GA Technologies (San Diego) which operates the
Supercomputer center for the University. You might be able to reach
him thru UCSD, but I think GA Technologies would be a better bet.
The phone book lists:
GA Technologies, Inc.
10955 John Jay Hopkins Dr.
and this is my guess:
La Jolla, CA 92037
Phone (general info): 619-455-3000
An old, but possibly still valid, net address for him is:
leary%gav@lll-mfe.ARPA
Larry West, UCSD Institute for Cognitive Science, west@nprdc.ARPA
------------------------------
Date: Thu, 23 Jan 1986 08:31:26
From: rjb%allegra.btl.csnet@CSNET-RELAY.ARPA
Subject: V4 #11: Quote about "meta"
(Regarding W. Hamscher's response in V4 #11 to a query about MRS:)
Please - let's give credit where it's deserved: "Anything you can do,
I can do meta" should be attributed to David Levy (via Brian Smith).
I merely used it in a talk at AAAI-80 (hopefully attributing it
to David).
Ron Brachman
------------------------------
Date: Thu, 23 Jan 86 17:13:58 CST
From: veach%ukans.csnet@CSNET-RELAY.ARPA
Subject: Correction.
In a recent issue the full name MRS was incorrectly reported.
MRS = "Modifiable Representation System"
(source - "MRS Manual", Michael R. Genesereth et al.,
1980, Stanford Heuristic Programming Project)
------------------------------
Date: Thu 23 Jan 86 16:30:46-PST
From: Yung-Jen Hsu <Hsu@SU-SUSHI.ARPA>
Subject: MRS distribution & maintenance
Walter,
Contrary to what you said in your recent message, which appeared in
AILIST V4 #11, about the distribution and maintenance of MRS, I'm NOT the
person in charge of the matter. If anyone would like to get a copy of
the MRS system, I believe that the best person(s) to contact is Arthur
Whitney (whitney@sumex) and/or Michael Genesereth (genesereth@sumex).
Best regards.
Jane Hsu
------------------------------
Date: 22 Jan 86 16:00:21 PST (Wed)
From: whiting@sri-spam
Subject: MRS info.
Re:
Date: Fri, 17 Jan 86
From: Tom Scott
Subject: MRS
Can it (MRS) handle ... Certainty factors?
The MRS that is available for common distribution doesn't have the
facility for dealing with uncertainty. An implementation of Dempster's
Rule has been incorporated into a non-official version. There are some
fairly strong restrictions on this version, but an application using
this version has been implemented. It seems the situation is more
"MRS's official release doesn't include the ability to deal with
uncertainty, YET", than "MRS can't handle certainty factors".
[Note: "Can it handle Certainty factors?" has been generalized to "Does
MRS have the ability to deal with uncertainty?". Certainty factors are
generally associated with MYCIN/E-MYCIN's methodology for dealing with
uncertainty.]
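For readers unfamiliar with the scheme mentioned above: Dempster's rule combines two belief-mass assignments by multiplying the masses of intersecting hypothesis sets and renormalizing away the mass that falls on the empty set. The sketch below is the textbook rule over a small frame of discernment; it is illustrative only and is not drawn from the MRS implementation.

```python
# Hedged sketch of Dempster's rule of combination. A mass function maps
# frozensets of hypotheses to belief mass summing to 1.

from itertools import product

def combine(m1, m2):
    """Combine two mass functions by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    # Renormalize so the surviving masses again sum to 1.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

F = frozenset
m1 = {F({'flu'}): 0.6, F({'flu', 'cold'}): 0.4}
m2 = {F({'cold'}): 0.5, F({'flu', 'cold'}): 0.5}
print(combine(m1, m2))  # masses 3/7, 2/7, 2/7 after renormalization
```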
As an aside, Stuart Russell Esq., has put together a manual which is
quite good, "The Compleat (sic) Guide to MRS", Stanford Knowledge
Systems Laboratory Report No. KSL-85-12.
Kevin Whiting
------------------------------
Date: Wed, 22 Jan 86 16:40:07 PST
From: Kenneth I. Laws <AIList-Request@SRI-AI>
Subject: Resumption of AIList Gatewaying
Eric Fair, the Berkeley Postmaster, has been handling the gatewaying
of AIList to the Usenet mod.ai distribution. [The "mod" stands for
"moderated".] He has offered to gateway net.ai submissions back to
AIList if we wish.
AIList used to have such an arrangement until our SRI-UNIX gateway
broke. At that time AIList traffic dropped by about 50%, primarily
through the loss of cross-net discussions (as opposed to seminar
and conference announcements). I do not know whether net.ai continues
to carry a great deal of non-AIList traffic, nor whether there would
be an increase in useful interchanges if we again make it easy for
academic/foreign Usenet readers to submit material to AIList. I do not
know whether Usenet readers >>like<< having a "private" discussion
channel in addition to the AIList stream that they get.
I expect that the connection would increase my workload, but I am
willing to take on the moderation as long as no one objects to my just
ignoring net.ai comments that do not seem relevant. (Sending explicit
rejection notices involves numerous hassles, and hardly seems worth
the effort since the submitter has already reached his net.ai audience
and would be unaware of whether AIList also carried the text.)
So, does anyone feel strongly one way or the other? The default is
to go ahead with the connection, at least until it proves unmanageable.
(I would rather drop seminar notices than lose personal interaction.)
-- Ken Laws
------------------------------
Date: Wed, 22 Jan 86 15:57:16 -0200
From: mcvax!lifia!rit@seismo.CSS.GOV (Jean-Francois Rit )
Subject: Relevance of Theoretical Computer Science to AI
To: AIList-REQUEST@SRI-AI.ARPA
In article <8601151921.AA17286@ucbvax.berkeley.edu> you write:
> Today's Topics:
> Description - European Association for Theoretical Computer Science
> ....
> In our experience the only reason that a computer
> scientist who is either actively engaged or interested in
> theoretical computer science is not a member of EATCS...
I found this message quite interesting...At least because it made me wonder
why it was in mod.ai!
Our Laboratory (LIFIA) clusters research groups in Theoretical CS and in AI.
There are many CS labs in Grenoble, and AI could as well have been separated
from TCS. However, I find it quite hard to define any common interest other
than "doing the software for the future super-computer," which hardly leads to
any tight cooperation (this is my personal opinion only).
Furthermore, one of the leaders of EATCS is M. Nivat :
>TCS Editor: M. Nivat, Paris
>Past Presidents: M. Nivat, Paris (1972-1977)
I attended a semester-long course on CS that he taught, where I learned much
(:-) about automata and grammars but never heard the words AI. (It is said in
my lab that he is not a strong supporter of AI, but these are rumors that I
could not personally verify.)
So, are there any AI researchers who feel actively engaged or interested in
TCS? ↑↑↑↑↑↑↑↑
For example, in working or publishing in one of the following fields :
> Typical topics discussed during recent ICALP conferences are:
> computability, automata theory, formal language theory, analysis of
> algorithms, computational complexity, mathematical aspects of
> programming language definition, logic and semantics of programming
> languages, foundations of logic programming, theorem proving, software
> specification, computational geometry, data types and data structures,
> theory of data bases and knowledge based systems, cryptography, VLSI
> structures, parallel and distributed computing, models of concurrency
> and robotics.
↑↑↑↑↑↑↑↑ (oh oh! robotics indeed?)
> ... Behind all this lie the major problems of under-
> standing the nature of computation and its relation to computing
> methodology. While "Theoretical Computer Science" remains mathematical
> and abstract in spirit, it derives its motivation from the problems of
> practical computation.
I don't feel that a major problem for AI researchers is understanding the
nature of computation; I think the AI point of view is much broader (maybe too
much so), or at least OPEN toward the "real" universe.
I repeat: I'm not opposed to TCS, I just wonder which real links bind TCS and
AI. I'd like to know what other AI researchers, and TCS researchers (if they
read mod.ai (-:), think about that (that's why I submit this to the news).
Jean-Francois Rit
Laboratoire d'Informatique Fondamentale et d'Intelligence Artificielle
BP 68
38402 Saint-Martin d'Heres cedex
Disclaimer: This is only my postal address!
UUCP: ...{mcvax,vmucnam}!lifia!rit
[I was the one who forwarded the message to AIList -- perhaps
I have been unduly influenced by the AI "neats" here at SRI-AI.
I am a "scruffy" (or hacker or pragmatist), but there seem to be
plenty of people in AI who hold that the problems will fall apart if
and only if we solve the underlying difficult cases rigorously. There
are those in the Representation and Reasoning Group here at the AI
Center who consider automata theory an appropriate basis for robotic
perception and action. Theorem proving is popular with our planning
group and also underlies part of our natural language understanding
effort. Grammar and formal language theories are used in NL
understanding, although I don't know whether they are considered AI.
Semantics of [certain] programming languages has been a topic on
AIList and on the Prolog Digest, and may generate renewed interest
when CommonLoops and other object-oriented languages become commonly
available. Foundations of logic programming is an obvious match, and
computational geometry is important to those of us in vision research.
The theory of data bases is intermingled with data abstraction and
conceptual modeling, as well as with practical development of efficient
Prolog systems; it will become more important to AI as knowledge-based
systems become larger. Parallel and distributed computing (or, at least,
problem solving) are evidently of interest to the AIers on the PARSYM
discussion list. Models of concurrency are important in multiagent
planning.
A Stanford professor has requested that I not forward any more articles
from the "Theory Net" distribution. I will comply, but I do not agree
that AI is (or should be) disjoint from CS theory. The results of
CS research will be of use in AI, and the needs and theories of AI might
well inspire further CS research. -- KIL]
------------------------------
End of AIList Digest
********************
∂29-Jan-86 2343 LAWS@SRI-AI.ARPA AIList Digest V4 #16
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Jan 86 23:43:10 PST
Date: Wed 29 Jan 1986 20:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #16
To: AIList@SRI-AI
AIList Digest Thursday, 30 Jan 1986 Volume 4 : Issue 16
Today's Topics:
Journal Issue - Blackboard Models for AI in Engineering,
Seminars - Naive Physics: Knowledge in Pieces (UCB) &
Term Rewriting, Theorem Proving, Logic Programming (CSLI) &
The Algebra of Time Intervals (SRI) &
Machine Learning and Economics (RU) &
Semi-Applicative Programming (UPenn) &
Integrating Syntax and Semantics (Edinburgh) &
Feature Structures in Unification Grammars (UPenn)
----------------------------------------------------------------------
Date: Mon, 27 Jan 86 23:21:15 est
From: Michael Bushnell <mb@ohm.ECE.CMU.EDU>
Subject: Call for Papers - Blackboard Models for AI in Engineering
======================================================================
| CALL FOR PAPERS for the |
| |
| INTERNATIONAL JOURNAL FOR |
| |
| ARTIFICIAL INTELLIGENCE IN ENGINEERING |
| October, 1986 Special Issue |
| |
| Guest Editors: |
| Pierre Haren, INRIA, France. |
| Mike Bushnell, Carnegie-Mellon University |
| |
| Manuscripts in US should be sent to: |
| Mike Bushnell |
| Department of Electrical and Computer Engineering |
| Carnegie-Mellon University |
| Pittsburgh, PA 15213 |
| USA |
| (ARPAnet: mb@ohm.ece.cmu.edu) |
| Deadline for receiving manuscripts: April 1st, 1986 |
| |
======================================================================
We are soliciting papers for a special issue of the International Journal
for AI in Engineering. This issue will focus on the AI Blackboard model, as
applied to engineering problems. Papers describing the application of the
Blackboard model to problems in the disciplines of Electrical Engineering,
Computer Engineering, Chemical Engineering, Civil Engineering, Mechanical
Engineering, Metallurgy, Materials Science, Robotics, and areas of Computer
Science are appropriate. Papers describing applications to other
disciplines may also be appropriate. In addition, papers discussing AI
tools that are particularly appropriate for Engineering applications are
most welcome, along with book reviews, letters to the editor, conference
reports, and other relevant news.
All submissions must be original papers written in English and will be
refereed. The copyright of published papers will be vested with the
publishers. Contributions will be classified as research papers and
research notes, of up to 5000 equivalent words, or as review articles of up
to 10,000 equivalent words. Authors wishing to prepare review articles
should contact the editors in advance. Manuscripts should be typed
double-spaced with wide margins, on one side of the paper only, and
submitted in triplicate. The article should be preceded by a summary of
not more than 200 words describing the entire paper. A list of key words is
also required. The article title should be brief and stated on a separate
page with the author's names and addresses.
------------------------------
Date: Wed, 22 Jan 86 16:47:34 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Subject: Seminar - Naive Physics: Knowledge in Pieces (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, January 28, 11:00 - 12:30
[NB. New Location] 2515 Tolman Hall
Discussion: 12:30 - 1:30 [location TBA]
``Knowledge in Pieces''
Andrea A. diSessa
Math Science and Technology, School of Education
Abstract
Naive Physics concerns expectations, descriptions and
explanations about the way the physical world works that people
seem spontaneously to develop through interaction with it. A
recent upswing in interest in this area, particularly concern-
ing the relation of naive physics to the learning of school
physics, has yielded significant interesting data, but little
in the way of a theoretical foundation. I would like to pro-
vide a sketch of a developing theoretical frame together with
many examples that illustrate it.
In broad strokes, one sees a rich but rather shallow (in a
sense I will define), loosely coupled knowledge system with
elements that originate often as minimal abstractions of common
phenomena. Rather than a "change of theory" or even a shift in
content of the knowledge system, it seems that developing
understanding of classroom physics may better be described in
terms of a change in structure that includes selection and
integration of naive knowledge elements into a system that is
much less data-driven, less context dependent, more capable of
"reliable" (in a technical sense) descriptions and explanations.
In addition I would like to discuss some hypothetical
changes at a systematic level that do look more like changes of
theory or belief. Finally, I would like to consider the potential
application of this work to other domains of knowledge,
and the relation to other perspectives on the problem of
knowledge.
------------------------------
Date: Wed 22 Jan 86 17:32:26-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Term Rewriting, Theorem Proving, Logic Programming (CSLI)
[Excerpted from the CSLI Newsletter by Laws@SRI-AI.]
CSLI ACTIVITIES FOR NEXT THURSDAY, January 30, 1986
2:15 p.m. CSLI Seminar
Ventura Hall Term Rewriting Systems and Application to Automated
Trailer Classroom Theorem Proving and Logic Programming
Helene Kirchner (Kirchner@sri-ai)
Term Rewriting Systems and Application to
Automated Theorem Proving and Logic Programming
Helene Kirchner
Term rewriting systems are sets of rules (i.e. directed equations)
used to compute equivalent terms in an equational theory. Term
rewriting systems are required to be terminating and confluent in
order to ensure that any computation terminates and does not depend on
the choice of applied rules. Completion of term rewriting systems
consists of building, from a set of non-directed equations, a
confluent and terminating set of rules that has the same deductive
power. After a brief description of these two notions, their
application in two different domains is illustrated:
- automated theorem proving in equational and first-order
logic,
- construction of interpreters for logic programming languages
mixing relational and functional features.
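The notions in the abstract can be illustrated with a toy rewriting engine (a Python sketch; the Peano-addition rules, the tuple encoding of terms, and the innermost rewriting strategy are all choices made for this example, not anything from the talk):

```python
# A toy term-rewriting engine: terms are nested tuples, variables are
# strings beginning with "?".  The two rules implement Peano addition
# (0 + x -> x,  s(x) + y -> s(x + y)); this rule set is terminating
# and confluent, so every term has a unique normal form.
RULES = [
    (("add", ("0",), "?x"), "?x"),
    (("add", ("s", "?x"), "?y"), ("s", ("add", "?x", "?y"))),
]

def match(pat, term, env):
    """Extend `env` so that substituting it into `pat` yields `term`."""
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in env:
            return env if env[pat] == term else None
        return {**env, pat: term}
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pat == term else None

def subst(pat, env):
    """Instantiate the variables of `pat` from `env`."""
    if isinstance(pat, str) and pat.startswith("?"):
        return env[pat]
    if isinstance(pat, tuple):
        return tuple(subst(p, env) for p in pat)
    return pat

def rewrite(term):
    """Normalize `term` innermost-first; terminates because the rules do."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t) for t in term)
    for lhs, rhs in RULES:
        env = match(lhs, term, {})
        if env is not None:
            return rewrite(subst(rhs, env))
    return term
```

Because the system is confluent, any other rule-application order would compute the same normal form.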
------------------------------
Date: Thu 23 Jan 86 11:50:10-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - The Algebra of Time Intervals (SRI)
THE ALGEBRA OF TIME INTERVALS
Peter Ladkin (LADKIN@KESTREL)
Kestrel Institute
11:00 AM, MONDAY, January 27
SRI International, Building E, Room EJ228 (new conference room)
We build on work of James Allen (Maintaining Knowledge about Temporal
Intervals, CACM Nov 1983), who suggested a calculus of time intervals.
Allen's intervals are all convex (no gaps). We shall present a
taxonomy of *natural* relations between non-convex [i.e.,
non-contiguous] intervals, and illustrate the expressiveness of this
subclass, with examples from the domain of project management. In
collaboration with Roger Maddux, we have new mathematical results
concerning both Allen's calculus, and our own. We shall present as
many of these as time permits.
The talk represents work in progress. We are currently designing and
implementing a time expert for the Refine system at Kestrel Institute,
which will include the interval calculus.
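For readers unfamiliar with Allen's calculus, the thirteen relations between convex intervals can be sketched directly (a Python illustration; the numeric (start, end) encoding is an assumption of this sketch, not Kestrel's implementation):

```python
# Classify the Allen relation holding between two convex intervals,
# given as (start, end) pairs.  Relation names follow Allen (CACM 1983).
def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    assert a1 < a2 and b1 < b2, "intervals must have positive extent"
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if b2 < a1:  return "after"
    if b2 == a1: return "met-by"
    if (a1, a2) == (b1, b2): return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 < a2 < b2: return "during"
    if a1 < b1 < b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"
```

The non-convex intervals of the talk are unions of such convex pieces, so their relations are considerably richer than these thirteen.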
------------------------------
Date: 22 Jan 86 09:18:06 EST
From: Tom <mitchell@RED.RUTGERS.EDU>
Subject: Seminar - Machine Learning and Economics (RU)
[Forwarded from the Rutgers bboard by Laws@SRI-AI.]
ML Colloquium talk
Title: Market Traders: Intelligent Distributed Systems
In an Open World
Speaker: Prof. Spencer Star
Laval University, Quebec
Date: Friday, Jan 24
Time: 11 am
Location: Hill 423
Professor Spencer Star is a computer scientist/economist who
works on simulating economic markets. He will be spending the coming
year on sabbatical at Rutgers to work on incorporating a machine
learning component into his current market simulations. He is
visiting now in order to meet the department and to get some feedback
on his current research ideas on learning. Below is part of an
abstract from his recent paper. [...]
-Tom Mitchell
Market Traders: Intelligent Distributed Systems In an Open World
Although markets are at the heart of modern microeconomics, there has
been relatively little attention paid to disequilibrium states and to
the decision-making rules used by traders within markets. I am
interested in the procedures that traders use to determine when and
how much they will bid, how they adapt their behaviour to a changing
market environment, and the effects of their adaptive behaviour on the
market's disequilibrium path. This paper reports on research to study
these questions with the aid of a computer program that represents a
market with interacting and independent knowledge-based traders. The
program is called TRADER.
In a series of experiments with TRADER I find that market efficiency
requires a minimum number of intelligent traders with a capacity to
learn, but when their knowledge is reflected in the market bids and
asks, naive traders can enter the markets and sometimes do better than
the expert traders. Moreover, the entrance of naive traders in a
market that is already functioning efficiently does not degrade the
market's performance. Since learning by independent agents appears to
be a key element in understanding and using open systems, the focus of
future research will be on studying learning and adaptive processes by
intelligent agents in open systems.
------------------------------
Date: Tue, 28 Jan 86 15:41 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Semi-Applicative Programming (UPenn)
SEMI-APPLICATIVE PROGRAMMING: AN EXAMPLE
N.S. Sridharan
BBN Labs, AI Department, Cambridge MA
3pm Thursday, January 30, 1986
216 Moore, University of Pennsylvania
Most current parallel programming languages are designed with a sequential
programming language as the base language and have added constructs that allow
parallel execution. We are experimenting with an applicative base language
that has implicit parallelism everywhere, and then we introduce constructs that
inhibit parallelism. The base language uses pure LISP as a foundation and
blends in interesting features of Prolog and FP. Proper utilization of
available machine resources is a crucial concern in functional programming. We
advocate several techniques of controlling the behavior of functional programs
without changing their meaning or functionality: program annotation with
constructs that have benign side-effects, program transformation and adaptive
scheduling. This combination yields us a semi-applicative programming language
and an interesting programming methodology.
In this talk we give some background information on our project, its aims and
scope and report on work in progress in the area of parallel algorithms for
context-free parsing.
Starting with the specification of a context-free recognizer, we have been
successful in deriving variants of the recognition algorithm of
Cocke-Kasami-Younger. One version is the CKY algorithm in parallel. The
second version includes a top-down predictor to limit the work done by the
bottom-up recognizer. The third version uses a cost measure over derivations
and produces minimal cost parses using a dynamic programming technique. In
another line of development, we arrive at a parallel version of the Earley
algorithm.
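The bottom-up recognizer underlying these variants can be sketched sequentially (a Python illustration only; the toy grammar is invented for the example, and the talk's versions are parallel derivations from a specification, not this code):

```python
# Sequential CKY recognition for a grammar in Chomsky normal form.
# `binary` holds rules A -> B C; `lexical` holds rules A -> word.
def cky_recognize(words, binary, lexical, start="S"):
    n = len(words)
    # chart[i][j] = set of nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {a for a, word in lexical if word == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for a, b, c in binary:
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(a)
    return start in chart[0][n]
```

All cells of a given span are independent of one another, which is what makes the algorithm a natural candidate for the implicit parallelism described above.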
------------------------------
Date: Wed, 29 Jan 86 10:19:18 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - Integrating Syntax and Semantics (Edinburgh)
EDINBURGH AI SEMINARS
Date: 29th January 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room - F10
80 South Bridge
EDINBURGH.
Dr. Ewan Klein, Centre for Cognitive Studies, University of Edinburgh
will give a seminar entitled - "Integrating syntax and semantics :
unification categorial grammar as a tool for natural language
processing".
This talk will report on work carried out at the Centre for Cognitive
Science by Henk Zeevat, Jo Calder and Ewan Klein as part of an ESPRIT
project on natural language and graphics interfaces to a knowledge-base.
In recent years there has been a surge of interest in syntactic
parsers which exploit linguistically-motivated non-transformational
grammar formalisms: instances are the GPSG chart parser at
Hewlett-Packard, Palo Alto, and the PATR-II parser at SRI, Menlo Park.
By contrast, progress in the development of tractable, truth-conditional
semantic formalisms for parsing has lagged behind.
Unification categorial grammar (UCG) employs three resources which
significantly improve this situation. The first is Kamp's theory of
Discourse Representation: this is essentially a first-order calculus
which nevertheless provides a more elegant treatment of NL anaphora and
quantification than standard first-order logic.
Second, the grammar encodes both syntactic and semantic information in
the same data structures, namely directed acyclic graphs, and
manipulates them with the same operation, namely unification. Third, the
fundamental grammar rule is that of categorial grammar, namely
functional application. Since the grammar objects contain both
syntactic and semantic information, any rule application will
simultaneously produce syntactic and semantic results.
UCG translates readily into a PATR-like declarative formalism, for
which Calder has written a Prolog implementation called PIMPLE.
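The unification operation at the heart of such grammars can be sketched over simple feature structures (a Python illustration; real UCG/PATR structures are directed acyclic graphs with shared, reentrant nodes, which plain nested dicts deliberately do not capture):

```python
# Unification of simple feature structures: nested dicts with atomic
# leaf values.  Two structures unify by merging their features; they
# fail to unify if any shared feature carries conflicting values.
def unify(f, g):
    if f == g:
        return f
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for key, val in g.items():
            if key in out:
                sub = unify(out[key], val)
                if sub is None:
                    return None          # feature clash deeper down
                out[key] = sub
            else:
                out[key] = val           # g contributes a new feature
        return out
    return None                          # clash of atomic values
```

Because the same operation merges syntactic and semantic features alike, a single rule application can build both kinds of information at once, as the abstract describes.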
------------------------------
Date: Tue, 28 Jan 86 15:41 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Feature Structures in Unification Grammars (UPenn)
LOGICAL SPECIFICATIONS FOR FEATURE
STRUCTURES IN UNIFICATION GRAMMARS
William C. Rounds and Robert Kasper, University of Michigan
3pm Tuesday, February 4, 1986
216 Moore, University of Pennsylvania
In this paper we show how to use a simple modal logic to give a complete
axiomatization of disjunctively specified feature or record structures commonly
used in unification-based grammar formalisms in computational linguistics. The
logic was originally developed as a logic to explain the semantics of
concurrency, so this is a radically different application. We prove a normal
form result based on the idea of Nerode equivalence from finite automata
theory, and we show that the satisfiability problem for our logical formulas is
NP-complete. This last result is a little surprising since our formulas do not
contain negation. Finally, we show how the unification problem for
term-rewriting systems can be expressed as the satisfiability problem for our
formulas.
------------------------------
End of AIList Digest
********************
∂30-Jan-86 0155 LAWS@SRI-AI.ARPA AIList Digest V4 #17
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 30 Jan 86 01:54:52 PST
Date: Wed 29 Jan 1986 20:54-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #17
To: AIList@SRI-AI
AIList Digest Thursday, 30 Jan 1986 Volume 4 : Issue 17
Today's Topics:
Queries - LISP-Based COBOL Parser or Compiler & AI Koans,
AI Tools - Common Lisp for RT PC,
Fiction - Pseudoscience Jargon in 2010,
Policy - Theoretical CS,
Games & Expert Systems - Hangman,
Reports - MRS Manual & ISSCO Working Papers
----------------------------------------------------------------------
Date: Tue 28 Jan 86 21:06:57-CST
From: John Hartman <CS.HARTMAN@R20.UTEXAS.EDU>
Subject: COBOL parser or compiler needed in Lisp environment
Does anyone know of a Cobol parser or compiler that is written in
LISP? (or PASCAL or will otherwise run on a LISP machine or DEC-20)
[This is not a joke!]
I'm working on a program understanding/program transformation
system. The target language at the moment is Cobol because
there are lots of unstructured Cobol programs and commercial systems
that attempt to restructure them automatically. AI program
understanding can improve the process. To demonstrate this, I need a
Cobol parser, and would rather find one than build one. Does anyone
have any pointers?
Thanks,
John Hartman
------------------------------
Date: Tue, 28 Jan 86 14:14:03 PST
From: "Douglas J. Trainor" <trainor@LOCUS.UCLA.EDU>
Subject: ai koans
Has anyone heard any good ai koans over the past three years???
[][] Douglas J. Trainor
[][] a pair of size 9 capri pants
------------------------------
Date: Sat, 25 Jan 1986 11:43 EST
From: "Scott E. Fahlman" <Fahlman@C.CS.CMU.EDU>
Subject: Common Lisp for RT PC
In response to Nick Gall's query about Common Lisp for the RT PC:
We at CMU have been working behind the scenes for some time to port our
Spice/Accent operating system from the now-defunct Perq machine to the
new IBM workstation, now dubbed the RT PC. As a part of that effort, we
have ported the Spice Lisp implementation of Common Lisp, including the
Hemlock editor. This port is mainly the work of Dave McDonald, with
assists from Rob Maclachlan and Skef Wholey. Lisp and Hemlock are now
running pretty well, with only a few finishing touches to be added and a
lot of tuning to be done. There are still some holes in the Accent
operating system for this machine, but we are working feverishly to
patch them up.
We are in the process of taking some benchmarks on the Lisp now. Early
indications show the speed of the pre-tuning RT PC Lisp to be roughly in
the ballpark (give or take a factor of two) of the Symbolics 3600 and
the Sun 3, though you have to be careful with declarations and
give up most of the runtime checking to go that fast. (Also necessary
on other stock hardware like Sun, but not on Symbolics.)
Please do not flood us with requests for this system. The Lisp is not
particularly easy to port over to any flavor of Unix, and Accent is not yet
ready for use outside the friendly confines of CMU. At some point in
the future, we may make the whole package available WITHOUT ANY SUPPORT,
for users elsewhere who can tolerate unsupported university-quality
software, but before we do that we will have to think very hard about
how to minimize the hassles to all concerned. If we do that, I'll see
that people reading this list hear about it.
IBM has not announced any plans for introducing a supported Common Lisp
product on the RT PC's officially sanctioned unix-based operating
system. I believe that there would be great demand for such a product,
but what their plans are I can't say.
-- Scott
------------------------------
Date: Wed 22 Jan 86 18:57:57-PST
From: Bill Poser <POSER@SU-CSLI.ARPA>
Subject: Pseudoscience Jargon
What exactly is a "call-seeking" computer? Anything to do with a
"come-from" statement?
------------------------------
Date: Tue 28 Jan 86 07:26:53-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: 2010 and H-Mobius Loops
I've checked my copy of 2010 and reviewed Dr. Chandra's explanation
of HAL's paranoia. The H-Mobius Loop phenomenon often occurs in
"autonomous GOAL-seeking programs". Not a bad lay-description of
a program that got confused as to what to do next?
--ted
------------------------------
Date: Mon, 27 Jan 86 12:52:16 CST
From: veach%ukans.csnet@CSNET-RELAY.ARPA
Subject: Comment on EATCS.
Concerning the posting of the "European Association for
Theoretical Computer Science" announcement in vol 4:6,
I would like to make the following comments:
1) I agree whole-heartedly with the editorial comment
which Ken Laws made at the end of vol 4:15 (except
for his acquiescence to the Stanford professor's request
that similar postings not be made in the future).
2) My reading of this Digest leads me to believe that
the contributors and the readers as a whole span a
wide range of interests. This diversity of interests
has been with AI since its beginning and indeed is what
makes AI what it is. We should recognize that with such
a variety of research in AI (from vision to mathematical
logic; design and fabrication of robotic limbs to analysis
of cognitive processes; etc...) there is and should be a
tremendous pool of resources which we individually draw
from and collectively share. One does not have to look far
to find common ground among researchers who delve into
such distinct subjects (graph theory, predicate calculus,
statistical analysis, etc.).
In conclusion, rather than restrict the flow of information, I hope
that as we see information which could benefit the community, we would
share it.
Glenn Veach (veach@ukans.csnet)
------------------------------
Date: 26 January 1986 1902-PST (Sunday)
From: west@nprdc.arpa (Larry West)
Subject: Theoretical CS vis-a`-vis AI
In AIList V4 #15, Jean-Francois Rit said:
``I don't feel that a major problem for AI researchers is understanding
the nature of computation, I think the AI point of view is much (maybe
too much) broader or at least OPEN toward The "real" universe.''
I agree that those who are doing Expert Systems or similar
kinds of programming need not worry too much about what a
computation is nor how it is achieved. But those in Cognitive
Science -- those interested in how brains do the things
they do so well -- might well be interested in formalisms
to help grasp the underlying processes of computation. On
the other hand, my prejudice is that these are not yet
understood in Theoretical Computer Science, either, and may
not even be of interest to those in the field (TCS).
Still, Parallel Distributed Processing or Connectionism seems
to hold much promise for lower-level information processing,
and perhaps higher-level as well, though that's harder to
see at this point. See, e.g., Hinton & Anderson's *Parallel
Models of Associative Memory* (Erlbaum, 1981), or Hinton's and
Feldman's articles in the April 1985 BYTE magazine, or Minsky
and Papert's *Perceptrons* or ... well, further references
supplied on demand.
My opinion would thus be not to exclude TCS out of hand,
but don't go out of your way (KIL) looking for articles/
messages/seminar announcements relevant to AIList, either.
Larry West (programmer) west@nprdc.ARPA
UCSD Institute for Cognitive Science
La Jolla, CA 92093
[That seems a fair summary of the feedback I've received, and
of the general AIList screening policy. -- KIL]
------------------------------
Date: Wed, 29 Jan 86 22:19:21 EST
From: Moorthy <moorthy%rpics.csnet@CSNET-RELAY.ARPA>
Subject: Hangman
We have developed a computer program to play hangman by itself. Here
the computer both guesses a word and tries to find what the guessed
word is. This program is a variation of hangman available under unix
4.2. The program to guess the words is partly rule based (these rules
are obtained by talking to an "expert") and partly searches the
dictionary judiciously. The programs are written in C and use system
calls to AWK for searching various subsets of the dictionary. We have
tested the program fairly exhaustively and it plays reasonably well.
If anyone is interested in knowing more about the program, you could
contact moorthy@rpics. The developers of this program are Patrick
Harubin, a junior in Computer Science at R.P.I and myself.
Krishnamoorthy
Department of Computer Science
R.P.I., Troy NY 12181.
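A dictionary-filtering guesser of the general kind described above can be sketched briefly (the actual program used C and AWK with rules elicited from an expert; this Python sketch and its four-word dictionary are purely illustrative):

```python
from collections import Counter

# One guessing step of a dictionary-driven hangman player.
def consistent(word, pattern, guessed):
    """True if `word` could still be the hidden word."""
    if len(word) != len(pattern):
        return False
    for p, c in zip(pattern, word):
        if p == "_":
            if c in guessed:       # a guessed letter would already be revealed
                return False
        elif p != c:               # revealed letters must match exactly
            return False
    return True

def next_guess(pattern, guessed, dictionary):
    """Guess the letter occurring in the most still-consistent words."""
    candidates = [w for w in dictionary if consistent(w, pattern, guessed)]
    counts = Counter(c for w in candidates for c in set(w) if c not in guessed)
    return counts.most_common(1)[0][0] if counts else None
```

Here `pattern` uses "_" for unrevealed positions, e.g. "a___e"; the rule-based part of the actual program would sit on top of a filter like this.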
------------------------------
Date: Tue 28 Jan 86 12:28:22-PST
From: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>
Subject: mrs manual
The Compleat Guide to MRS is now available as a Stanford CS report,
number STAN-CS-85-1080. To obtain a copy send mail to Kathy Berg
(BERG@SCORE.ARPA) or write to her at Comp Sci Dept, Stanford, CA 94305.
Stuart Russell (RUSSELL@SUMEX)
------------------------------
Date: Mon, 27 Jan 86 13:24:44 pst
From: Mike Rosner <rosner%cui.unige.chunet%ubc.csnet@CSNET-RELAY.ARPA>
Subject: ISSCO working papers
Fondazione Dalle Molle
Geneva
ISSCO
WORKING PAPERS
No. 46 (1981)
M Rosner
Three Strategic Goals in Conversational Openings
This paper tries to explain a short transcript of a
conversational opening as completely as possible within the
framework which takes conversational behaviour as defined by the
operation of a sophisticated planning mechanism. It is argued
that a critical role is played by the satisfaction, for each
participant, of three strategic goals relating to attention,
identification, and greeting. Additional tactics for gaining
information are also described as necessary to account for this
transcript.
No. 47 (1983)
F di Primio & Th Christaller
A Poor Man's Flavor System
This paper is the result of an attempt to understand 'flavors',
the object oriented programming system in Lispmachine Lisp. The
authors argue that the basic principles of such systems are not
easily accessible to the programming public, because papers on
the subject rarely discuss concrete details. Accordingly, the
authors' approach is pedagogical, and takes the form of a
description of the evolution of their own flavor system. An
appendix contains programming examples that are sufficiently
detailed to enable an average Lisp programmer to build a flavor
system, and experiment with the essential concepts of
object-oriented programming.
No. 48 (1984)
Eric Wehrli
A Government-Binding Parser for French
This paper describes a parser for French based on an adaptation
of Chomsky's Government and Binding theory. Reflecting the
modular conception of GB-grammars, the parser consists of
several modules corresponding to some of the subtheories of the
grammar, such as X bar, binding, etc. Making extensive use of
lexical information and following strategies which attempt to
take advantage of the basic properties of natural languages,
this parser is powerful enough to produce all of the grammatical
structures of sentences for a fairly substantial subset of
French. At the same time, it is restricted enough to avoid a
proliferation of alternative analyses, even with highly complex
constructions. Particular attention has been paid to the problem
of the grammatical interpretation of wh-phrases, to clitic
constructions, as well as to the organisation and management of
the lexicon.
No 49 (1985)
Patrick Shann
AI Approaches to Machine Translation
This paper examines some experimental AI systems that were
specifically developed for machine translation (Wilks'
Preference Semantics, the Yale projects, Salat and CONTRA). It
concentrates on the different types of meaning representation
used, and the nature of the knowledge used for the solution of
difficult problems in MT. To explore particular AI approaches,
the resolution of several types of ambiguity is discussed from
the point of view of different systems.
No. 50 (1985)
Beat Buchmann & Susan Warwick
Machine Translation: Pre-ALPAC history, Post-ALPAC overview
This paper gives a historical overview of the field of Machine
Translation (MT). The ALPAC report, the now well-known landmark
in the history of MT, serves to delimit the two sections of this
paper. The first section, Pre-ALPAC history, looks in some
detail at the hopeful beginnings, the first euphoric
developments, and the onsetting disillusionment in MT. The
second section, Post-ALPAC overview, describes more recent
developments on the basis of current prototype and commercial
systems. It also reviews some of the basic theoretical and
practical issues in the field.
No 51 (1985)
Rod Johnson & Mike Rosner
Software Engineering for Machine Translation
In this paper we discuss the desirable properties of a software
environment for MT development, starting from the position that
successful MT depends on a coherent theory of translation. We
maintain that such an environment should not just provide for
the construction of instances of MT systems within some
preconceived (and probably weak) theoretical framework, but
should also offer tools for rapid implementation and evaluation
of a variety of experimental theories. A discussion of some
potentially interesting properties of theories of language and
translation is followed by a description of a prototype software
system which is designed to facilitate practical experimentation
with such theories.
Requests for these papers should be addressed to
ISSCO working papers
54 route des Acacias
1227 Geneva Switzerland
The price per paper, including air mail, is SFr 10 (or
equivalent). Cheques should be made payable to "Institut Dalle
Molle"
------------------------------
End of AIList Digest
********************
∂30-Jan-86 0336 LAWS@SRI-AI.ARPA AIList Digest V4 #18
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 30 Jan 86 03:36:30 PST
Date: Wed 29 Jan 1986 21:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #18
To: AIList@SRI-AI
AIList Digest Thursday, 30 Jan 1986 Volume 4 : Issue 18
Today's Topics:
Machine Learning - Self Knowledge & Perceptrons,
Theory - Definition of Symbol
----------------------------------------------------------------------
Date: Thu 23 Jan 86 04:46:01-PST
From: Bill Park <PARK@SRI-AI.ARPA>
Subject: Re: Speech Learning
This stuff about Sejnowski's speaking reminds me eerily of the part
of Asimov's @i{I, Robot} that tells how Susan Calvin's career began:
From "Robbie," where Susan is observing a little girl named Gloria
trying to get some help during a tour of the Museum of Science and
Industry ...
"The Talking Robot was a @i{tour de force}, a thoroughly
impractical device, possessing publicity value only. Once an
hour, an escorted group stood before it and asked questions
of the robot engineer in charge in careful whispers. Those
the engineer decided were suitable for the robot's circuits
were transmitted to the Talking Robot.
"It was rather dull. It may be nice to know that the square
of fourteen is one hundred ninety-six, that the temperature
at the moment is 72 degrees Fahrenheit, and the air-pressure
30.02 inches of mercury, that the atomic weight of sodium is
23, but one doesn't really need a robot for that. One
especially does not need an unwieldy, totally immobile mass of
wires and coils spreading over twenty-five square yards." ...
... ``There was an oily whir of gears and a mechanically
timbred voice boomed out in words that lacked accent and
intonation, `I -- am -- the -- robot -- that -- talks.'
``Gloria stared at it ruefully. It did talk, but the
sound came from inside somewheres. There was no @i{face} to
talk to. She said, `Can you help me, Mr. Robot, sir?'
``The Talking Robot was designed to answer questions, and
only such questions as it could answer had ever been put to
it. It was quite confident of its ability, therefore. `I
-- can -- help -- you.'
``'Thank you, Mr. Robot, sir. Have you seen Robbie?'
``'Who -- is -- Robbie?'
```He's a robot, Mr. Robot, sir.'' She stretched to
tip-toes. ``He's about so high, Mr. Robot, sir, only higher,
and he's very nice. He's got a head, you know. I mean you
haven't, but he has, Mr. Robot sir.'
``The Talking Robot had been left behind, `A -- robot?'
```Yes, Mr. Robot, sir. A robot just like you, except he
can't talk, of course, and -- looks like a real person.'
```A -- robot -- like -- me?'
```Yes, Mr. Robot, sir.'
```To which the Talking Robot's only response was an erratic
splutter and an occasional incoherent sound. The radical
generalization offered it, i.e., its existence, not as a
particular object, but as a member of a general group, was
too much for it. Loyally, it tried to encompass the concept
and half a dozen coils burnt out. Little warning signals
were buzzing.'
``(The girl in her mid-teens left at that point. She had
enough for her Physics-1 paper on `Practical Aspects of
Robotics.' This paper was Susan Calvin's first of many on
the subject.)''
------------------------------
Date: 23-Jan-86 12:52:19-PST
From: jbn@FORD-WDL1
Subject: Perceptrons-historical note
Since Perceptron-type systems seem to be making a comeback, a
historical note may be useful.
The original Perceptron was developed in the early 1950s, and
was a weighted-learning type scheme using electromechanical storage, with
relay coils driving potentiometers through ratchets being the basic
learning mechanism. The original machine used to be on display at the
Smithsonian's Museum of History and Technology, (now called the Museum of
American History); it was a sizable unit, about the size of a VAX 11/780.
But it is no longer on display; I've been checking with the Smithsonian.
It has been moved out to their storage facility in Prince George's County,
Maryland. It's not gone forever; the collection is rotated through the
museum. If there's sufficient interest, they may put it back on display
again.
Another unit in the same collection has relevance to this digest:
parts of Reservisor, the first airline reservations system, built for American
Airlines around 1954, are still on display; they have a ticket agent's terminal
and the huge magnetic drum. Contrast this with Minsky's recent claims seen
here that airline reservation systems were invented by someone at the MIT AI
lab in the 1960s.
John Nagle
------------------------------
Date: 22 Jan 86 14:41:45 EST
From: Mark.Derthick@G.CS.CMU.EDU
Subject: Re: What is a Symbol?
This is a response to David Plaut's post (V4 #9) in which he maintains that
connectionist systems can exhibit intelligent behavior and don't use
symbols. He suggests that either he is wrong about one of these two points,
or that the Physical Symbol System Hypothesis is wrong, and seeks a good
definition of 'symbol'.
First, taking the PSSH seriously as subject to empirical confirmation
requires that there be a precise definition of symbol. That is, symbol is
not an undefined primitive for Cognitive Science, as point is for geometry.
I claim no one has provided an adequate definition. Below is an admittedly
inadequate attempt, together with particular examples for which the
definition breaks down.
1) It seems that a symbol is foremost a formal entity. It is atomic, and owes
its meaning to formal relationships it bears to other symbols. Any internal
structure a [physical] symbol might possess is not relevant to its meaning.
The only structures a symbol processor processes are symbol structures.
2) The processing of symbols requires an interpreter. The link between the
physical symbols and their physical interrelationships on the one hand, and
their meaning on the other, is provided by the interpreter.
3) Typically, a symbol processor can store a symbol in many physically
distinct locations, and can make multiple copies of a symbol. For instance,
in a Lisp blocks world program, many symbols for blocks will have copies of
the symbol for table on their property lists. Many functionally identical
memory locations are being used to store the symbols, and each copy is
identical in the sense that it is physically the same bit pattern. I can't
pin down what about the ability to copy symbols arbitrarily is necessary,
but I think something important lurks here.
Analog (or direct) representations, the alternative to symbolic
representations, do not lend themselves to copying so easily. For instance,
on a map, distance relations between cities are encoded as distances between
circles on paper. Many relations are represented, as is the case with the
blocks world, but you can't make a copy of the circle representing a city.
If it's not in the right place, it just won't represent that city.
4) Symbols are discrete. This point is where connectionist representations
seem to diverge most from prototypical symbols. For instance, in Dave
Touretzky's connectionist production system model (IJCAI 85), working memory
elements are represented by patterns of activity over units. A particular
element is judged to be present if a sufficiently large subset of the units
representing the pattern for that element are on. Although he uses this
thresholding technique to enable discrete answers to be given to the user,
what is going on inside the machine is a continuum. One can take the
pattern for (goal clear block1) and make a sequence of very fine grained
changes until it becomes the pattern for (goal held block2).
To show where my definition breaks down, consider numbers as represented in
Lisp. I don't think they are symbols, but I'm not sure. First, functions
such as ash and bit-test are highly representation dependent. Everybody
knows that computers use two's complement binary representation for
arithmetic. If they didn't, but used cons cells to build up numbers from
set theory for instance, it would take all day to compute 3 ** 5. Computers
really do have special-purpose hardware to do arithmetic, and computer
programmers, at least sometimes, think in terms of ALUs, not number theory,
when they program. So the Lisp object 14 isn't always atomic; sometimes
it's really 1110.
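[The representation dependence at issue is easy to exhibit at a Lisp
listener: ASH and LOGBITP only make sense if 14 is "really" the bit
pattern 1110 rather than an opaque atom. For example:

(ash 14 -1)           ; => 7    (shift 1110 right: 0111)
(logbitp 0 14)        ; => NIL  (low bit of 1110 is 0)
(logbitp 1 14)        ; => T
(format nil "~b" 14)  ; => "1110"

-- Ed.]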
It's easy to see that the above argument is trying to expose numbers as
existing at a lower level than real Lisp symbols. At the digital logic
level, then, bits would be symbols, and the interpreter would be the adders
and gates that implement the semantics of arithmetic. Similarly, it may be
the case that connectionist systems use symbols, but that they do not
correspond to, e.g., working memory elements, but to some lower-level object.
So a definition of "symbol" must be relative to a point of view. With this
in mind, it seems that confirmation of the Physical Symbol System Hypothesis
turns on whether an intelligent agent must be a symbol processor, viewed
from the knowledge level. If knowledge level concepts are represented as
structured objects, and only indirectly as symbols at some lower level, I
would take it as disconfirmation of the hypothesis.
I welcome refinements to the above definition, and comments on whether Lisp
numbers are symbols, or whether ALU bits are symbols.
Mark Derthick
mad@g.cs.cmu.edu
------------------------------
Date: 27 January 1986 1532-PST (Monday)
From: hestenes@nprdc.arpa (Eric Hestenes)
Subject: Re: What is a symbol?
Article 125 of net.ai:
In article <724@k.cs.cmu.edu>, dcp@k.cs.cmu.edu (David Plaut) writes:
> It seems there are three ways out of this dilemma:
>
> (1) deny that connectionist systems are capable, in
> principle, of "true" general intelligent action;
> (2) reject the Physical Symbol System Hypothesis; or
> (3) refine our notion of a symbol to encompass the operation
> and behavior of connectionist systems.
>
> (1) seems difficult (but I suppose not impossible) to argue for, and since I
> don't think AI is quite ready to agree to (2), I'm hoping for help with (3)
> Any suggestions?
>
> David Plaut (dcp@k.cs.cmu.edu)
Symbol is unfortunately an abused word in AI. Symbol can be used in several
senses, and when you mix them things seem illogical, even though they are not.
Sense 1: A symbol is a token used to represent some aspect or element
of the real world.
Sense 2: A symbol is a chunk of knowledge / human memory that is of a certain
character. ( e.g. predicates, with whole word or phrase size units )
While PDP / connectionist models may not appear to involve symbolic processes
(meaning mental processes that operate on whole chunks of knowledge that
constitute symbols), they DO assign tokens as structures that represent some
aspect or element. For instance, if a vision program takes a set of
bits from a visual array as input, then at that point each of the bits is
assigned a symbol and then a computation is performed upon the symbol.
Given that pdp networks do have this primitive characterization in every
situation, they fit Newell's definition of a Physical Symbol System
[paraphrased as] "a broad class of systems capable of having and manipulating
symbols, yet realizable in the physical world." The key is to realize
that while the information that is assigned to a token can vary quite
significantly, as in connectionist versus high level symbolic systems,
the fact that a token has been assigned a value remains, and the manipulation
of that newly created symbol is carried out in either kind of system.
Many connectionists like to think of pdp systems as incorporating
"microfeatures" or "sub-symbolic" knowledge. However, by this they do not mean
that their microfeatures are not symbols themselves. Rather they are actively
comparing themselves against traditional AI models that often insist on using
a single token for a whole schema ( word, idea, concept, production ) rather
than for the underlying mental structures that might characterize a word.
A classical example is the ( now old ) natural language approach to thinking
that parses phrases into trees of symbols. Not even the natural language
people would contend that the contents of memory resemble that tree of
symbols in terms of storage. In this case the knowledge that is significant to
the program is encoded as a whole word. The connectionist might create a
system that parses the very same sentences, with the only difference being
how symbols are assigned and manipulated. In spite of their different
approach, the connectionist version is still a physical symbol system in the
sense of Newell.
This point would be moot if one could create a connectionist machine that
computed exactly the same function as the high-level machine, including
manipulating high-level symbols as wholes. While both languages are Turing
equivalent, one has yet to see a system that can compile a high-level
programming language with a connectionist network. The problems with creating
such a machine are many; however, it is entirely possible, if not probable.
See the paper for a Turing <--> Symbol System proof.
Reference: Newell, Allen. Physical Symbol Systems.
Cognitive Science 4, 135-183 (1980).
Copy me on replies.
Eric Hestenes
Institute for Cognitive Science, C-015
UC San Diego, La Jolla, CA 92093
arpanet: hestenes@nprdc.ARPA
other: ucbvax!sdcsvax!sdics!hestenes or hestenes@sdics.UUCP
------------------------------
End of AIList Digest
********************
∂03-Feb-86 1355 LAWS@SRI-AI.ARPA AIList Digest V4 #19
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Feb 86 13:48:29 PST
Date: Mon 3 Feb 1986 10:13-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #19
To: AIList@SRI-AI
AIList Digest Monday, 3 Feb 1986 Volume 4 : Issue 19
Today's Topics:
Queries - Prolog for Compiler Writing & LISP Compilers &
LISP Tutorial Source Code & Mathematical Structure of OOPL &
Equation Solver,
Binding - Supercomputer Center & Grenoble Labs,
History - Airline Reservation Systems,
Report - Calculus of Partially-Ordered Type Structures,
Review - Technology Review Article
----------------------------------------------------------------------
Date: 31 Jan 86 08:02:00 EST
From: "INFO1::ELDER" <elder@info1.decnet>
Reply-to: "INFO1::ELDER" <elder@info1.decnet>
Subject: Prolog for Compiler Writing
Greg Elder
------------------------------
Date: Fri, 31 Jan 86 13:56:44 CST
From: Al Gaspar <gaspar@ALMSA-1.ARPA>
Subject: LISP Compilers?
A friend who doesn't have access to the net asked me to post this query.
What brands of Common LISP would run best on a VAX 780 under UNIX Sys V.2?
Any and all recommendations would be appreciated. Please reply to me
directly as I don't subscribe to AILIST. If there are enough replies,
I'll summarize to the net.
Thanks in advance--
Al Gaspar <gaspar@almsa-1.arpa>
USAMC ALMSA, ATTN: AMXAL-OW, Box 1578, St. Louis, MO 63188-1578
COMMERCIAL: (314) 263-5118 AUTOVON: 693-5118
seismo!gaspar@almsa-1.arpa
------------------------------
Date: 0 0 00:00:00 EST
From: "Don Mcdougall" <veda@paxrv-nes.ARPA>
Reply-to: "Don Mcdougall" <veda@paxrv-nes.ARPA>
Subject: request for LISP source code
[Interesting date on this message! -- KIL]
I am teaching an AI course for the continuing education program at
St. Mary's College in Southern Maryland. This is my first time teaching
LISP and I would appreciate access to the source code for "project-
sized" LISP programs or any other teaching aids or material. We are
using the 2nd edition of both Winston's AI and Winston & Horn's LISP.
I hate to ask for help, but we are pretty far from mainstream AI
down here and my students and I all have full time jobs so any help we
can get from the professional AI community would be greatly
appreciated by all of us.
Bob Woodruff
Veda@paxrv-nes.arpa
------------------------------
Date: Fri, 31 Jan 86 8:58:50 EST
From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
Subject: Mathematical Structure of OOPL
I would like to hear about any definitive work on the mathematical
structure of object oriented programming languages (eg. smalltalk).
I am interested in the current status of the subject. Reference to
a good review will be most helpful. Would also appreciate receiving
papers or reports on the subject.
My netaddress is : srini%NJIT-EIES.MAILNET@MIT-MULTICS.ARPA
U.S Postal Address:
Srinivasan Krishnamurthy
COMSAT LABS, (NTD) RM:7142
22300 Comsat Drive
Clarksburg, MD-20871
Tel: (301)428-4531(W)
Thanks.
Srini.
------------------------------
Date: 29 Jan 86 16:10:06 GMT
From: ucdavis!lll-crg!topaz!harvard!cmcl2!philabs!dpb@ucbvax.berkeley.
edu (Paul Benjamin)
Subject: Re: Equation solver
> I am looking for a program that can solve simple algebraic expressions
> of the type:
>
> 10x - 15 = 5
>
> This system would have the capability of SIMPLIFYING expressions, EXPANDING
> expressions and SOLVING expressions (where possible).
> Note that I am looking for simple solutions, I have no need of the extensive
> capabilities of MACSYMA or some such thing.
> It needs to work on fairly small (pdp-11, non-unix) machines.
> Its purpose is to act as a simple but patient tutor in pre-algebra.
> Consequently it must give hints, advice, etc.
> Any help, pointers, suggestions, etc. from people is much appreciated.
>
> Dick Pierce
> ucdavis!lll-crg!seismo!harvard!talcott!panda!teddy!rdp@ucbvax.berkeley.edu
> Organization: GenRad, Inc., Concord, Mass.
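[The one-step solving the query asks for reduces, for its example class, to
isolating x in ax + b = c; a toy Lisp sketch (nothing like MACSYMA, and far
short of a hint-giving tutor):

(defun solve-linear (a b c)
  "Solve a*x + b = c for x, e.g. 10x - 15 = 5."
  (/ (- c b) a))

(solve-linear 10 -15 5)   ; => 2

-- Ed.]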
You may want to look at Sleeman's work, although it is more along the
lines of simulating student's solutions to such tasks. It can be
found with related work in "Intelligent Tutoring Systems", published
by Academic Press in 1982. The editors are D. Sleeman and J. S. Brown.
Good luck.
Paul Benjamin
------------------------------
Date: 29 Jan 86 16:01:00 GMT
From: pur-ee!uiucdcs!uiucdcsb!mozetic@ucbvax.berkeley.edu
Subject: Re: Equation solver
Some work on algebraic manipulation was done at the Edinburgh Univ.
(Dept. of AI) by A.Bundy and others. I can give you few references:
Bundy, Silver: Preparing Equations for Change in Unknown,
IJCAI-81, and DAI research paper 159.
Bundy, Sterling: Meta-level Inference in Algebra, DAI 164.
Bundy, Welham: Using Meta-level Inference for Selective
Application of Multiple Rewrite Rules in Algebraic Manipulation,
Artificial Intelligence 16(2), 1981.
You may also consult the book:
Bundy: The Computer Modelling of Mathematical Reasoning,
Academic Press, 1983.
Good luck.
------------------------------
Date: Thu 30 Jan 86 15:26:58-CST
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Re: Supercomputers and AI
Sorry about the transposition of the zip code for UCSD. Maybe I can
make up for it with the correct zip for GA Technologies. The mailing
address they seem to be giving out for Supercomputer Center
communications is
GA Technologies
P.O. Box 85608
San Diego, CA 92138
Dallas Webster
CMP.BARC@R20.UTexas.Edu
ut-sally!batman!dallas
------------------------------
Date: 30 Jan 86 21:38:16 GMT
From: "mcvax!vmucnam!imag!lifia!rit"@SEISMO.ARPA
Subject: Grenoble labs
Someone mailed me with enquiries about computer science labs in Grenoble in
response to an article in mod.ai. I lost his message; I'll answer him if he
remails.
Jean-Francois Rit
Laboratoire d'Informatique Fondamentale et d'Intelligence Artificielle
BP 68
38402 Saint-Martin d'Heres cedex
Disclaimer: This is only my postal address!
UUCP: ...{mcvax,vmucnam}!lifia!rit
decvax!genrad!panda!talcott!harvard!seismo!
mcvax!vmucnam!imag!lifia!rit@ucbvax.berkeley.edu
------------------------------
Date: Fri, 31 Jan 86 07:59:38 EST
From: Alan Bawden <ALAN@MC.LCS.MIT.EDU>
Subject: Contrast
Date: 23-Jan-86 12:52:19-PST
From: jbn at FORD-WDL1
... Contrast this with Minsky's recent claims seen here that airline
reservation systems were invented by someone at the MIT AI lab in the
1960s.
I decided to take a close look at this contrast. After searching through
the recent archives, the only mention by Minsky of airline reservation
systems that I can find is:
And I'm pretty sure that the first practical airline reservation [system]
was designed by Danny Bobrow of the BBN AI group around 1966.
Now that I have refreshed my memory with what he actually said, I think the
contrast is not quite as unflattering. Given the use of the adjective
``practical'', someone might even be able to make a case that he is right.
------------------------------
Date: Thu 30 Jan 86 15:15:53-CST
From: AI.HASSAN@MCC.ARPA
Subject: Calculus of Partially-Ordered Type Structures
This message is a common answer to all those individuals (thanks for
your interest) that have been asking me for copies of my Ph.D.
Dissertation (A Lattice-Theoretic Approach to Computation Based on a
Calculus of Partially-Ordered Type Structures).
My thesis is being revised for publication as a book. I am out of copies of
the version I've been sending. You may:
.write or call U.of Penn. CIS dpt. 215-898-8540 (Ph.D. 9/84)
.write University Microfilms at Ann-Arbor, MI
.get hold of one from a friend and ask a nice secretary to xerox it
.steal one (no one will mind: it's a cheap value!).
.or you can wait and bear with my slow work in translating a
big Scribe mess into an even larger LaTeX mess(*)---send me
another message in, oh, about 3 months.
Hope that'll help.
Thanks for your patience.
Cheers,
Hassan
(*) By the way, any info of programs that do that is welcome!
------------------------------
Date: 31 Jan 86 17:18:00 GMT
From: decvax!cca!ada-uts!richw@ucbvax.berkeley.edu
Subject: Technology Review article
Has anyone read the article about AI in the February issue of
"Technology Review"? You can't miss it -- the cover says something
like: "In 25 years, AI has still not lived up to its promises and
there's no reason to think it ever will" (not a direct quote; I don't
have the copy with me). General comments?
-- Rich Wagner
"Relax! They're just programs..."
P.S. You might notice that about 10 pages into the issue, there's
an ad for some AI system. I bet the advertisers were real
pleased about the issue's contents...
------------------------------
Date: 3 Feb 86 14:25:24 GMT
From: vax135!miles@ucbvax.berkeley.edu (Miles Murdocca)
Subject: Re: Technology Review article
The [Technology Review] article was written by the Dreyfus brothers,
who are famous for making bold statements that AI will never meet the
expectations of the people who fund AI research. They make the claim
that people do not learn to ride a bike by being told how to do it,
but by a trial and error method that isn't represented symbolically.
They use this argument and a few others such as the lack of a
representation for emotions to support their view that AI researchers
are wasting their sponsors' money by knowingly heading down dead-ends.
As I recall ["Machine Learning", Michalski et al, Ch 1], there are two
basic forms of learning: 'knowledge acquisition' and 'skill refinement'.
The Dreyfus duo seems to be using a skill refinement problem to refute
the work going on in knowledge acquisition. The distinction between the
two types of learning was recognized by AI researchers years ago, and I
feel that the Dreyfus two lack credibility since they fail to align their
arguments with the taxonomy of the field.
Miles Murdocca, 4G-538, AT&T Bell Laboratories, Crawfords Corner Rd,
Holmdel, NJ, 07733, (201) 949-2504, ...{ihnp4}!vax135!miles
------------------------------
End of AIList Digest
********************
∂07-Feb-86 1353 LAWS@SRI-AI.ARPA AIList Digest V4 #20
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Feb 86 13:53:02 PST
Date: Fri 7 Feb 1986 10:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #20
To: AIList@SRI-AI
AIList Digest Friday, 7 Feb 1986 Volume 4 : Issue 20
Today's Topics:
Seminars - Logics of Programmes (Edinburgh) &
The Origins of Logic (UCB) &
A Fuzzy Inference Engine (UPenn) &
Intuitionistic Logic Programming Language (CMU) &
Minsky and Dreyfus on AI (USantaClara),
Conferences - Intelligent Robotic Systems &
Cognitive Science Society
----------------------------------------------------------------------
Date: Wed, 5 Feb 86 12:20:32 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - Logics of Programmes (Edinburgh)
EDINBURGH AI SEMINARS
Date: 5th February 1986
Time: 2pm
Place: Department of Artificial Intelligence
Forrest Hill Seminar Room
Dr. D.C. McCarty, Center for Cognitive Sciences, University of Edinburgh,
will give a seminar entitled - `Logics of Programmes: Some Constructive
Comments'.
The talk will give an introduction to and overview of the applications of
constructive logic to programme verification. Three topics will be of
interest: the idea that functional interpretations of constructive set
theory are `high level' compilers; the relations between constructive
logic and Reynolds' `specification logic'; and the use of a constructive
meta theory in giving completeness proofs for hoare-style logics. We
will pre-suppose only a basic knowledge of mathematical logic; the
requisite technicalities from constructive logic and programme verification
will be explained in the talk.
------------------------------
Date: Wed, 5 Feb 86 15:40:45 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Subject: Seminar - The Origins of Logic (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237B
Tuesday, February 11, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``The Origins of Logic''
Jonas Langer
Department of Psychology, UCB
I will try to show that logical cognition (1) originates during the
first year of infancy and (2) begins to be representational during the
second year of infancy. This includes proposing some of its initial
structural features. These claims imply that (a) a symbolic language is
not necessary for the origins of logical cognition and (b) that ordinary
language is not necessary for its initial representational development.
Supporting data will be drawn from J. Langer, The Origins of Logic: Six
to Twelve Months, Academic Press, 1980, and The Origins of Logic: One to
Two Years, Academic Press, 1986.
------------------------------
Date: Wed, 5 Feb 86 12:10 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - A Fuzzy Inference Engine (UPenn)
A VLSI IMPLEMENTATION OF FUZZY INFERENCE ENGINE:
TOWARD AN EXPERT SYSTEM ON A CHIP
Hiroyuki Watanabe, AT&T Bell Laboratories, Holmdel, New Jersey
3pm Tuesday, February 11, 1986
216 Moore, University of Pennsylvania
This talk describes a VLSI implementation of an inference mechanism to cope
with uncertainty and to perform approximate reasoning. Some details of the
VLSI layout design are presented. The design of the inference mechanism is
based on the "max-min operation" of fuzzy set theory for effective, real-time use.
This inference mechanism can handle imprecise and uncertain knowledge;
therefore, it can represent human expert knowledge and simulate his/her
reasoning processes. An inference mechanism has been realized by using custom
CMOS technology which emphasizes simplicity, extensibility and efficiency. For
example, all rules are executed in parallel for efficiency. Results of
preliminary tests indicate that the inference engine can perform approximately
80,000 Fuzzy Logical Inferences Per Second (FLIPS).
This chip is designed for the application of the rule-based expert-system
paradigm in real-time control. Potential applications of such an inference
engine include real-time decision-making in command and control, intelligent
robotic systems, and chemical process control.
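[A minimal sketch of the max-min composition the abstract refers to
(textbook fuzzy inference, not the chip's actual circuitry): each rule's
firing strength is the MIN of its input match and rule weight, and the
rules' outputs are combined with MAX:

(defun fire-rule (input-degree rule-degree)
  (min input-degree rule-degree))

(defun combine-rules (degrees)
  (reduce #'max degrees :initial-value 0.0))

;; Two rules matching an input to degrees 0.3 and 0.7, with rule
;; strengths 0.9 and 0.5:
(combine-rules (list (fire-rule 0.3 0.9)
                     (fire-rule 0.7 0.5)))   ; => 0.5

-- Ed.]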
------------------------------
Date: 5 February 1986 1529-EST
From: Theona Stefanis@A.CS.CMU.EDU
Subject: Seminar - Intuitionistic Logic Programming Language (CMU)
JOINT LOGIC COLLOQUIUM (CMU, U of Pgh)
Dale Miller
CIS Department, University of Pennsylvania
Date: Thursday February 13
Time: 3 pm
Place: 4605 Wean Hall
A Logic Programming Language Based on Intuitionistic Higher-Order Logic.
Dale Miller
CIS Department, University of Pennsylvania
In this talk, we present a programming language whose operational
semantics can be understood as searching for proofs within a subset of
intuitionistic higher-order logic. Kripke-models over a universe of
higher-order terms provide a model theoretic semantics for our
programs. Such models can be computed as least fixed points. This logical
language is a natural extension to Horn clause logic and the
programming language based on it has many features not available in
simple Horn clause based programming languages. In particular, this
programming language can manipulate higher-order functions in a manner
similar to many functional programming languages. An interesting notion
of parametric modules is also available by virtue of the behavior of
implication within an intuitionistic logic. An interpreter for this
language must perform unification of higher-order terms. If time
permits, we illustrate how this feature makes possible the very clean
implementation of certain kinds of program transformation algorithms.
------------------------------
Date: Tue 4 Feb 86 14:45:07-PST
From: HOFFMANN@SRI-KL.ARPA
Subject: Seminars - Minsky and Dreyfus on AI (USantaClara)
Two talks on AI at Mayer Theater, University of Santa Clara;
both talks are free, first come, first served.
Marvin Minsky - "Intelligence and Creativity"
Monday, February 10th, 8:00 PM
Hubert Dreyfus - "Limits of AI"
Thursday, February 20th, 8:00 PM
For additional information call Mayer Theater, (408) 554-4015
------------------------------
Date: 31 Jan 1986 10:40:22 EST
From: Martin Marietta <MMDA@USC-ISI.ARPA>
Subject: Conference - Intelligent Robotic Systems
SPIE's Symposium on
Advances in Intelligent Robotics Systems, including
o Intelligent Robots and Computer Vision Conference
o Mobile Robots Conference
o Optics, Illumination, and Image Sensing for Machine Vision
o Space Station Automation
o Automated Inspection and Measurement
The Conference(s) take place October 26-31, 1986, at the Hyatt Regency in
Cambridge, MA. General Chairman is David Casasent, Carnegie-Mellon University.
Abstract due date: 15 April (200-300 word abstract)
Manuscript due date: 29 September
For author application or further information, contact
SPIE Technical Program Committee
PO Box 10
Bellingham, WA 98227-0010
(206) 676-3290
------------------------------
Date: Mon, 3 Feb 86 07:52:16 pst
From: gluck@SU-PSYCH (Mark Gluck)
Subject: Conference - Cognitive Science Society
8th Annual Cognitive Science Society Conference will be held
at U. Mass/Amherst from August 15th to 17th.
Submission Deadline: March 14, 1986
to: Charles Clifton
Department of Psychology
U. Mass.
Amherst, MA 01003
Include: author's name, address, and telephone number
up to four keywords
four copies of abstract (100-250 words)
four copies of paper (4K words for presentation; 2K for
poster)
------------------------------
End of AIList Digest
********************
∂07-Feb-86 1707 LAWS@SRI-AI.ARPA AIList Digest V4 #21
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 Feb 86 17:06:42 PST
Date: Fri 7 Feb 1986 11:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #21
To: AIList@SRI-AI
AIList Digest Friday, 7 Feb 1986 Volume 4 : Issue 21
Today's Topics:
Queries - ISIS & BIB-Format AI References,
Logic Programming - Prolog for Compiler Writing,
Expert Systems & Reports - MRS,
Theory - Dreyfus Article in Technology Review
----------------------------------------------------------------------
Date: Tue, 04 Feb 86 16:19:44 cet
From: WMORTENS%ESTEC.BITNET@WISCVM.WISC.EDU
Subject: Query -- ISIS
From: Uffe K. Mortensen ESA ( The European Space Agency )
Does anybody here know what 'ISIS' is ? I have been told it is a commercial
package for planning/scheduling problems, but I would like to have more
detailed information ( vendor, etc ).
-- Uffe.
------------------------------
Date: 3 Feb 86 18:59:00 GMT
From: pur-ee!uiucdcs!uiucdcsb!mklein@ucbvax.berkeley.edu
Subject: Bib Format AI References Request
I am interested in getting references in bib format for the following
topics, ordered with the stuff most important to me now on top:
* distributed problem solving
* machine learning
* planning
* vision
If you have any references available, please send them to:
mklein@uiucdcsb
Thanks!
Mark Klein
------------------------------
Date: 06 Feb 86 09:37:54 +1100 (Thu)
From: Isaac Balbin <munnari!mulga.oz!isaac@seismo.CSS.GOV>
Subject: Re: Prolog for Compiler Writing
I have not added compilers for prolog written in prolog, nor stuff on
compiling techniques for prolog.
%A H. Derby
%T Using Logic Programming for Compiling APL
%R Technical Report 84-5134
%I Department of Computer Science
%I California Institute of Technology
%C Los Angeles, California
%D 1984
%A G.A. Edgar
%T A Compiler Written in Prolog
%J Dr. Dobbs Journal
%D May, 1985
%A Harald Ganzinger
%A Michael Hanus
%T Modular Logic Programming of Compilers
%J Proceedings of the 2nd IEEE International Symposium on Logic Programming
%C Boston, USA
%D July, 1985
%A D.H.D. Warren
%T Logic for Compiler Writing
%J Software Practice and Experience
%V 10
%N 1
%P 97-125
%D 1980
%O Also available as DAI Research Paper 44
from Department of Artificial Intelligence, University of Edinburgh
Isaac Balbin
===========================
UUCP: {seismo,mcvax,ukc,ubc-vision}!munnari!isaac
ARPA: isaac%munnari.oz@seismo.css.gov
CSNET: isaac%munnari.oz@australia
------------------------------
Date: Tue, 4 Feb 86 15:46:28 EST
From: munnari!goanna.oz!wjb@seismo.CSS.GOV (Warwick Bolam)
Subject: Correction to correction to name of MRS
>From: veach%ukans.csnet@CSNET-RELAY.ARPA
>
>In a recent issue the full name MRS was incorrectly reported.
>
> MRS = "Modifiable Representation System"
>
> (source - "MRS Manual", Michael R. Genesereth, et. al.
> 1980, Stanford Heuristic Programming Project)
In the bibliography of the paper "Partial Programs", Michael R Genesereth,
1984, Stanford HPP:
M. R. Genesereth, R. Greiner, D. E. Smith: "MRS - A Meta-Level
Representation System", HPP-83-27, Stanford University HPP, 1983.
Is there anyone who REALLY knows what MRS stands for? I have a number of
MRS documents and NONE of them says "MRS stands for ..."
Warwick Bolam,
Computing Dept, Royal Melbourne Institute of Technology,
Melbourne, Victoria, Australia.
------------------------------
Date: Mon 3 Feb 86 17:15:26-PST
From: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>
Subject: MRS manual
I have been asked to point out to those who have requested copies of
the MRS manual, or who intend to do so, that a nominal fee of $6.00
(plus tax if in CA) is suggested. At .004 cents per exquisitely chosen word,
it's a bargain.
Stuart Russell (RUSSELL@SUMEX)
------------------------------
Date: 4 Feb 86 13:30:31 GMT
From: Bob Stine <stine@edn-vax.arpa>
Subject: re: Dreyfus article
"Why Computers May Never Think Like People," a recent
diatribe by the brothers Dreyfus, has several problems.
First and foremost is that AI research is implicitly
identified as the development of rule-based systems.
All of the well known limitations of rule-based systems
are inappropriately attributed to AI research as a whole.
There is a deeper problem with the article, that perhaps
springs from a misguided humanism. The article claims
that machines will never duplicate human performance in
cognitive tasks, because humans have "intuition." These
passages would read very much the same if 'magic' were
substituted for 'intuition' - "Human beings have a
magic intelligence that reasoning machines simply cannot
match." "... a boxer seems to recognize the moment to
begin an attack... ... the boxer is using his magic".
The Dreyfus brothers claim that they are not "Luddites,"
that they are not opposed to technology per se, but just
to wasting time and money on AI research. The basis of
their position is that some aspect of human intelligence
is inherently beyond human comprehension.
There certainly are things that humans will never know.
But no one thing is inherently unknowable.
------------------------------
Date: Tue, 4 Feb 86 09:58:53 CST
From: sandon@ai.wisc.edu (Pete Sandon)
Subject: Knowledge Acquisition -vs- Skill Refinement
This is not to defend the Dreyfus brothers, since I have yet to read
their books. On the other hand, I think they make a good point, though
with a bad example, in emphasizing learning as a process of refinement.
The example related in Miles Murdocca's submission is that of learning
to ride a bike through trial and error. The reason the example is a bad
one, is that it fits into the category of skill refinement as AI
researchers would use the term. This leads to the argument that Dreyfus
and Dreyfus are missing the critical distinction between knowledge
acquisition and skill refinement.
My feeling is that too much is made of this distinction. Had the
example been one of learning to distinguish fruits from vegetables,
or one of learning the symptoms of a class of diseases well enough
to diagnose them, this argument would not have arisen. Clearly these
involve knowledge acquisition rather than skill refinement. And yet, it
could be argued, and perhaps is argued by the Dreyfuses, that what
the AI researchers consider to be knowledge acquisition should be
just as much a refinement process guided by trial and error as learning
to ride a bike. Whereas AI considers concept formation to occur as the
acquisition of discrete chunks of knowledge, an alternative is to use
the gradual acquisition of evidence to support one concept definition
over another, in a manner similar to skill refinement.
Of course, if this criticism of AI is correct, AI has already
answered it. The use of connectionist models, and the corresponding
learning mechanisms currently being studied, provide just the sort
of cognitive models that support this refinement type of learning
through trial and error.
--Pete Sandon
------------------------------
Date: 3 Feb 86 17:24:42 GMT
From: nike!caip!im4u!milano!pcook@ucbvax.berkeley.edu
Subject: Re: Technology Review article
In article <7500002@ada-uts.UUCP>, richw@ada-uts.UUCP writes:
>
> Has anyone read the article about AI in the February issue of
> "Technology Review"? You can't miss it -- the cover says something
> like: "In 25 years, AI has still not lived up to its promises and
> there's no reason to think it ever will" (not a direct quote; I don't
> have the copy with me). General comments?
>
This article is a plug for a book, and it uses a current topic to get back at
the AI community for an imagined snub. Hubert Dreyfus was stood up by
John McCarthy of Stanford at a debate on a third echelon public tv
station in the bay area, and is still mad.
First, the premise: AI, expert systems, and knowledge-rule based systems
have been overly optimistic in their promises and stand short of delivered
results. Probably true, but many of the systems, once implemented, lose
their mystical qualities, and look a lot like other computer applications.
It's the ones that are in the building process that seem to present
extravagant claims.
As presented, however, the article is a shrill cry rather than a reasoned
response. It leans heavily on proof by intense assertion. As a pilot
I find examples which range from dubious to incorrect. As a scientist I
object to the gee whiz Reader's Digest tone. As a retired Air Force Officer
I object to the position that the commander's common sense is the ideal form
of combat decision making. And as a philosopher (albeit not expert) I object
to the muddy intellectual approach, rife with questionable presuppositions,
faulty dilemmas, and illogical conclusions.
I agree that the topic is worthy of discussion- our work to realize the
potential of computers must not degenerate into a fad which will fade
from the scene. But I object to a diatribe where advances in the field
are dismissed as trivial because current systems do not equal human
performance.
--
...Pete Peter G. Cook Lt. Colonel
pcook@mcc.arpa Liaison, Motorola, Inc. USAFR(Ret)
ut-sally!im4u!milano!pcook MCC-Software Technology Program
512-834-3348 9430 Research Blvd. Suite 200
Austin, Texas 78759
[There are, of course, two sides to the McCarthy incident. As I recall
from an old SU-BBoard message, McCarthy had agreed to an interview under
the impression that he would be on the program alone. At the last moment
it was mentioned that Dreyfus had also been invited. Viewing this as "ambush
journalism" -- my words -- McCarthy declined to participate in the impromptu
debate. No doubt the station was just trying to schedule a lively evening,
but they should have checked with McCarthy and given him time to prepare.
He and Dreyfus have sufficient visibility that a poorly stated remark, on
>>any<< radio station, could affect the future of AI funding. -- KIL]
------------------------------
Date: Thu, 6 Feb 86 08:46 EST
From: Ken Haase <KWH@MIT-AI.ARPA>
Subject: Re: Technology Review article
Date: 3 Feb 86 14:25:24 GMT
From: vax135!miles@ucbvax.berkeley.edu (Miles Murdocca)
Subject: Re: Technology Review article
To: AIList@SRI-AI
The [Technology Review] article was written by the Dreyfus brothers,
who are famous for making bold statements that AI will never meet the
expectations of the people who fund AI research. They make the claim
that people do not learn to ride a bike by being told how to do it,
but by a trial and error method that isn't represented symbolically.
They use this argument and a few others such as the lack of a
representation for emotions to support their view that AI researchers
are wasting their sponsors' money by knowingly heading down dead-ends.
I don't think the Dreyfus brothers accuse AI researchers of knowingly
heading down dead-ends. They just claim that most of ``what people do''
cannot be captured by the ``abstracted representations'' of nearly all
current AI research. I don't agree with this claim, but can't deny that
we (in AI) may be all wrong about our central hypothesis. We just have
to make our hypothesis clear and explicit. I think that most high level
intellectual processes have effective symbolic representations (and I'm
working to find out what such representations might be). That is an
explicit hypothesis of my research. On the other hand, I do not think
that there is anything like a symbolic representation of ``how to ride a
bike''. What happens in such cases is that our intellect ``trains'' the
animal that is the rest of us to ride the bicycle.
As I recall ["Machine Learning", Michalski et al, Ch 1], there are two
basic forms of learning: 'knowledge acquisition' and 'skill refinement'.
The Dreyfus duo seems to be using a skill refinement problem to refute
the work going on in knowledge acquisition. The distinction between the
two types of learning was recognized by AI researchers years ago, and I
feel that the Dreyfus two lack credibility since they fail to align their
arguments with the taxonomy of the field.
The alchemists could have made the same argument against the periodic
table; what the Dreyfus brothers are arguing for is the
need for just such a ``paradigm shift'' in cognitive science. The fact
that this shift will disrupt the foundations of most current AI
technology (most of which is not well proven anyway) should not affect
scientific judgements at all (though, pessimistically, it certainly
will).
In any case, the dichotomy between skill refinement and knowledge
acquisition is itself suspect; outside of rote learning of facts, most
knowledge is gained by appropriating it as skills (in a broad
sense of skills, which includes responses, perceptual skills,
etc.).
Ken
------------------------------
End of AIList Digest
********************
∂10-Feb-86 0059 LAWS@SRI-AI.ARPA AIList Digest V4 #22
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Feb 86 00:58:54 PST
Date: Sun 9 Feb 1986 22:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #22
To: AIList@SRI-AI
AIList Digest Monday, 10 Feb 1986 Volume 4 : Issue 22
Today's Topics:
Queries - AI Society Information &
J. of AI, Cognitive Science and Applied Epistemology &
Natural Language Interfacing & 3D-package for Xerox 1108 &
Psychological Knowledge Structures &
ICAI for Physically/Mentally Impaired,
Symbolic Math - PDP-11 Equation Solvers,
Logic Programming - Bibliography Correction & Quick Summary of NAIL,
AI Tools - MIRANDA Functional Programming System
----------------------------------------------------------------------
Date: Sun, 9 Feb 86 23:25:15 est
From: walker@mouton.ARPA (Don Walker at mouton.ARPA)
Subject: NEED INFORMATION ON AI SOCIETIES; PLEASE HELP
I am preparing a short article on associations, societies, and related
organizations in artificial intelligence. For each, I would appreciate
receiving the following kind of information: name; purpose; date of
establishment; principal people involved in getting it started;
important events in its history; publications, conferences, and other
activities; current membership (if relevant); and any other items of
special interest. I would like to put the set of organizations in some
historical perspective, if possible. Pointers to other places where
something like this has already been done would be particularly
helpful, and copies of same would be even more so. Needless to say, net
transmission is most efficient, as the deadline is uncomfortably
close. And I would particularly value finding someone who would be
interested in helping put all this information together!
I would expect to include SIGART, ACL, ICCL, AISB, IJCAII, AAAI, CSS,
CSCSI, ECCAI, and as many other national and regional groups as
possible. Please help if you can; share with me what you have
available, even if you think you may not be the most appropriate person
to do so; and help get this message out to the people who should know.
Net messages to walker@mouton.arpa, walker%mouton@csnet-relay,
or ucbvax(or ihnp4, etc.)!bellcore!walker; mail to
Don Walker (EAI)
Bell Communications Research
445 South Street, MRE 2A379
Morristown, NJ 07960, USA
I am sending this notice to publications as well as bboards, digests,
and people, but note that the time is too short to justify actually
printing it in most of them. Instead, the editors should respond
themselves or route it to those most likely to have the information.
------------------------------
Date: Thu 6 Feb 86 13:23:43-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: J. of AI, Cognitive Science and Applied Epistemology
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
I received a message a while ago about the introduction of a new journal
titled Journal for the Integrated Study of Artificial Intelligence, Cognitive
Science and Applied Epistemology from Ghent, Belgium. However I have not
been able to verify that such a journal has been published or is being
planned. Does anyone have more information about it?
Harry Llull
------------------------------
Date: Wed, 5 Feb 86 13:25 ???
From: Sonny Crockett <WELTYC%rpicie.csnet@CSNET-RELAY.ARPA>
Subject: Natural Language Interfacing
A couple of my students are interested in doing some work on
a natural language interface to an Operating System. I'm not really well
versed in this particular field. Can someone point me towards a few good
papers on this topic? They don't necessarily have to be specifically on
natural language interfaces to operating systems; generic ones will do.
Thanks,
Christopher A. Welty
RPI/CIE Systems Manager
------------------------------
Date: 4 Feb 86 15:09:52 GMT
From: ucdavis!lll-crg!seismo!mcvax!diku!daimi!fleckner@ucbvax.berkeley.edu (Kurt Fleckner)
Subject: 3D-package for Xerox 1108
I'm working on a Xerox 1108, and would like to get information
about a 3D-package for it.
I am designing an expert system to draw the 3D structure of
an RNA molecule.
If anyone has any knowledge of such a system, I would be glad
if you could mail it to me. If you know about an expert system
in that area, I'm interested too.
Thanks,
Kurt Fleckner
Dept. of Comp. Science
University of Aarhus
Denmark
{seismo!mcvax!diku!daimi!fleckner}
[Check the last issue (or two) of IEEE Computer Graphics and Applications
for some beautiful graphics of DNA molecules in various conformations and
at several scales. I was enlightened by the sequence showing DNA twisting
to form a chromosome. Ken Knowlton and several others have also developed
molecular display software. (I've seen examples in the SIGGRAPH proceedings.)
It would be a pity if all this had to be reinvented. -- KIL]
------------------------------
Date: Mon, 3 Feb 86 11:16 EST
From: THOMPSON%umass-cs.csnet@CSNET-RELAY.ARPA
Subject: Cognitive Psychology - Knowledge Structures
I am looking for information about the knowledge structure
differences of people who have different levels of expertise
in a subject. For example, what is the difference in the
knowledge structure of an "apprentice", a "journeyman", or a
"master"?
I will be happy to collect these references and repost them.
Please send them directly to me (via csnet).
Roger Thompson
Thompson@UMASS
------------------------------
Date: Sat, 8 Feb 86 21:49:29 est
From: Walter Maner <maner%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: ICAI for Physically/mentally Impaired
Could anyone point me to recent research in the development of intelligent
tutoring/training systems for the physically/mentally impaired? My interest
is on the software engineering side, not the hardware side. What kinds of
unsolved problems exist which might be addressable by ICAI software methods?
My impression is that, while there is much activity on the hardware frontier
for impaired learners, there has been little innovative work on the software
side. So much for my impressions :-).
Please reply by mail directly to me. If there are enough responses, I
will post a response summary back to mod.ai. Thank you.
Walter Maner, Computer Science Department
BEST CSNet maner@bgsu
: ARPANet maner%bgsu@csnet-relay
: UUCP ...cbosgd!osu-eddie!bgsuvax!maner
: Mail BGSU, Bowling Green, OH 43403
: CompuServe 73157,247
WORST Phone (419) 372-8719 or -2337
------------------------------
Date: Mon, 3 Feb 1986 23:10 EST
From: Jonathan Cohn <JC595C%GWUVM.BITNET@WISCVM.WISC.EDU>
Subject: PDP-11 Equation Solvers
I believe that such work was being done at Stevens Institute of Tech.
in Hoboken, NJ, in 1982-3 on a Pro-350 (the PC version of the PDP-11).
You might want to get in touch with Larry Levine in the math
department there; I think he led that project.
He has a computer address on bitnet of LLEVINE@SITVXB.
Jonathan Cohn
JC595C@GWUVM.BITNET
COHN@NSFVAX.BITNET
COHN@NSFVAX.CSNET
------------------------------
Date: Fri, 7 Feb 86 16:28:28 PST
From: newton@vlsi.caltech.edu (Mike Newton)
Subject: small correction
A small correction to last digest's bibliography:
%A H. Derby
%T Using Logic Programming for Compiling APL
...
%C Los Angeles, California
to:
%A H. Derby
%T Using Logic Programming for Compiling APL
%R Technical Report 84-5134
%I Department of Computer Science
%I California Institute of Technology
%C Pasadena, California 91125
%D 1984
- mike
------------------------------
Date: Wed, 29 Jan 86 10:20:22 pst
From: Allen VanGelder <avg@diablo>
Subject: Quick summary of NAIL
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
NAIL is a research project one of whose goals is to determine
what degree of expressiveness and efficiency can be obtained
by a logic based language without resorting to certain
"undesirable" non-logical mechanisms such as cut, assert and
retract, rule order, and subgoal order. Jeff Ullman, the PI,
likes to draw the analogy:
"NAIL is to Prolog as Relational DBMS is to CODASYL."
NAIL is in a preliminary stage of development at Stanford CSD.
An overview, "Design overview of the Nail! System," is available
from Professor Ullman.
NAIL! is an acronym for "Not Another Implementation of Logic!"
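[The contrast with Prolog can be made concrete. The following is a toy
sketch (in Python, not NAIL; the edge/path rules are illustrative, not
taken from the project) of the bottom-up, set-at-a-time evaluation style
a system like NAIL favors: rules are iterated to a fixpoint over sets of
facts, so the result does not depend on cut, assert/retract, rule order,
or subgoal order. -- KIL]

```python
# Naive bottom-up evaluation of the Datalog-style rules
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- edge(X, Z), path(Z, Y).
# Relations are sets of tuples; apply the rules until no new facts appear.

def transitive_closure(edges):
    path = set(edges)            # first rule: every edge is a path
    while True:
        # second rule: join edge(X, Z) with path(Z, Y) on the middle variable
        new = {(x, y) for (x, z) in edges for (z2, y) in path if z == z2}
        if new <= path:          # fixpoint reached: nothing new derivable
            return path
        path |= new

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
```

The answer is the same whatever order the tuples are visited in, which is
the declarative property the non-logical Prolog mechanisms give up.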
------------------------------
Date: Fri, 31 Jan 86 23:39:35 GMT
From: dat%ukc.ac.uk@cs.ucl.ac.uk
Subject: MIRANDA Functional Programming System
MIRANDA
This is to inform anyone who may be interested that a UNIX
implementation of the Miranda functional programming system is now
available for the following machines: VAX (under 4.2 BSD), ORION, and
SUN workstations. It will be ported to a number of other UNIX machines
in the near future. The rest of this message contains a brief
description of the Miranda system, followed by information about how to
obtain it.
What is Miranda?
Miranda is an advanced functional programming language designed by David
Turner of the University of Kent. It is based on the earlier languages
SASL, KRC and ML. A program in Miranda is a set of equations describing
the functions and data structures which the user wishes to compute.
Programs written in Miranda are typically ten to twenty times shorter
than the equivalent programs in a conventional high level language such
as PASCAL. The main features of Miranda are:
1) Purely functional - no side effects
2) Higher order - functions can be treated as values
3) Infinite data structures can be described and used
4) Concise notation for sets and sequences ("zf expressions")
5) Polymorphic strong typing
The basic types of the language are numbers (integer and double
precision floating point), characters, booleans, lists, tuples, and
functions. In addition a rich variety of user-defined types may be
introduced by writing appropriate equations. A more detailed discussion
of the language may be found in "Miranda: a non-strict functional
language with polymorphic types", in Springer Lecture Notes in Computer
Science, vol 201.
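[As a rough analogue only -- this is Python, not Miranda, and Python is
neither lazy nor purely functional -- the flavor of zf expressions,
higher-order functions, and infinite data structures can be suggested
with generators and comprehensions. -- KIL]

```python
from itertools import count, islice

# A "zf expression" over an infinite sequence: the squares of the odd
# numbers, represented lazily so only demanded elements are computed.
squares_of_odds = (n * n for n in count(1) if n % 2 == 1)

# Higher order: a function taking a count and a lazy sequence as values,
# forcing just the first k elements.
def take(k, seq):
    return list(islice(seq, k))

print(take(5, squares_of_odds))   # first five demanded: [1, 9, 25, 49, 81]
```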
The Miranda system is a self contained sub-system, running under UNIX.
The Miranda compiler works in conjunction with a screen editor (normally
this is `vi', but it is easy to arrange for this to be another editor if
preferred). Programs are automatically recompiled in response to source
edits, and any syntax or type errors are signalled immediately. The type
system enables a high proportion of semantic errors to be detected at
compile time. There is an online reference manual, which documents the
system at a level appropriate for someone already familiar with the main
ideas of functional programming (more tutorial material is in
preparation). Execution is by a fast interpreter, using an intermediate
code based on combinatory logic.
The Miranda system is a powerful tool, enabling complex applications to
be developed in a fraction of the time required in a conventional
programming system. Applications which have been developed in Miranda
include compilers, theorem provers, and digital circuit simulators.
It is envisaged that the main uses of Miranda will be:
1) Teaching the concepts of functional programming
2) Rapid prototyping
3) As a specification language
4) For further research into functional programming
5) As a general purpose programming language
Release Information
The Miranda system has been developed by Research Software Ltd. It is
distributed in object code form and is currently available for the
following machines - VAX (under 4.2BSD), ORION, SUN 2, SUN 3.
The license fee, per cpu, is 300 pounds for an educational license and
975 pounds for a commercial license (US prices: $450, $1450,
respectively). If you think you may be interested in obtaining a copy
of the Miranda system please send your name and (postal) address to the
following electronic mail address, and you will be sent further
information and a copy of the license form etc:
USENET: ...!mcvax!ukc!mira-request
JANET: mira-request@ukc.ac.uk
ARPANET: mira-request%ukc@ucl-cs
Or telephone Research Software on: 0227 471844 (omit the initial `0' if
calling from outside England)
If you are interested in obtaining Miranda on a different machine, or a
different version of Unix, from those listed above, it is also worth
mailing details of your situation, since future porting policy will be
largely determined by perceived demand. ((NB - UNIX systems only,
please.))
David Turner
------------------------------
End of AIList Digest
********************
∂12-Feb-86 1615 LAWS@SRI-AI.ARPA AIList Digest V4 #23
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 16:15:01 PST
Date: Wed 12 Feb 1986 09:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #23
To: AIList@SRI-AI
AIList Digest Wednesday, 12 Feb 1986 Volume 4 : Issue 23
Today's Topics:
Seminars - Systems of Actors (USC) &
Artificial Concept Formation (Edinburgh) &
Parallelism in Production Systems (SU) &
A Storage Manager for Prolog (SU) &
Statistical Theory of Evidence (SRI),
Conference - Compcon Spring 86
----------------------------------------------------------------------
Date: 7 Feb 1986 08:21-EST
From: gasser@usc-cse.usc.edu
Subject: Seminar - Systems of Actors (USC)
USC DISTRIBUTED PROBLEM SOLVING GROUP
MEETING
"Formalizing the Development of
Systems of Actors"
Ed Ipser
Ph.D Student, USC
A formalization of the process of specifying and developing
distributed systems is presented, with the emphasis on the description
of multiple robot environments. The general scheme is a recursive
reduction of behaviors with constraints to actors with pre-determined
behaviors by showing that the behaviors of the actors satisfy the
behavior and constraint requirements of the system. Possible
applications of this scheme are presented, including automatic
programming, planning, theorem proving, and the description of
non-computable functions. This work is based on the work of Goldman
and Wile on GIST, and Georgeff's work on the theory of processes.
Time: 3:00 PM Wednesday, Feb 12, 1986
Place: Seaver Science Bldg., Room 319, USC
Questions: Dr. Les Gasser, CS Dept., USC (213) 743-7794
------------------------------
Date: Mon, 10 Feb 86 14:56:48 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - Artificial Concept Formation (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday, 12th February 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room - F10
80 South Bridge
EDINBURGH.
Professor Donald Michie, The Turing Institute, Glasgow will give a
seminar entitled - "Artificial Concept Formation".
The approach develops from a position taken in the 1950's by H.A. Simon.
He proposed, in essence, a new criterion for the adequacy of a theory
(he considered economic theory), namely that in explaining the flux of
transactions a theory must take full account of the resource-limited
nature of the calculations performed by the participating agents. Is
economic man rational in the sense of making fully rational choices
whatever the computational cost (as in the von Neumann and Morgenstern
theory of economic behaviour), or does he exhibit at most the level of
rationality which human brains can feasibly compute in the time
available for each choice? By implication Simon also requires that
such a theory should be feasibly interpretable by its human user:
runnability on the machine is not enough.
This leads to the idea that what is run on the machine should be
human-oriented in a very strong sense, unprecedented in conventional
software technology even as an aspiration: if a program is to be not
just an operationally effective description or prescription, but a
machine representation of a concept and hence an eligible component of
a Simon-type theory, it must be not only human-intelligible but also
human-interpretable. This entails that the human expert skilled in
the given area must be able mentally to check it against trial data in
his head, just as he can in the case of his own professionally acquired
concepts.
------------------------------
Date: Mon 10 Feb 86 09:28:13-PST
From: Sharon Gerlach <CSL.GERLACH@SU-SIERRA.ARPA>
Subject: Seminar - Parallelism in Production Systems (SU)
On Friday, Feb 21, Anoop Gupta, a CSL faculty candidate from CMU, will
be speaking on "Parallelism in Production Systems" in MJH 352 at 3:15.
Parallelism in Production Systems
Anoop Gupta
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
Production systems (or rule-based systems) are widely used in Artificial
Intelligence for modeling intelligent behavior and building expert systems.
Most production system programs, however, are extremely computation intensive
and run quite slowly. The slow speed of execution has prohibited the use of
production systems in domains requiring high performance and real-time
response. The talk will elaborate on the role of parallelism in the high-speed
execution of production systems.
On the surface, production system programs appear to be capable of using
large amounts of parallelism -- it is possible to perform match for each
production in a program in parallel. Our research shows that in practice,
however, the speed-up obtainable from parallelism is quite limited, around
10-fold as compared to initial expectations of 100-fold to 1000-fold. The main
reasons for the limited speed-up are: (1) there are only a small number of
productions that are affected (require significant processing) as a result of a
change to working memory and (2) there is a large variation in the processing
requirement of these productions. Since the number of affected productions is
not controlled by the implementor of the production system interpreter (it is
governed mainly by the author of the program and the nature of the problem),
the solution to the problem of limited speed-up is to somehow decrease the
variation in the processing cost of affected productions. We propose a
parallel version of the Rete algorithm which exploits parallelism at a very
fine grain to reduce this variation. We further suggest that to exploit the
fine-grained parallelism, a shared-memory multiprocessor with 32-64 high
performance processors should be used. For scheduling the fine-grained tasks
consisting of about 50-100 instructions, a hardware task scheduler is proposed.
The results presented in the talk are based on simulations done for a large
set of production systems exploiting different sources of parallelism. The
simulation results show that using the suggested multiprocessor architecture
(with individual processors performing at 2 MIPS), it is possible to obtain
execution speeds of 5000-27000 working memory element changes per second. This
corresponds to a speed-up of 5-fold to 27-fold over the best known sequential
implementation using a 2 MIPS processor. This performance is also higher than
that obtained by other proposed parallel implementations of production systems.
------------------------------
Date: Tue 11 Feb 86 16:30:05-PST
From: Karin Scholz <SCHOLZ@SU-SUSHI.ARPA>
Subject: Seminar - A Storage Manager for Prolog (SU)
this is a correction to the colloquium notice for this week:
Database Seminar CS 545, Friday Feb 14, 3:15pm, mjh352
Persistent Prolog: A Secondary Storage Manager for Prolog
Peter M D Gray
University of Aberdeen, Scotland
ABSTRACT OF TALK
The talk will describe a general purpose "tight coupling" system based on a
C-Prolog interpreter interfaced to a "Persistent Heap" database, which
can store a wide variety of data types and objects. We are
currently extending Prolog to allow definitions of modules and Abstract
Data Types. This provides a disciplined way of accessing frame structures,
bit maps, attached procedures and other non-Prolog objects.
With this system we are able to use Prolog to maintain an evolving
knowledge base on disc. Prolog clauses and data structures are
manipulated in memory in the usual way, but migrate to disc on a
"commit" step.
This work is part of the U.K. "Alvey" program in IKBS.
------------------------------
Date: Wed 12 Feb 86 08:55:19-PST
From: FIRSCHEIN@SRI-AI.ARPA
Subject: Seminar - Statistical Theory of Evidence (SRI)
Bob Hummel will be giving a talk on Tuesday, Feb. 18 at 10:30,
Conf room EK242 (the "old" conf room). An abstract of his talk follows:
A Statistical Viewpoint on the Theory of Evidence
Robert Hummel
Courant Institute, New York University
Abstract
The Dempster/Shafer "Theory of Evidence" can be regarded as an algebraic
space with a combination formula that combines the opinions of
"experts". This viewpoint, which is really the origin of the theory, will
be explained by introducing spaces with simple binary operations, giving
these spaces intuitive interpretations, relating them to Bayesian updating,
and showing that the spaces are (in a homomorphic sense) equivalent to the
Dempster/Shafer theory of evidence space.
The viewpoint allows us to remark on limitations of the theory. By
making compromises in a different manner, an alternative combination method
can be introduced. This representation of states of belief by
"Parameterized Statistics of Experts" will be described.
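[For readers who have not seen the combination formula the abstract
refers to, Dempster's rule can be stated in a few lines. This is a
minimal sketch in Python; the two example mass functions over the frame
{flu, cold} are invented for illustration. -- KIL]

```python
def dempster_combine(m1, m2):
    """Combine two Dempster/Shafer mass functions (dicts mapping
    frozensets of hypotheses to masses). m(A) is proportional to the
    sum of m1(B)*m2(C) over all B, C with B & C == A; mass falling on
    the empty set (conflict) is discarded by renormalizing."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:
                conflict += mb * mc      # the "experts" contradict each other
    norm = 1.0 - conflict                # undefined under total conflict
    return {a: m / norm for a, m in combined.items()}

# Two experts' opinions over the frame {flu, cold}:
m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"cold"}): 0.3, frozenset({"flu", "cold"}): 0.7}
print(dempster_combine(m1, m2))
```

The renormalization step is exactly the compromise the talk proposes to
make differently.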
------------------------------
Date: Fri, 31 Jan 86 07:40:54 pst
From: Doug Coffland <coffland@lll-crg.ARPA>
Subject: Conference - Compcon Spring 86
Register for Compcon Spring 86 now and attend the year's
only broad based computing conference sponsored by the
IEEE Computer Society. Compcon will be held in San Francisco,
March 3-6, 1986.
Key topics include: supercomputers, SDI software reliability,
AI applications, Japanese software practices, RISC vs. CISC,
and more. Four full day tutorials will be given on Monday,
March 3. Topics include silicon compilation, issues in expert
systems, complex computer graphics, and high performance computing.
The advanced registration deadline is February 14. For further
information, contact Robert M. Long, Lawrence Livermore National
Laboratory, P. O. Box 808, MS L130, Livermore, CA 94550.
The telephone number is 415-422-8934. Telephone registrations
will be accepted with Visa, MasterCard, or American Express.
------------------------------
End of AIList Digest
********************
∂12-Feb-86 2041 LAWS@SRI-AI.ARPA AIList Digest V4 #25
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 20:41:01 PST
Date: Wed 12 Feb 1986 10:45-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #25
To: AIList@SRI-AI
AIList Digest Wednesday, 12 Feb 1986 Volume 4 : Issue 25
Today's Topics:
Journals - New Journal on Applied AI & CACM Invitation to Authors,
Conference - NCAI Exhibit Program,
Theory - Technology Review Article & Taxonomizing in AI
----------------------------------------------------------------------
Date: Sat, 25 Jan 86 14:21:41 est
From: FOXEA%VTVAX3.BITNET@WISCVM.WISC.EDU
Subject: New Journal on Applied AI
[Forwarded from the IRList Digest.]
New Journal: Applied Artificial Intelligence, An International Journal
Publication Information: published quarterly starting March 86
Rates: $55/volume indiv ($88 institutional) plus $24 air mail postage
Contacts: order with check or money order to -
Hemisphere Publishing Corporation, Journals Dept., 79 Madison Ave.
New York, New York 10016
Information: Elizabeth D'Costa, Circulation Mgr. (212) 725-1999
Aims and Scope: Applied Artificial Intelligence is intended to help
exchange information about advances and experiences in this field among
AI researchers. Furthermore, it will aid decision makers in industry and
management to understand the accomplishments and limitations of the
state-of-the-art of artificial intelligence.
Research to be presented will focus on methodology, time schedules,
problems, work force strength, new tools, transfer of theoretical
accomplishments to application problems, and information exchange among
concerned AI researchers and decision makers about the potential impact
of their work on their decisions.
------------------------------
Date: Mon 10 Feb 86 22:49:05-PST
From: Peter Friedland <FRIEDLAND@SUMEX-AIM.ARPA>
Subject: Invitation to Authors
I have recently been named to the Editorial Panel of Communications
of the ACM (CACM) with responsibility for artificial intelligence. CACM
is by far the widest-read computing publication with a current circulation
of over 75,000. I would like to encourage submissions to CACM in one of
several forms: articles of general interest (surveys, tutorials, reviews),
research contributions (original, previously-unpublished reports on
significant research), and reports on conferences or committee meetings.
In particular, manuscripts which act to bridge the gap between artificial
intelligence research and traditional computing methodologies are welcome.
All contributions will be fully reviewed with authors normally notified of
acceptance or rejection within 3 months of receipt.
In addition, CACM intends to devote substantial amounts of space
to special collections of related, high-quality, "Scientific American-like"
articles. For examples, see the September 1985 issue on "Architectures for
Knowledge-Based Systems" or the November 1985 issue on "Frontiers of
Computing in Science and Engineering." These special sections are usually
composed of invited papers selected by a guest editor from the community.
Professional editors at ACM headquarters devote on the order of man-weeks
per article to developing graphics and helping to make the articles readable
by a wide cross-section of the computing community. I welcome suggestions
(and volunteers) from anybody in the AI community for such special sections.
Articles and research contributions should be submitted directly
to: Janet Benton
Executive Editor, CACM
11 West 42nd St.
New York, NY 10036
Ideas for articles or special sections, and volunteers for helping
in the review process to ensure the highest quality of AI publication
in CACM should be sent to me as FRIEDLAND@SUMEX (or call 415-497-3728).
Peter Friedland
------------------------------
Date: Mon 10 Feb 86 11:39:47-PST
From: AAAI <AAAI-OFFICE@SUMEX-AIM.ARPA>
Subject: Special Invitation
The AAAI would like to extend a special invitation to academic
institutions and non-profit research laboratories to participate
in this year's Exhibit Program at the National Conference on
Artificial Intelligence, August 11-15, 1986 in the Philadelphia
Civic Center. It's important to communicate what universities and
laboratories are doing in AI by demonstrating their different
research projects to our conference attendees.
The AAAI will provide one 10' x 10' booth free of charge, describe
your demonstration in the Exhibit Guide, and assist you with your
logistical arrangements. Although we can not provide support
equipment (e.g., phone lines or computers), we can direct you to
different vendors who may be able to assist you with your equipment
needs.
If you and your department are interested in participating, please
call Ms. Lorraine Cooper at the AAAI (415) 328-3123.
------------------------------
Date: 3 Feb 86 19:46:53 GMT
From: ulysses!burl!clyde!watmath!utzoo!utcsri!utai!lamy@ucbvax.berkeley.edu
(Jean-Francois Lamy)
Subject: Re: Technology Review article
In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will"
Still thinking that fundamental breakthroughs in AI are achievable in such an
infinitesimal amount of time as 25 years is naive. I probably was not even
born when such claims could have been justified by sheer enthusiasm... Not
that we cannot get interesting and perhaps even useful developments in the
next 25 years.
>P.S. You might notice that about 10 pages into the issue, there's
> an ad for some AI system. I bet the advertisers were real
> pleased about the issue's contents...
Nowadays you don't ask for a grant or try to sell a product if the words "AI,
expert systems, knowledge engineering techniques, fifth generation and natural
language processing" are not included.
Advertisement is about creating hype, and it really works -- for a while,
until the next "in" thing comes around.
Jean-Francois Lamy
Department of Computer Science, University of Toronto,
Departement d'informatique et de recherche operationnelle, U. de Montreal.
CSNet: lamy@toronto.csnet
UUCP: {utzoo,ihnp4,decwrl,uw-beaver}!utcsri!utai!lamy
CDN: lamy@iro.udem.cdn (lamy%iro.udem.cdn@ubc.csnet)
------------------------------
Date: Fri, 7 Feb 86 20:51:58 PST
From: larry@Jpl-VLSI.ARPA
Subject: Sparklers from the Tech Review
I haven't read the Tech Review article; perhaps I shall, just to see how
different my interpretation of it is from the opinions heard here. The
discussion has made me want to offer some ideas of my own.
What we lump under AI is several different fields of research with often very
different if not contradictory approaches. As a dilettante in the AI field I
perceive the following:
COGNITIVE PSYCHOLOGY (a more restricted area than Cognitive Science) attempts
to understand biologically based thinking using behavioral and psychiatric
concepts and methods. This includes the effects emotional and social forces
exert on cognition. This group is increasingly borrowing from the following
groups.
COGNITIVE SCIENCE attempts to broaden the study to include machine-based
cognition. CS introduces heavy doses of metaphysics, logic, linguistics, and
information theory. My impression is that this area is too heavily invested
in symbol-processing research and could profitably spend more time on analog
computation and associative memories. These may better model humans' near-
instantaneous decision-making, which is more like doing a vector-sum than
doing massively parallel logical inferences.
PATTERN RECOGNITION, ROBOTICS, ETC. attempts to engineer cognition into
machines. Many workers in this field have a strong "hard-science" background
and a pragmatic approach; they often don't care whether they reproduce or
merely mimic biological cognition.
EXPERT SYSTEMS, KNOWLEDGE ENGINEERING is more software engineering than
hardware engineering. Logic, computer science, and database theory are strong
here. Some of the simpler expert systems are eminently practical and have
been around for decades--though "programmed" into trouble-shooting books and
the like rather than a computer. (And while we're on this, most of what now
passes for rule-based programming could be done in BASIC or assembly language,
including self-modifying code, using fairly simple table-driven techniques.)
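The table-driven point can be made concrete. The sketch below is a minimal
forward-chaining interpreter written in Python rather than BASIC or assembler;
the rule table and the trouble-shooting facts are invented for illustration:

```python
# Minimal table-driven "rule-based programming": rules are just rows in a
# table of (premises, conclusion) pairs, fired until nothing new is derived.
# The diagnostic rules and facts here are invented for illustration.
RULES = [
    ({"engine cranks", "no spark"}, "check ignition coil"),
    ({"engine won't crank"}, "check battery"),
    ({"check battery", "battery ok"}, "check starter motor"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"engine won't crank", "battery ok"})))
```

The interpreter itself is a dozen lines; all the "expertise" lives in the
table, which is the sense in which such systems resemble a trouble-shooting
book.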
And perhaps several more groups could be distinguished. Of course, there are
plenty of exceptions to these categories, but humans do self-select into
groups and distill ideas and techniques into a rudimentary group persona.
If I were to characterize myself, I'd probably say that I'm less interested in
AI than IA--Intelligence Amplification. I'm interested in attempts to create
machine versions of human intelligence and I have little doubt that all the
vaunted "mystical" abilities of humans will eventually be reproduced,
including self-awareness.
Some of these abilities may be much easier to reproduce than we suppose:
intuition, for instance. I'm an artist in several media and use intuition
routinely. I've spent a lot of time introspecting about what happens when I
"solve" artistic problems, and I've learned how to "program" my undermind so
that I can promise solutions with considerable reliability. I believe I could
build an intuitive computer.
But what fascinates me is the idea of building systems which combine the best
capabilities of human and machine to overcome the limits of both. I think
it's much more economical, practical, and probably even humane to, say, make a
language-translation system that uses computers to do rapid, rough transla-
tions of 99% of a text and uses human sensitivities and skills to polish and
validate the translations. (Stated like that it sounds like two batch jobs
with a pipe between them. My concept is an interactive system with both human
and computer collaborating on the job, with the human doing continuous shaping
and scheduling of the entire process.)
Now I'll go back to being an interested by-stander for another six months!
Larry @ JPL-VLSI.arpa
------------------------------
Date: 3 Feb 86 18:04:58 GMT
From: amdcad!lll-crg!seismo!rochester!lab@ucbvax.berkeley.edu (Lab Manager)
Subject: Re: Technology Review article
In article <7500002@ada-uts.UUCP> richw@ada-uts.UUCP writes:
>
>Has anyone read the article about AI in the February issue of
>"Technology Review"? You can't miss it -- the cover says something
>like: "In 25 years, AI has still not lived up to its promises and
>there's no reason to think it ever will" (not a direct quote; I don't
>have the copy with me). General comments?
They basically say that things like blocks world don't scale up, and
that AI can't model intuition because 'real people' aren't thinking
machines. An appropriate rebuttal to these two self-styled
philosophers:
"In 3000 years, Philosophy has still not lived up to its promises and
there's no reason to think it ever will."
Brad Miller Arpa: lab@rochester.arpa UUCP: rochester!lab
(also miller@rochester for non-lab stuff)
Title: CS Lab Manager
Snail: University of Rochester Computer Science Dept.
617 Hylan Building Rochester NY 14627
------------------------------
Date: Sun, 9 Feb 86 16:38:38 est
From: "Marek W. Lugowski" <marek%indiana.csnet@CSNET-RELAY.ARPA>
Subject: Taxonomizing in AI: neither useful nor harmless
> [Stan Shebs:] In article <3600036@iuvax.UUCP> marek@iuvax.UUCP writes:
>
> Date: 4 Feb 86 19:55:00 GMT
> From: ihnp4!inuxc!iubugs!iuvax!marek@ucbvax.berkeley.edu
>
> ha ha ha! "taxonomy of the field" -- the latest gospel of AI? Let me be
> impudent enough to claim one of the most misguided AI efforts to date is
> taxonomizing a la Michalski et al: setting up categories along arbitrary
> lines dictated by somebody or other's intuition. If AI does not have
> the mechanism-cum-explanation to describe a phenomenon, what right does it
> have to a) taxonomize it and b) demand that its taxonomizing be recognized
> as an achievement?
> -- Marek Lugowski
>
> I assume you have something wonderful that we haven't heard about?
I assume that you are intentionally jesting, equating that which I criticize
with all that AI has to offer. Taxonomizing is a debatable art of empirical
science, usually justified when a scientist finds himself overwhelmed with
gobs and gobs of identifiable specimens, e.g., entomology. But AI is not
overwhelmed by gobs and gobs of tangible singulars; it is a constructive
endeavor that puts up putative mechanisms to be replaced by others. The
kinds of learning Michalski so effortlessly plucks out of thin air are not
as incontrovertibly real and graspable as instances of dead bugs.
One could argue, I suppose, that taxonomizing in the absence of multitudes of
real specimens is a harmless way of pursuing tenure, but I argue in
Indiana U. Computer Science Technical Report No. 176, "Why Artificial
Intelligence is Necessarily Ad Hoc: Your Thinking/Approach/Model/Solution
Rides on Your Metaphors", that it causes grave harm to the field. E-mail
nlg@iuvax.uucp for a copy, or write to Nancy Garrett at Computer Science
Department, Lindley Hall 101, Indiana University, Bloomington, Indiana
47406.
> Or do you believe that because there are unsolved problems in physics,
> chemists and biologists have no right to study objects whose behavior is
> ultimately described in terms of physics?
>
> stan shebs
> (shebs@utah-orion)
TR #176 also happens to touch on the issue of how ill-formed Stan Shebs's
rhetorical question is and how this sort of analogizing has gotten AI into
its current (sad) shape.
Please consider whether taxonomizing kinds of learning from the AI perspective
in 1981 is at all analogous to chemists' and biologists' "right to study the
objects whose behavior is ultimately described in terms of physics." If so,
when is the last time you saw a biology/chemistry text titled "Cellular
Resonance" in which 3 authors offered an exhaustive table of carcinogenic
vibrations, offered as a collection of current papers in oncology?...
More constructively, I am in the process of developing an abstract machine.
I think that developing abstract machines is more in line with my work as
an AI worker than postulating arbitrary taxonomies where there's neither need
for them nor raw material.
-- Marek Lugowski
an AI graduate student
Indiana U. CS
------------------------------
End of AIList Digest
********************
∂12-Feb-86 2316 LAWS@SRI-AI.ARPA AIList Digest V4 #24
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Feb 86 23:16:08 PST
Date: Wed 12 Feb 1986 09:47-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #24
To: AIList@SRI-AI
AIList Digest Wednesday, 12 Feb 1986 Volume 4 : Issue 24
Today's Topics:
Queries - Literature Search & Distributed Databases,
AI Tools - LISP Source Code,
Applications - ISIS,
Journals - Belgian AI/CogSci/Epistemology Journal,
Re: Cognitive Psychology - Knowledge Structures
Humor - Animated Computer Personalities & Paranoid Computers & Koans
----------------------------------------------------------------------
Date: Mon, 10 Feb 86 14:52:34 CST
From: veach%ukans.csnet@CSNET-RELAY.ARPA
Subject: Literature search.
I am beginning a research project on the control of multiple expert
systems in a single package/environment. If anyone has any bibliographies
and/or references to literature on the control/scheduling/implementation
of multiple expert systems and would kindly share it with me I would
appreciate it. Thanks
Glenn O. Veach
Artificial Intelligence Laboratory
Department of Computer Science
University of Kansas
Lawrence, KS 66044
(913) 864-4482
------------------------------
Date: 6 Feb 86 15:13:34 GMT
From: ulysses!mhuxr!mhuxt!houxm!mtuxo!drutx!druky!krahl@ucbvax.berkeley.edu
(R.H. Krahl)
Subject: Distributed Databases
Any articles or information regarding distributed databases
with expert systems would be very much appreciated. Thanks in advance.
Rich Krahl @ AT&T-ISL, Denver EMAIL: {allegra, cbosgd, ihnp4}!druky!krahl
11900 N. Pecos
Denver, CO. 80234.
------------------------------
Date: Sat, 8 Feb 86 14:07:34 pst
From: sdcsvax!sdcrdcf!hplabs!oblio!paf@ucbvax.berkeley.edu (Paul Fronberg)
Subject: Re: request for LISP source code
Try Scheme from the GNU emacs distribution. This is the version of LISP
utilized in "Structure and Interpretation of Computer Programs". The
source is ~ $150 and includes GNU emacs + Scheme + Bison (as of 7/85).
There was no problem in getting Scheme to build on either BSD 4.2 or USG V.2
(a slight modification of the build files was necessary in the latter case).
------------------------------
Date: Mon 10 Feb 86 15:52:39-CST
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Re: ISIS
ISIS is a factory scheduling KBS developed by Mark Fox and Stephen Smith
at the Intelligent Systems Laboratory of the Robotics Institute at CMU,
in conjunction with Westinghouse. It constructs job-shop schedules,
monitors performance and avoids production bottlenecks, by evaluating and
resolving conflicting factors such as productivity goals, resource
requirements and machine preferences.
References:
Fox and Smith, "ISIS -- a KBS for factory scheduling", Expert Systems, v. 1,
n. 1, July 1984, pp. 25-49.
Fox, Smith, et al., "ISIS: A Constraint-Directed Reasoning Approach to Job
Shop Scheduling", Proc. IEEE Conf. on Trends and Applications 83,
Gaithersburg, MD, May 1983.
Dallas Webster
Burroughs Austin Research Center
CMP.BARC@R20.UTexas.Edu
{ihnp4, seismo, ctvax}!ut-sally!batman!dallas
------------------------------
Date: 10 FEB 86 17:04-N
From: KEMPEN%HNYKUN52.BITNET@WISCVM.WISC.EDU
Subject: Info on Belgian AI journal (AIList)
The journal is called:
Title: CC-AI
Subtitle: The journal for the integrated study of Artificial
Intelligence, Cognitive Science and Applied Epistemology.
Editorial Address:
CC-AI
Blandijnberg 2
B-9000 Ghent, Belgium
tel. +32 (91) 257571, ext. 4522
TELEX RUGENT 12.754
Publisher:
Communication & Cognition
(Same address)
Gerard Kempen
------------------------------
Date: Mon, 10 Feb 86 18:23:59 pst
From: sdcsvax!sdcrdcf!ucla-cs!koen@ucbvax.berkeley.edu (Koenraad Lecot)
Subject: Re: J. of AI, Cognitive Science and Applied Epistemology
The journal published a couple of issues last year. Papers cover a wide
variety of topics within AI; the material is not too technical. I have not
received any issues this year yet.
-- Koenraad Lecot
------------------------------
Date: Tue, 11 Feb 86 09:36:10 cst
From: bulko@SALLY.UTEXAS.EDU (Bill Bulko)
Reply-to: bulko@sally.UUCP (Bill Bulko)
Subject: Re: Cognitive Psychology - Knowledge Structures
My attempted mail reply to thompson@umass-cs.csnet failed, so I'm
posting this instead. The request was for pointers to articles dealing
with how varying levels of expertise could be represented. My research
is related to problem solving in physics, and so I have read several papers
dealing with the way people learn how to solve problems in technical fields.
Below is an excerpt from my proposal containing the related (annotated)
references; I hope that they prove helpful.
Bhaskar, R., and H. A. Simon, "Problem Solving in Semantically Rich
Domains: An Example from Engineering Thermodynamics." Cognitive Science,
Vol. 1, No. 2, April 1977.
This is a study of the processes used by people to solve problems in
semantically rich domains, and how these processes compare with those in
general problem-solving domains. The authors choose the field of
thermodynamics, and use a protocol-encoding program called SAPA, which they
theorize corresponds to their subject's problem-solving behavior.
Chi, M. T. H., P. Feltovich, and R. Glaser, "Categorization and
Representation of Physics Problems by Experts and Novices." Cognitive
Science, Vol. 5, No. 2, April-June 1981.
The authors compare the ways experts and novices categorize physics problems
and form physical models of the problems based on the categories created.
Studies are presented which investigate the implications of the differences
found for problem solving in general.
Larkin, J., J. McDermott, D. Simon, and H. A. Simon, "Models of Competence in
Solving Physics Problems." Cognitive Science, Vol. 4, No. 4, October-
December 1980.
This article discusses how a person's experience and expertise in solving
physics problems determine the process by which he solves them. The authors
describe a set of two computer programs which they claim are accurate models
of "expert" and "novice" problem-solving protocols.
Larkin, J., and H. A. Simon, "Learning Through Growth of Skill in
Mental Modeling." Proceedings of the Third Annual Conference of
the Cognitive Science Society, p. 106.
The authors study how people develop the ability to take physical situations
and re-represent them in terms of scientific entities. They present a program
called ABLE, which models the performance of human experts and novices as they
solve physics problems, from this learning point of view.
Luger, G., "Mathematical Model Building in the Solution of Mechanics
Problems: Human Protocols and the MECHO Trace." Cognitive Science,
Vol. 5, No. 1, January-March 1981.
Luger describes an automatic problem solver, MECHO, and describes how it
can be used for model building and manipulation in solving problems in
physics. He compares traces of MECHO with the problem-solving protocols of
several human subjects, and hypothesizes that these traces are similar to the
model-building techniques that people in general use.
Hope these help,
Bill
"In the knowledge lies the power." -- Edward A. Feigenbaum
"Knowledge is good." -- Emil Faber
Bill Bulko Department of Computer Sciences
The University of Texas {ihnp4,harvard,gatech,ctvax,seismo}!sally!bulko
------------------------------
Date: 27 Jan 86 16:33:00 GMT
From: pur-ee!uiucdcs!uicsl!pollack@ucbvax.berkeley.edu
Subject: Re: Two AI software packages
RE: Mom
There was an article by Thomas Friedman in the NYT a couple
of months ago on two programs for the Atari ST written
by "the Israeli equivalent of Garry Trudeau":
"MOM" and "MURRAY" are animated computer personalities,
They sit in comfortable chairs on the screen and talk to you.
Murray is a raconteur, with supposedly an ever-expanding database
of humor, and a memory for the jokes he already told you, and MOM
is a typical mother figure, who can make you feel guilty for
anything, even spending the $49 to buy her. Their dialog appears in
white bubbles above their heads, and the user gets
to type in their name and answer yes/no questions.
------------------------------
Date: Fri, 31 Jan 86 14:16:32 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: noaK
Re: 2010 and H-Mobius Loops and HAL's paranoia (Vol 4 # 17).
Why not give HAL (an intelligent system) the Rorschach inkblot test,
"to show intelligence, personality and mental state"?
Another psychological test, the IQ test, was proposed in Volume 3,
Number 164.
Gordon Joly
aka
The Joka
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: {...!seismo!mcvax}!ukc!kcl-cs!qmc-ori!gcj
------------------------------
Date: Mon, 10 Feb 86 14:08:31 EST
From: decwrl!decvax!sunybcs!colonel@ucbvax.berkeley.edu (Col. G. L. Sicherman)
Subject: Re: ai koans
A P.I. who was trying to meet a deadline said to his
assistant: "Excuse me, I couldn't help noticing that
you're not working!"
"The computer isn't working," the assistant replied.
PASK, overhearing them, commented: "Not the assistant,
not the computer. The man-machine interface isn't
working."
------------------------------
End of AIList Digest
********************
∂14-Feb-86 0024 LAWS@SRI-AI.ARPA AIList Digest V4 #26
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Feb 86 00:24:04 PST
Date: Thu 13 Feb 1986 21:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #26
To: AIList@SRI-AI
AIList Digest Friday, 14 Feb 1986 Volume 4 : Issue 26
Today's Topics:
Queries - Automatic Testing of Parsers & Baseball Expert Systems,
Literature - AI in Engineering & Business Week on Expert Systems,
AI Tools - LISP Compilers,
Education - ICAI for the Physically/Mentally Impaired,
Games - Artificial Animals & Software Robots
----------------------------------------------------------------------
Date: 13 Feb 86 10:52:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: automatic testing of parsers
Do any systems exist which can accept a body of BNF (or some other
syntactic production rules), and then generate or enumerate test
cases to be run against an alleged parser of that BNF?
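Such a generator is straightforward to sketch. The encoding below (a
dictionary mapping nonterminals to lists of alternatives, with a depth bound
to force termination) is an invented illustration, not a standard BNF
representation:

```python
import random

# Sketch of a test-sentence generator for a toy grammar. The grammar
# encoding (nonterminal -> list of alternatives, each a symbol list)
# and the grammar itself are invented for illustration.
GRAMMAR = {
    "<expr>": [["<term>"], ["<term>", "+", "<expr>"]],
    "<term>": [["x"], ["(", "<expr>", ")"]],
}

def generate(symbol, rng, depth=0, max_depth=6):
    """Expand symbol into a list of terminal tokens."""
    if symbol not in GRAMMAR:          # terminal symbol
        return [symbol]
    alts = GRAMMAR[symbol]
    # Near the depth limit, take the shortest alternative so expansion stops.
    alt = min(alts, key=len) if depth >= max_depth else rng.choice(alts)
    out = []
    for s in alt:
        out.extend(generate(s, rng, depth + 1, max_depth))
    return out

rng = random.Random(0)
for _ in range(3):
    print(" ".join(generate("<expr>", rng)))
```

Every string produced is valid by construction, so such a tool exercises only
the accepting half of a parser; invalid-input cases still need mutation or
hand construction.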
Thanks in advance for any help...
John Cugini <Cugini@NBS-VMS>
National Bureau of Standards
------------------------------
Date: 8 Feb 86 00:31:08 GMT
From: sdcsvax!noscvax!priebe@ucbvax.berkeley.edu (Carey E. Priebe)
Subject: baseball expert systems
****************************************************************
i need pointers to or information about expert systems that have
been developed for the baseball domain. i would be interested
in research or incomplete programs as well as mature systems. i
believe there was some related work ongoing at yale recently,
perhaps focusing on natural language, but my information is sketchy.
reply directly to me or through the net.
thanx in advance.
cp
*****************************************************************
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: AI in Engineering
As a news editor for "Artificial Intelligence in Engineering", I
request that people send me information on new applications of
artificial intelligence to engineering problems, whether they be
products, research efforts, industrial applications or related items
such as conferences or new findings.
Please send the information to me at:
Laurence L. Leff
Computer Science and Engineering
Southern Methodist University
Dallas, Texas 75275
bitnet: E1AR0002 at SMUVM1
Arpanet, CSNET leff%smu@csnet-relay
UUCPnet ihnp4!convex!smu!leff
------------------------------
Date: Thu 13 Feb 86 10:58:34-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Business Week on Expert Systems
Check the February 10 issue of Business Week, pp. 94, 98-99, for a
discussion of the funding and prospects of Intellicorp, Teknowledge,
Inference Corp., and the Carnegie Group. They are described as the
Gang of Four in AI.
-- Ken Laws
------------------------------
Date: Wed, 12 Feb 86 09:00:25 est
From: sdcsvax!dcdwest!ittatc!decvax!linus!raybed2!gxm@ucbvax.berkeley.edu
(GERARD MAYER)
Subject: Re: LISP Compilers?
Get in touch with Franz Inc., 2920 Domingo Ave, Suite 203, Berkeley, CA 94705
(415) 540-1224 for common lisp product running on unix.
Gerard Mayer
Raytheon Research Division
uucp ..linus!raybed2!gxm
------------------------------
Date: Wed, 12 Feb 86 17:59:06 mst
From: ulysses!ihnp4!alberta!arms@ucbvax.berkeley.edu (Bill Armstrong)
Subject: Re: ICAI for Physically/mentally Impaired
There is a softcover book: Microcomputer Resource Book for Special
Education by Dolores Hagen published by Reston in 1984. It deals
with questions of the learning impaired, deaf, blind, and physically
handicapped, but points out that a lot of software is useful
to the handicapped even if it isn't so labelled.
The ISBN numbers are 0-8359-4345-3 and 0-8359-4344-5 (paperback)
Call number LC4019.H33 1984.
I don't know whether it satisfies the ICAI criterion or is just
CAI. The person to talk to about ICAI would be
Greg Kearsley, Courseware, Inc.,
10075 Carroll Canyon Road, San Diego, California 92131.
I hope this helps you.
------------------------------
Date: Thu 23 Jan 86 10:44:55-PST
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: Artificial Animals
[Excerpted from the AI-Ed distribution.]
Computer Currents, 22-oct-85 [a computer newspaper]
Strehlo: What's the nature of the research?
Kay: "It's yet another attempt to try and understand the thin
edge of the long wedge. At PARC, the children used Smalltalk on the
interim Dynabook to build their own application programs, their own
editors and animation and stuff like that. In this case, we're sort
of upping the ante to try and do a system in which the children can
create little mentalities, animal level mentalities that can be put
into a simulated environment where they have to survive. If you will,
it's like creating a little Disney character that you then put out
into a big world."
Strehlo: We see this kind of thing on a simple level in adventure
games where the player has to give characters the traits needed to
achieve some goal.
Kay: Right, exactly.
Strehlo: And this just goes further? How would it go further?
Kay: "It goes a lot further. We're shooting for something that will
be dynamically animated and will actually learn things. The idea is
to get kids to be more thoughtful about thinking by getting them to
try to think about how animals think, and by taking the results of
these contemplations and actually building animal-like creatures that
work. It's exciting. There's very little in existing AI or computer
graphics that really serves this project, which is nice. We get to
invent it." [AI-ED editor: If you are familiar with Doug Lenat's
work, you might not be surprised to learn that Doug and Alan are
friends. When Alan was at Atari, Doug consulted on the KNOESPHERE
project along with Alan Borning, David McDonald, Craig Taylor &
Stephen Weyer ... in alphabetical order. See IJCAI proceedings #8,
p.167-169 if you are interested .. it's a bit vague and far out though]
Strehlo: Who do you have working with you on this project?
Kay: I've got Marvin Minsky helping on the AI stuff, I've got Seymour
Papert helping on some of the curriculum design, I've got the visual
language lab at MIT helping on the graphics for the animals and stuff.
All different kinds of disciplines, different kinds of students, are
working on it. If we can anchor the place over the next couple of
years, and there's every reason to believe it's going to happen,
Project Vivarium is going to be the most exciting place in the world
to work.
[...]
------------------------------
Date: 5 Feb 86 13:32:05 GMT
From: decwrl!pyramid!pesnta!phri!greenber@ucbvax.berkeley.edu (Ross
Greenberg)
Subject: A contest in 'C'...
There is a game making the rounds on some of the MS-DOS BBS's called
CROBOTS. It is an interesting game that allows those who respond to
determine just how good their 'C' programming is.
In this game, you program your "robot" to seek out and destroy other
robots that have been programmed by someone else. Each robot has the
capability of movement, sensor detection of other robots, and the
ability to fire a cannon at a given direction and range.
Typical robots might use programs that allow the robot to scan the
playfield, locate any one of four opponents, fire a cannon at that
opponent, and start zig-zagging towards that opponent while firing
a cannon.
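That scan/fire/zig-zag behavior amounts to a small control loop. The Python
sketch below is only an illustration: the sensor and actuator functions are
invented stand-ins for the CROBOTS intrinsics, stubbed out so the logic runs
on its own:

```python
# Sketch of the scan/fire/zig-zag loop described above. The sensor and
# actuator interfaces are invented stubs, not the real CROBOTS intrinsics.
def make_robot(scan, cannon, drive):
    """Return a step function: sweep the scanner, fire on contact, zig-zag."""
    state = {"angle": 0, "zig": 45}
    def step():
        rng = scan(state["angle"], resolution=10)   # range to a target, 0 = none
        if rng:
            cannon(state["angle"], rng)             # fire at the contact
            drive(state["angle"] + state["zig"], speed=50)
            state["zig"] = -state["zig"]            # alternate the zig-zag
        else:
            state["angle"] = (state["angle"] + 10) % 360  # keep sweeping
        return rng
    return step

# Stub environment: one stationary target at bearing 30, range 400.
shots = []
scan = lambda a, resolution: 400 if abs(a - 30) <= resolution else 0
cannon = lambda a, r: shots.append((a, r))
drive = lambda a, speed: None
step = make_robot(scan, cannon, drive)
for _ in range(10):
    step()
print(shots)
```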
If you are interested in determining how *your* robot stands up
to other robots, then here are the contest rules:
1) Get a copy of the program from a local MS-DOS machine.
There may be a UNIX version out, but I'm not aware of
it.
2) Create a robot that will (2 out of 3 times), destroy
the preconfigured robots that come in the .ARC package.
3) Document your robot's code and send it off to me at the
below address. Entries accepted until March 1, 1986.
4) You may enter no more than two robots.
The way I'll run the contest should work, although comments are
welcomed:
For every four robots that come in, I'll send them off to battle.
I'll run the simulation twice for each four, or until I have a
clear consensus of which two of the robots make it to the next
round.
This process will be repeated until there are finally only four
top robots. They'll slug it out until I can determine which are
the top two. From that, of course, I can determine which is the
robot that deserves the applause.
The top four robots will be posted to the net. Each losing robot
will be returned to its designer, along with the code for the
robots which destroyed it.
Consider this first contest the beginning round. The next round
will be in about three months.
And I forgot to tell you where some of these boards are....
Two that I know of are:
NYACC (New York Amateur Computer Club) at 1-718-539-3338
and my board at 1-212-889-6438, login with 'demo' and 'demo'.
Happy Robot Designing....
Good Luck!
Ross
ross m. greenberg
ihnp4!allegra!phri!sysdes!greenber
[phri rarely makes a guest-account user a spokesperson. Especially not me.]
------------------------------
Date: 8 Feb 86 15:54:14 GMT
From: ulysses!mhuxr!mhuxt!houxm!mtuxo!npois!npoiv!bad@ucbvax.berkeley.edu
(Bruce Dautrich)
Subject: Re: A contest in 'C'...
This game sounds like a game called bolo, which to my knowledge
was first written by Peter Langston, who also wrote empire.
------------------------------
End of AIList Digest
********************
∂14-Feb-86 0240 LAWS@SRI-AI.ARPA AIList Digest V4 #27
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Feb 86 02:40:18 PST
Date: Thu 13 Feb 1986 22:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #27
To: AIList@SRI-AI
AIList Digest Friday, 14 Feb 1986 Volume 4 : Issue 27
Today's Topics:
Query - OPS5 Demo,
Cognitive Psychology - Knowledge Structures,
Games & Logic - Prisoners' Dilemma Computer Programs Tournament
----------------------------------------------------------------------
Date: 6 Feb 86 22:19:00 GMT
From: pur-ee!uiucdcs!convex!ctvax!kerry@ucbvax.berkeley.edu
Subject: OPS5 Demo Needed
Does anyone know where I can get a good production system demo that will
run on the FRANZ LISP version of OPS5 (VPS)?
------------------------------
Date: Fri, 14 Feb 86 00:56:33 EST
From: Mark Weiser <mark@mimsy.umd.edu>
Reply-to: mark@maryland.UUCP (Mark Weiser)
Subject: Re: Cognitive Psychology - Knowledge Structures
In article <8602111536.AA15674@sally.UTEXAS.EDU>
sally!bulko (Bill Bulko) writes:
> Chi, M. T. H., P. Feltovich, and R. Glaser, "Categorization and
> Representation of Physics Problems by Experts and Novices." Cognitive
> Science, Vol. 5, No. 2, April-June 1981.
> The authors compare the ways experts and novices categorize physics problems
> and form physical models of the problems based on the categories created.
> Studies are presented which investigate the implications of the differences
> found for problem solving in general.
A related paper is :
Mark Weiser and Joan Shertz. "Programming problem representation
in novice and expert programmers." International Journal of
Man-Machine Studies. December 1983. pp. 391-398.
This paper is an application of some of the Chi, Feltovich, and Glaser
methodology to the problem space of programming, with generically
similar results. Differences in detail include categories of
problem-solving used and not used by experts (algorithms yes,
data-structures no), and differences between expert programmers
and expert former programmers now programming managers.
-mark
Spoken: Mark Weiser ARPA: mark@maryland Phone: +1-301-454-7817
CSNet: mark@umcp-cs UUCP: {seismo,allegra}!umcp-cs!mark
USPS: Computer Science Dept., University of Maryland, College Park, MD 20742
------------------------------
Date: 7 Feb 86 10:08:45 PST
From: MEGIDDO@IBM-SJ.ARPA
Subject: Prisoners' Dilemma Computer Programs Tournament
First Announcement of a
COMPUTER PROGRAMS TOURNAMENT
(of the Prisoners' Dilemma game)
1. INTRODUCTION
This is a first announcement of a tournament for computer programs,
playing the famous Prisoners' Dilemma game. Detailed instructions and
some background information are provided below. The tournament is
organized for the purpose of research and no prizes are offered. It
is intended however that the results and winners' names will be
published with permission from the persons involved. One of the goals
is to see what will happen during a SEQUENCE of tournaments in which
information about the participating programs will be released, so that
participants will be able to revise their programs. The tournament is
open to everyone. However, notice the warnings below. If you have
access to electronic mail then you can participate by submitting a
FORTRAN program according to the instructions below. By doing so you
will also release and waive all your copyright rights and any other
intellectual property rights to your program. It will also be assumed
that you have not violated any rights of any third party. This
announcement also includes some programs that will help you prepare
for the tournament.
2. BACKGROUND
The so-called prisoners' dilemma game has drawn the attention
of researchers from many fields: psychology, economics, political
science, philosophy, biology, and mathematics. Computer scientists
are also interested in this game in the context of fundamentals of
distributed systems.
The game is simple to describe, does not require much skill and is yet
extremely interesting from both the theoretical and practical points
of view. By the (one-shot) Prisoners' Dilemma game we refer to a game
as follows. The game is played by two players with symmetric roles.
Each has to choose (independently of the other) between playing action
C ("cooperate") or action D ("defect"). The scores to the two
players, corresponding to the four possible combinations of choices of
actions, are as shown in the following table:
Player 2
C D
---------------
| 3 | 4 |
C | | |
| 3 | 0 |
Player 1 |-------|-------|
| 0 | 1 |
D | | |
| 4 | 1 |
---------------
Thus, both players score 3 if both play C. Both score 1 if both play D.
If one plays C and the other one plays D then the one who plays C scores
0 while the other one scores 4.
The prisoner's dilemma game has been the subject of many experiments.
A tournament was organized several years ago by R. Axelrod who later
published a book on it under the title "The evolution of cooperation"
(Basic Books, Inc., New York, 1984).
Following is some discussion for the benefit of readers who are not
familiar with the fundamental considerations of how to play the game. One
should be careful to distinguish the one-shot game from the REPEATED game
in which the (one-shot) game is played many times, and after each round
both players are informed of each other's actions. Furthermore, one
should distinguish between the infinitely repeated game and the finitely
repeated one. These seem to be quite different from the point of view
of equilibrium. An equilibrium in a 2-person game is a pair (S1,S2) of
strategies (one for each player) such that, given that player i (i=1,2)
is playing Si , the other player, j=3-i, scores the maximum if he plays
Sj .
We are interested here in the finitely repeated game where the number
of rounds is known in advance. We first consider the one-shot game.
The analysis of the one-shot game is obvious. Each of the players
realizes that no matter what his opponent does, it is always better
for him to play D rather than C. Thus, under a very weak assumption
of rationality (namely, players do not choose actions that are
strictly dominated by other actions), the pair of actions (D,D)
remains the only rational choice. The resulting score of (1,1) is
inferior to (3,3), which is possible if the choices are (C,C), and
this is the source of the "dilemma".
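The dominance claim above can be checked mechanically. A minimal sketch
(in Python rather than the tournament's FORTRAN, purely illustrative and
not part of the original announcement):

```python
# Row player's score from the payoff table above: my move first,
# then the opponent's.  "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

# D strictly dominates C: whatever the opponent plays, D scores more ...
for his in ("C", "D"):
    assert PAYOFF[("D", his)] > PAYOFF[("C", his)]

# ... yet mutual cooperation beats mutual defection -- the dilemma.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
print("dominance verified")
```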
To get some insight into the more general case, consider first
the 2-round game. After the first round (in which the players choose
independently C or D) each player is informed of the choice of the
other one and then, once again, the players choose independently C or
D. In this game each player has EIGHT strategies that can be coded in
the form XYZ where each of X,Y and Z equals either C or D. The
interpretation of this notation is as follows. (1) Play X in round 1.
(2) In round 2, play Y if the opponent played C and play Z if the
opponent played D. It is easy to verify that any strategy XYZ is
dominated (at least weakly) by XDD (that is, regardless of what was done
in round 1, and regardless of what the opponent does in round 2, it is
never worse, and sometimes strictly better, to play D rather than C in
round 2). However, there is no
domination relation between the strategies CDD and DDD: if player 2
plays DDD then player 1 is better off playing DDD rather than CDD,
whereas if player 2 plays DCD, player 1 is better off playing CDD
rather than DDD. Of course, strategy DCD for player 2 is dominated by
DDD, but in order for player 1 to deduce that player 2 will not play
DCD, he has to assume that player 2 is capable of discovering this
domination. Under such an assumption player 1 can eliminate 2's DCD.
Thus, if both players are "rational" they are left only with strategy
DDD as a reasonable choice.
A similar process of repeatedly eliminating dominated strategies
applies to the general N-round game. It is dominant for both players
to defect in the last round. Therefore (after we drop all strategies
that play C in the last round), it becomes dominant to defect in round
N-1, and so on. This eventually leaves both players only with the
strategy of always playing D.
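The elimination argument for the 2-round game can be verified by brute
force. The following sketch (Python, illustrative only; the XYZ strategy
encoding is the one defined above) repeatedly deletes dominated
strategies, taking "dominated" in the weak sense (never better, sometimes
worse), and confirms that only DDD survives:

```python
from itertools import product

# One-shot payoffs from the table above: my move first, opponent's second.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

# The eight 2-round strategies XYZ: X = round-1 play; Y/Z = round-2 play
# when the opponent's round-1 play was C/D respectively.
STRATS = ["".join(p) for p in product("CD", repeat=3)]

def score(s, o):
    """Total score of strategy s against strategy o over the two rounds."""
    first = PAYOFF[(s[0], o[0])]
    mine = s[1] if o[0] == "C" else s[2]    # my round-2 reply
    his = o[1] if s[0] == "C" else o[2]     # his round-2 reply
    return first + PAYOFF[(mine, his)]

def dominated(s, pool):
    """True if some t in pool does at least as well as s against every
    opponent in pool and strictly better against at least one."""
    return any(all(score(t, o) >= score(s, o) for o in pool) and
               any(score(t, o) > score(s, o) for o in pool)
               for t in pool if t != s)

pool = set(STRATS)
while True:
    drop = {s for s in pool if dominated(s, pool)}
    if not drop:
        break
    pool -= drop
print(sorted(pool))   # ['DDD']
```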
The winner in both tournaments run by R. Axelrod was the simple
strategy called "Tit-for-Tat". It starts by playing C and in round i+1
plays whatever the opponent played in round i. It seems like a very good
strategy for playing the repeated dilemma for an indefinite number of
rounds. In the N-round game it is obvious that an improvement over Tit-
for-Tat would be to play Tit-for-Tat except for the last round in which
the optimal play is always to defect.
3. HOW TO PARTICIPATE IN THE TOURNAMENT?
If you think you understand the dilemma quite well and would like to
participate in this tournament then please act according to the following
instructions:
1. Design a strategy of how to play the game when the number of rounds
is known in advance. The strategy should specify what to do in round 1
and, at any later point of the game, what to do in the next round, given
what has been done so far and the number of rounds left.
2. Write a FORTRAN subroutine with the following specifications. Give
it a six-letter name, for example, the first four letters of your last
name followed by two initials. Suppose you picked the name JONERJ for
your subroutine. Then the first line of your program should look as
follows.
SUBROUTINE JONERJ (N,J,I,M)
The arguments are defined as follows.
N - This is the total number of rounds to be played. Whenever your
program is called it is told the total number of rounds and
this will not change during a single game.
J - This is the serial number of the round you are supposed to play in
the current call.
I - When J is greater than 1, this argument tells you what your opponent
played in the previous round. If I=1 it means your opponent
played C. If I=2 then he played D. Any other value is an error.
M - This is what you return as your play in the current round. M=1 means
you play C. M=2 means you play D. Any other value will result in an
error.
Your subroutine may compute anything you wish. In particular, it may
keep track of the entire history of a single (N-round) game. However,
it will not be able to record past games against any opponent since it
will be unloaded at the end of a single N-round game. Please be
reasonable with respect to the space and time you intend your program to
use. Unreasonable programs will have to be dropped from the tournament
at the discretion of the organizers. Also, if your program ever returns
a faulty play, that is, it returns an M which is neither 1 nor 2, then it
will be dropped from the tournament automatically.
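A referee for this calling convention is easy to sketch in any language.
The following is an illustration in Python (the actual tournament referee
is not published; the function name and details here are my own
assumptions). Each entry is modeled as a function (n, j, i) -> m
mirroring the FORTRAN argument list:

```python
def referee(entry1, entry2, n):
    """Play one n-round game between two entries.

    Each entry is a function (n, j, i) -> m mirroring the FORTRAN
    SUBROUTINE NAME (N,J,I,M): n = total rounds, j = current round,
    i = the opponent's previous play (1=C, 2=D; meaningless when j=1),
    m = this round's play (1=C, 2=D).  A faulty m is rejected, since
    the announcement says such a program is dropped automatically.
    """
    payoff = {(1, 1): (3, 3), (1, 2): (0, 4), (2, 1): (4, 0), (2, 2): (1, 1)}
    score1 = score2 = 0
    prev1 = prev2 = 1                     # dummy values for round 1
    for j in range(1, n + 1):
        m1 = entry1(n, j, prev2)
        m2 = entry2(n, j, prev1)
        if m1 not in (1, 2) or m2 not in (1, 2):
            raise ValueError("faulty play: M must be 1 or 2")
        p1, p2 = payoff[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        prev1, prev2 = m1, m2
    return score1, score2
```

For example, referee(lambda n, j, i: 1, lambda n, j, i: 2, 3) pits
unconditional cooperation against unconditional defection over 3 rounds
and returns (0, 12).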
3. Fill in the following information (to be transmitted only by
electronic mail):
NAME:______________________________________________________________
AFFILIATION:_______________________________________________________
STREET:____________________________________________________________
CITY:____________________ STATE:_____________ Zip:________________
COUNTRY:___________________________________________________________
TELEPHONE:_________________________________________________________
ELECTRONIC MAIL ADDRESS:___________________________________________
4. Important notice!
_________________________________________________________
| By sending your program to any one of the following |
| addresses you agree to waive and release, to the extent |
| permitted by law, all your copyright rights and other |
| intellectual property rights in your computer program. |
| You also warrant that no portion of your program or its |
| use or distribution, violates or is protected by any |
| copyright or other intellectual property right of any |
| third party. You also warrant you have the right to, |
| and hereby do, grant to IBM a royalty-free license to |
| use your program. If any contestant is a minor under |
| the laws of the state in which contestant resides, at |
| least one of the contestant's parents should sign this |
| warranty and license. IBM may elect to publish the |
| results of the contest; names of participants or their |
| submissions will not be published without the written |
| approval and signature of the individual authors. |
|_________________________________________________________|
Please transmit your program by March 31, 1986, along with the completed
questionnaire, to one of the following addresses:
CSNET or ARPANET: megiddo@ibm-sj
VNET or BITNET : megiddo at almvma
4. TRAINING PROGRAM
For your convenience, we include here an interactive program that lets
you play the game with another "player". While playing this interactive
program please remember that your goal is actually to SCORE high and not
necessarily to BEAT the other player. In the tournament, your ability
to affect the other player's total score is limited, since he plays
against many other players besides you. Thus you will benefit if you
create "confidence" so that both of you end up playing C very often.
You have the option of either playing yourself or using the subroutine
that represents you. If you use a subroutine then you have to name it
MINE and follow the instructions in Section 3. Simply append it to the
following program. You are advised to use this option to test your own
program before submitting it to the tournament.
INTEGER SCORE,SCORE2,CH1,CH2,PRE1,PRE2,CC,DD,CD,DC
C
DATA CC,DD,CD,DC/3,1,0,4/
20 SCORE = 0
SCORE2 = 0
PRE1=1
PRE2=1
WRITE(6,102)
102 FORMAT(' ENTER NUMBER OF ROUNDS YOU WISH TO PLAY (0=END)')
103 FORMAT (I6)
READ (5,*) NR
IF (NR.LE.0) STOP
118 FORMAT(' WILL YOU (1) PLAY OR WILL YOUR SUBROUTINE (2) DO? (1/2)')
430 WRITE (6,118)
READ (5,*) II
IF (II.EQ.2) GO TO 420
IF (II.NE.1) GO TO 430
420 DO 30 JR = 1, NR
104 FORMAT(' ROUND NO.',I6,' OF',I6,' ROUNDS. PLEASE ENTER 1 OR 2')
IF (II.EQ.2) GO TO 440
WRITE (6,104) JR,NR
40 CONTINUE
READ (5,*) CH1
GO TO 450
440 CALL MINE(NR,JR,PRE2,CH1)
IF ((CH1-1)*(CH1-2)) 470,71,470
470 WRITE (6,117)
117 FORMAT (' YOUR SUBROUTINE RETURNED A FAULTY PLAY')
GO TO 20
450 IF ((CH1-1)*(CH1-2)) 70,71,70
70 IF (CH1.EQ.0) GO TO 20
105 FORMAT(' PLEASE ENTER EITHER 1 OR 2 . (0=END)')
WRITE (6,105)
GO TO 40
71 IF (JR-1) 220,220,230
220 CH2 = 1
IF (NR.EQ.1) CH2 = 2
GO TO 300
230 IF (JR-NR) 250,260,260
250 CH2 = PRE1
GO TO 300
260 CH2 = 2
107 FORMAT(' PLAY WAS: YOU=',I3,' OPPONENT=',I3)
300 WRITE(6,107) CH1,CH2
IF (CH1-1) 110,110,120
110 IF (CH2-1) 130,130,140
130 SCORE = SCORE + CC
SCORE2 = SCORE2 + CC
GO TO 35
140 SCORE = SCORE + CD
SCORE2 = SCORE2 + DC
GO TO 35
120 IF (CH2-1) 150,150,160
150 SCORE = SCORE + DC
SCORE2 = SCORE2 + CD
GO TO 35
160 SCORE = SCORE + DD
SCORE2 = SCORE2 + DD
35 WRITE (6,106) SCORE,SCORE2
106 FORMAT (' NEW TOTAL SCORE: YOU=',I5,' OPPONENT=',I5)
PRE1=CH1
PRE2=CH2
30 CONTINUE
GO TO 20
END
5. SAMPLE PROGRAMS
For your convenience we include here copies of two sample programs.
The first subroutine, called TIFRTA, plays Tit-for-Tat (see Section 2)
except that it always defects in the last round. The second, called
GRIM, starts by playing C but switches to D the first time the opponent
has played D. It also always defects in the last round.
SUBROUTINE TIFRTA (N,J,IHE,MY)
C
C THIS IS THE TIT-FOR-TAT RULE. IN ROUND 1 PLAY 1 (COOPERATE). IN
C ROUND N PLAY 2 (DEFECT). OTHERWISE, PLAY WHAT THE OPPONENT PLAYED
C IN THE PRECEDING ROUND.
C
C N = TOTAL NUMBER OF ROUNDS
C J = CURRENT ROUND
C IHE = THE CHOICE OF THE OPPONENT IN THE PRECEDING ROUND (1 OR 2)
C MY = MY CHOICE FOR THE CURRENT ROUND (1 OR 2)
C
IF (J-1) 20,20,30
20 MY = 1
IF(N.EQ.1) MY=2
RETURN
30 IF (J-N) 50,60,60
50 MY = IHE
RETURN
60 MY = 2
RETURN
END
C
C
SUBROUTINE GRIM (N,J,IHE,MY)
C
C THIS IS THE GRIM STRATEGY: START WITH C AND SWITCH TO D
C AS SOON AS THE OPPONENT DOES. IT ALSO DEFECTS IN THE LAST ROUND.
C IX CARRIES THE STATE BETWEEN CALLS, SO IT MUST BE SAVED; NOTE
C THAT IHE IS MEANINGLESS IN ROUND 1 AND MUST NOT BE TESTED THEN.
C
SAVE IX
IF (J-1) 10,10,20
10 IX = 1
GO TO 25
20 IF (IHE.EQ.2) IX = 2
25 MY = IX
IF (J.EQ.N) MY = 2
RETURN
END
------------------------------
End of AIList Digest
********************
∂16-Feb-86 2310 LAWS@SRI-AI.ARPA AIList Digest V4 #28
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Feb 86 23:10:41 PST
Date: Sun 16 Feb 1986 20:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #28
To: AIList@SRI-AI
AIList Digest Monday, 17 Feb 1986 Volume 4 : Issue 28
Today's Topics:
Seminars - A Design System for Engineering (MIT) &
Fuzzy Logic and Common Sense Knowledge (SD Sigart) &
Knowledge Engineering, Ontology (Oregon State) &
Explanation-Based Learning (MIT) &
Reactive Systems (SRI) &
Temporal Logic for Concurrent Programs (CMU),
Course - Spring Quarter Seminar on Rule-Based Systems (SU)
----------------------------------------------------------------------
Date: 13 Feb 1986 10:39 EST (Thu)
From: Claudia Smith <CLAUDIA%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Seminar - A Design System for Engineering (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
AN INTEGRATED DESIGN SYSTEM
FOR ENGINEERING
Robin J. Popplestone
Edinburgh University
Scotland
I discuss the representation of mechanical engineering designs in a
Logic programming context, and the exploration of a space of different
possible designs. Designs are represented in terms of modules, which
are basic concrete engineering entities (eg. motor, keyway, shaft).
Modules interact via ports, and have an internal structure expressed
by the part predicate. A taxonomic organisation of modules is used as
the basis for making design decisions. Subsystems employed by the
design system include the spatial relational inference mechanism
employed in the RAPT robot language, the Noname geometric modeller
developed at Leeds University and the Press symbolic equation solver.
The system is being implemented in the POPLOG system. An assumption
based truth maintenance system based on the work of de Kleer is being
implemented to support the exploration of design space.
Tuesday, Feb. 18, 1986
4pm
NE43, 8th Floor Playroom
Hosts: Professors Brooks and Lozano-Perez.
------------------------------
Date: 14 Feb 86 09:01 PST
From: sigart@LOGICON.ARPA
Subject: Seminar - Fuzzy Logic and Common Sense Knowledge (SD Sigart)
The San Diego SIGART presents
FUZZY LOGIC AND COMMON SENSE KNOWLEDGE
Featured Speaker:
Dr. Lotfi A. Zadeh
Thursday, Feb 20, 1986
6:30-8:30pm at UCSD
Humanities Library Rm. 1438
Dr. Zadeh will introduce the concept of a disposition and the principle
that common sense knowledge is of a dispositional nature, i.e. we can
infer dispositional rules which are true in most cases.
The concept of dispositionality leads to the concept of usuality or the
usual value of variables. We need to develop a system for computing
with and inferring from dispositional knowledge. Dr. Zadeh will show
how to use fuzzy logic to deal with the concepts of dispositionality
and usuality in a way which cannot be done with classical logic. Fuzzy
logic will therefore be shown to provide a framework for commonsense
reasoning.
------------------------------
Date: Thu, 13 Feb 86 09:46:27 pst
From: Tom Dietterich <tgd%oregon-state.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Knowledge Engineering, Ontology (Oregon State)
KNOWLEDGE ENGINEERING AS THE INVESTIGATION
OF ONTOLOGICAL STRUCTURE
Michael J. Freiling
Tektronix Laboratories
Beaverton, Oregon
Wednesday, February 12, l986
Cordley Hall, Room 1109
Oregon State University
Corvallis, Oregon
Experience has shown that much of the difficulty of learning to build
knowledge-based systems lies in learning to design representation structures
that adequately capture the necessary forms of knowledge. Ontological
analysis is a method we have found quite useful at Tektronix for analyzing
and designing knowledge-based systems. The basic approach of ontological
analysis is a step-by-step construction of knowledge structures beginning
with basic objects and relationships in the task domain, and continuing
through representations of state, state transformations, and heuristics for
selecting transformations. Formal tools that can be usefully employed in
ontological analysis include domain equations, semantic grammars, and
full-scale specification languages. The principles and tools of ontological
analysis are illustrated with actual examples from knowledge-based systems
we have built or analyzed with this method.
------------------------------
Date: Fri, 14 Feb 86 15:20 EST
From: Brian C. Williams <WILLIAMS@OZ.AI.MIT.EDU>
Subject: Seminar - Explanation-Based Learning (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
Thursday , February 20 4:00pm Room: NE43- 8th floor Playroom
The Artificial Intelligence Lab
Revolving Seminar Series
Explanation-Based Learning
Tom Mitchell
Rutgers University, New Brunswick, NJ
The problem of formulating general concepts from specific training
examples has long been a major focus of machine learning research.
While most previous research has focused on empirical methods for
generalizing from a large number of training examples using no
domain-specific knowledge, in the past few years new methods have been
developed for applying domain-specific knowledge to formulate valid
generalizations from single training examples. The characteristic
common to these methods is that their ability to generalize from a
single example follows from their ability to explain why the training
example is a member of the concept being learned. This talk proposes a
general, domain-independent mechanism, called EBG, that unifies previous
approaches to explanation-based generalization. The EBG method is
illustrated in the context of several example problems, and used to
contrast several existing systems for explanation-based generalization.
The perspective on explanation-based generalization afforded by this
general method is also used to identify open research problems in this
area.
------------------------------
Date: Fri 14 Feb 86 18:27:38-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Reactive Systems (SRI)
AN ARCHITECTURE FOR INTELLIGENT REACTIVE SYSTEMS
OR
HOW NOT TO BE EATEN BY A TIGER
Leslie Kaelbling
SRI International AI Center and Stanford University
11:00 AM, WEDNESDAY, February 19
SRI International, Building E, Room EJ228 (new conference room)
In this talk I will present an architecture for intelligent reactive
systems. The ideas are fairly general, but are intended for use in
programming Flakey to carry out complex tasks in a dynamic environment.
Many previous robots simply 'closed their eyes' while a time-consuming
system, such as a planner or vision system, was invoked, allowing
perceptual inputs either to be lost or saved for later processing. In a
truly dynamic world, things might change to such an extent that the
results of the long calculation would no longer be useful. Worse yet,
the robot might run into a wall or be eaten by a tiger. This
architecture will allow the robot to remain aware during long
computations, and to behave plausibly in novel situations.
This talk represents work in progress, so much of the seminar will
be devoted to general discussion.
------------------------------
Date: 14 February 1986 1045-EST
From: Cathy Hill@A.CS.CMU.EDU
Subject: Seminar - Temporal Logic for Concurrent Programs (CMU)
Speaker: Aravinda Prasad Sistla <aps0%gte-labs.csnet@CSNET-RELAY.ARPA>
Date: February 19, 1986
Time: 1:30 - 3:00 pm
Place: WeH 4623
Title: ON EXPRESSING SAFETY AND LIVENESS PROPERTIES IN TEMPORAL
LOGIC.
Correctness properties of concurrent programs are usually classified as
either safety properties or liveness properties. In general, proving a
program correct involves establishing that the program satisfies certain
safety properties and certain liveness properties, and usually
different techniques are applied in proving them. In this talk we
consider many different definitions of these properties (e.g. safety,
strong safety, liveness, absolute liveness) and investigate what
classes of these properties are expressible in temporal logic. We
present a syntactic characterization of formulae that express these
properties. Finally, we give algorithms to recognize whether a temporal
specification is a safety property or a liveness property.
------------------------------
Date: Wed 12 Feb 86 15:47:58-PST
From: Ted Shortliffe <Shortliffe@SUMEX-AIM.ARPA>
Subject: Course - Spring Quarter Seminar on Rule-Based Systems (SU)
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
SEMINAR ON RULE-BASED EXPERT SYSTEMS
Professors Buchanan and Shortliffe
Comp. Sci. 524 Med. Inf. Sci. 229
Spring Quarter 1986 - 2 units
Tuesday, 3:30-5:00PM
TC-135 Conference Room, Medical Center
[Class size limited to 16]
This course is a graduate seminar for students wishing to gain a technical
understanding of, as well as a historical perspective on, rule-based expert
systems. The emphasis of the course will be on an analysis of the research
lessons of MYCIN and related projects in the Knowledge Systems Laboratory,
the strengths and limitations of the rule-based approach to knowledge
representation, and the way in which AI research evolves as new ideas and
concepts are discovered.
The course will meet weekly for 90 minutes and will require substantial
reading assignments for each session. The required text for the seminar is
"Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic
Programming Project"; additional related papers will also be assigned.
Working in pairs, all students will be responsible for leading the discussion
once during the quarter. There will be a final exam.
Prerequisites: at least one course in artificial intelligence and
familiarity with LISP.
Enrollment: limited to 16; signup in TC-135 or by contacting Ms.
Alison Grant (GRANT@SUMEX or 7-6979). If the course is
oversubscribed, preference will be given as follows:
MS/AI and MIS grad students, other CSD grad students,
non-CSD graduate students and medical students, CSD
research staff, undergraduates, auditors.
2 units, Tu 3:30-5:00, Room TC-135 (Medical Center), Professors
Buchanan and Shortliffe. The course will not be
offered again until 1987-88.
April 1: INTRODUCTION
Readings: None
April 8: KNOWLEDGE ENGINEERING
Readings: Chapters 1,7,35,8,9 [Chapter 4 suggested before 7 for
those unfamiliar with MYCIN]
April 15: USING RULES
Readings: Chapters 2,3,5,6
April 22: REASONING UNDER UNCERTAINTY
Readings: Chapters 10,11,12,13 [updated version of Chapter 13 will
be provided]
April 29: GENERALIZED FRAMEWORKS
Readings: Chapters 14,15,16,33
May 6: OTHER REPRESENTATIONS OF KNOWLEDGE
Readings: Chapters 21,22,23,24
May 13: EXPLANATIONS/TUTORING
Readings: Chapters 17,18,20,25,26
May 20: META-LEVEL KNOWLEDGE
Readings: Chapters 27,28,29
May 22 (Thursday class, 3:30-5pm): EVALUATING PERFORMANCE
Readings: Chapters 30,31
May 27: no class
Readings: Chapters 32,34,36
June 3: SUMMARY AND CONCLUSIONS
Readings: None
------------------------------
End of AIList Digest
********************
∂17-Feb-86 0055 LAWS@SRI-AI.ARPA AIList Digest V4 #29
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Feb 86 00:52:25 PST
Date: Sun 16 Feb 1986 21:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #29
To: AIList@SRI-AI
AIList Digest Monday, 17 Feb 1986 Volume 4 : Issue 29
Today's Topics:
Queries - Expert Systems Information & Rule Master Reviews &
Games, Evolution and Learning Conference & Chess & Micro Prolog,
Bindings - Prisoner's Dilemma Mailing List,
Machine Learning - Hopfield Networks,
Software Review - Personal Computer Scheme
----------------------------------------------------------------------
Date: Sun 16 Feb 86 22:36:39-EST
From: "Randall Davis" <DAVIS%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Reply-to: davis@mit-mc
Subject: access to info
I'd like to assemble a list of resources of information about expert systems,
organized along the lines indicated below. If you can think of a
journal,
trade magazine,
newsletter, or
regularly scheduled conference
not listed below, and can supply the relevant details, please send them to me
(not to the whole list, and please only respond if you have the details
available and accurate). I'll filter the responses to eliminate duplicates
and re-post it to the list for general consumption.
++++++++++++++++++++++++++++++++++++++++
FORMAT FOR PUBLICATIONS
NAME
PUBLISHER
EDITOR(S)
SUBSCRIPTION INFORMATION (frequency of publication, price)
SUBSCRIPTION ADDRESS (where to write)
CATEGORY: one of RESEARCH JOURNAL (eg, Artificial Intelligence)
RESEARCH NEWSLETTER (eg, AAAI Magazine, SIGART)
COMMERCIAL NEWSLETTER (eg, Expert Systems Strategies)
FOCUS: eg, all areas of AI, expert systems technical issues, management issues,
etc.
FORMAT FOR CONFERENCES
NAME
SPONSORING ORGANIZATION
FREQUENCY OF OCCURRENCE
ADDRESS FOR INFORMATION
I have details for
Journals
AI Journal
Journal of Automated Reasoning
Newsletters
AAAI Magazine
Expert Systems Strategies
Conferences
IJCAI, AAAI
and would welcome all other info, especially non-US listings.
------------------------------
Date: Fri 14 Feb 86 12:56:42-PST
From: Bill Park <PARK@SRI-AI.ARPA>
Subject: Rule Master Reviews?
To whom it may concern:
If you have any experience with Rule Master, would you please tell me
what you think of it? We are considering using it in a project
related to NASA's space station.
Thanks,
Bill Park (Park@SRI-AI)
(415)859-2233
SRI International
Menlo Park, CA
------------------------------
Date: 14 Feb 86 02:43:55 GMT
From: nike!im4u!ut-sally!ut-ngp!gknight@ucbvax.berkeley.edu (gknight)
Subject: Games, Evolution & Learning Conference query.
Can anyone give me information on a conference entitled "Games,
Evolution & Learning," held in New Mexico in 1984 or 1985?
The organizer (or any other contact person)? Proceedings, if
available? A list of speakers and paper titles? Etc., etc.
Please send by mail directly to me and I'll post a summary of
info received for the information of others on the nets.
Many thanks,
Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480).
Biopsychology Program, Univ. of Texas at Austin. "There is nothing better
in life than to have a goal and be working toward it." -- Goethe.
------------------------------
Date: 14 Feb 86 09:42 EST
From: Vu.wbst@Xerox.COM
Subject: Chess game information needed.
I'm reading about expert systems, and would like to try to build an
expert system. I would appreciate any helpful hints and pointers to any
existing chess expert systems; one in Interlisp-D would be a plus. I
would like to thank you in advance for any help.
Dinh
Regular mail:
Dinh Vu
Xerox Corporation
800 Philips Rd, Bld 129-38B
Webster, Ny 14580.
------------------------------
Date: Fri 14 Feb 86 16:56:18-EST
From: FWHITE@G.BBN.COM
Subject: Prolog on VMS and/or MAC
Does anybody know of a public domain version of Prolog for
VAX/VMS or the Macintosh? Or how about a commercial version?
Jeff Berliner (BERLINER@G.BBN.COM)
------------------------------
Date: 15 Feb 86 18:43:00 PST
From: MEGIDDO@IBM-SJ.ARPA
Subject: Prisoner's Dilemma
Prisoner's dilemma tournament mailing list;
Please send back a note if you wish to receive future announcements.
------------------------------
Date: 7 Feb 86 20:13:13 GMT
From: decwrl!pyramid!ut-sally!mordor!ehj@ucbvax.berkeley.edu (Eric H Jensen)
Subject: Re: Hopfield Networks?
In article <1960@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
>In a recent issue (Issue 367) of EE Times, there is an article titled
>"Neural Research Yields Computer that can Learn". This describes a
>simulation of a machine that uses a "Hopfield Network"; from the ...
I got the impression that this work is just perceptrons revisited.
All this business about threshold logic with weighting functions on
the inputs adjusted by feedback (i.e. the child reading) ...
Anybody in the know have a comment?
eric h. jensen (S1 Project @ Lawrence Livermore National Laboratory)
Phone: (415) 423-0229 USMail: LLNL, P.O. Box 5503, L-276, Livermore, Ca., 94550
ARPA: ehj@angband UUCP: ...!decvax!decwrl!mordor!angband!ehj
[What is new is that there are now training algorithms for multilayer
networks -- something that Minsky and Papert declared unlikely in their
famous book on perceptrons. Also new is the development of special
hardware, both chips and full [Boltzmann] processors for the implementation
of such networks. Hopfield networks require symmetric connections and
a form of "relaxation" processing or simulated annealing; Hopfield
characterizes this as constraint satisfaction or discrete optimization
by moving through the center of a data space (in somewhat the same manner
as the Karmarkar algorithm) instead of touring the vertices in the manner
of the simplex algorithm. Other multilayer connectionist networks have
recently been developed that do not require symmetric or even feedback
connections, except for training feedback. The breakthrough in these
latter networks seems to be the notion of adjusting each coefficient
in proportion to its "responsibility" in making a good or bad decision.
Determination of proportionate responsibility can be made using partial
derivatives. Another possibility that I find intriguing is the use
of a domain-knowledgeable expert system for identifying "guilty"
coefficients, as in the system for predicting horse races reported in
Heuristics for Inductive Learning by Steven Salzberg of Applied Expert
Systems, IJCAI 85, pp. 603-609. -- KIL]
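[The "responsibility" idea in the note above is credit assignment by
partial derivatives, i.e. gradient descent. A minimal single-unit sketch
(Python, illustrative only, not from the article; the multilayer case
applies the same chain rule through additional layers):

```python
import math

def predict(w, x):
    """A single sigmoid threshold unit with weighted inputs."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def sq_error(w, examples):
    return sum((predict(w, x) - t) ** 2 for x, t in examples)

def gradient_step(w, examples, rate=0.5):
    """Adjust each weight in proportion to its 'responsibility' for the
    error, i.e. the partial derivative of the squared error w.r.t. it."""
    grad = [0.0] * len(w)
    for x, t in examples:
        y = predict(w, x)
        d = 2.0 * (y - t) * y * (1.0 - y)   # chain rule through the sigmoid
        for i, xi in enumerate(x):
            grad[i] += d * xi
    return [wi - rate * g for wi, g in zip(w, grad)]

# Learn the OR function; the constant third input acts as a bias weight.
examples = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    w = gradient_step(w, examples)
print([round(predict(w, x)) for x, _ in examples])   # [0, 1, 1, 1]
```
]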
------------------------------
Date: Tue 11 Feb 86 22:38:38-CST
From: Rob Pettengill <CAD.PETTENGILL@MCC.ARPA>
Subject: Personal Computer Scheme
I recently purchased an implementation of the Scheme dialect of
lisp for my PC. I am familiar with GC Lisp, IQ Lisp, and Mu Lisp
for the PC. I use Lambdas and 3600s with ZetaLisp at work.
TI PC Scheme is a very complete implementation of scheme for the
IBM and TI personal computers and compatibles. It combines high
speed code execution, a good debugging and editing environment,
and very low cost.
The Language:
* Adheres faithfully to the Scheme standard.
* Has true lexical scoping.
* Procedures and environments are first class data objects.
* Is properly tail recursive - there is no penalty compared
to iteration.
* Includes window and graphics extensions.
The Environment:
* An incremental optimizing compiler (not native 8086 code)
* Top level read-compile-print loop.
* Interactive debugger allows run time error recovery.
* A minimal Emacs-like full screen editor with a scheme mode
featuring parenthesis matching and auto-indenting of lisp code.
* An execute DOS command or "push" to DOS capability - this is
only practical with a hard disk because of the swap file PCS writes.
* A DOS based Fast Load file format object file conversion utility.
* A fast 2 stage garbage collector.
First Impressions:
Scheme seems to be much better sized to a PC class machine than
the other standard dialects of lisp because of its simplicity. The
TI implementation appears to be very solid and complete. The compiled
code that it produces (with debugging switches off) is 2 to 5 times
faster than the other PC lisps that I have used. With the full screen
editor loaded (there is also a structure editor) there seems to be
plenty of room for my code in a 640k PC. TI recommends 320k or 512k
with the editor loaded. The documentation is of professional quality
(about 390 pages), but not tutorial. Abelson and Sussman↑2's "Structure
and Interpretation of Computer Programs" is a very good companion for
learning scheme as well as the art and science of programming in general.
My favorite quick benchmark -
(define (test n)
(do
((i 0 (1+ i))
(r () (cons i r)))
((>= i n) r)))
runs (test 10000) in less than 10 seconds with the editor loaded - of course
it takes a couple of minutes to print out the ten thousand element list
that results.
The main lack I find is that the source code for the system is not
included - one gets used to having it in good lisp environments. I have
hit only a couple of minor glitches so far, and those are probably
pilot error. Since the system is compiled with debugging switches off,
it is hard to get much useful information about the system from the
debugger.
Based on my brief, but very positive experience with TI PC scheme and
its very low price of $95 - I recommend it to anyone interested in a
PC based lisp. You can order from Texas Instruments at 1-800-TI-PARTS.
(Standard disclaimers about personal opinions and having no commercial
interest in the product ...)
Rob Pettengill
------------------------------
End of AIList Digest
********************
∂17-Feb-86 0234 LAWS@SRI-AI.ARPA AIList Digest V4 #30
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Feb 86 02:34:18 PST
Date: Sun 16 Feb 1986 22:35-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #30
To: AIList@SRI-AI
AIList Digest Monday, 17 Feb 1986 Volume 4 : Issue 30
Today's Topics:
Query - Ambiguous Sentences,
Cognitive Psychology - Definition & Novice-Expert Differences,
Theory - Dreyfus' Technology Review Article
----------------------------------------------------------------------
Date: Fri 14 Feb 86 14:29:21-PST
From: FIRSCHEIN@SRI-AI.ARPA
Subject: Ambiguous sentences.
I wonder whether AILIST readers have a favorite short sentence for
illustrating multiple ambiguity, say one with more than five meanings?
------------------------------
Date: Sat, 15 Feb 86 21:38:39 EST
From: bzs%bostonu.csnet@CSNET-RELAY.ARPA
Subject: Re: Sparklers from the Tech Review
>From: larry@Jpl-VLSI.ARPA
>COGNITIVE PSYCHOLOGY (a more restricted area than Cognitive Science) attempts
>to understand biologically based thinking using behavioral and psychiatric
>concepts and methods. This includes the effects emotional and social forces
>exert on cognition. This group is increasingly borrowing from the following
>groups.
Just curious, but as an undergraduate studying such things at Cornell
in the early 70's I remember being lectured over and over again about
'Cognitive Psychology' which at that point in time seemed to be a
school derived largely from Festinger's works in Cognitive Dissonance
et al (and Brehm and others.) It was generally posed as being
orthogonal to behaviorism (Skinnerianism.) Is this the same 'cognitive
psychology' I suffered through? Or has the term changed? What do they
call the old stuff, or are we allowed to speak of it anymore (oops)? I
suppose this definition *might* be referring to the same thing, but I
don't see how.
-Barry Shein, Boston University
------------------------------
Date: 14 Feb 86 14:38:36 EST (Fri)
From: Robert Rist <rist@YALE.ARPA>
Subject: novice-expert differences
You can trace back the articles you need if you look at
Snow, R. E., Federico, P. & Montague, W. E. (Eds.). (1980) Aptitude,
learning and instruction, Volume 2. This has articles by VanLehn and
Brown, Stevens and Collins, Anderson and Norman.
Lesgold, A. M. (1984). Acquiring expertise. In Anderson and Kosslyn
(Eds.), Tutorials in learning and memory. Pointers to lots of
different research domains.
Chi, M. T. H., Glaser, R. & Rees, E. (1982). Expertise in problem
solving. In Sternberg, R. J. (Ed.), Advances in the psychology of
human intelligence. This is one of the best summary articles I have
seen.
Anderson, J. R. (Ed.) (1981). Cognitive skills and their acquisition.
A mixed bag, but it contains some real classics.
Gentner, D. & Stevens, A. L. (1983). Mental models. The stuff on
multiple models and debugging is very interesting.
If you're interested in learning, you could also look at
Anzai, Y. (1984). Cognitive control of real-time event-driven systems.
Cognitive Science, 8, 221-254.
Anzai, Y. & Simon, H. A. (1979). The theory of learning by doing.
Psych. Review, Vol 86, 124-140.
Anderson, R. J. (1985). Cognitive psychology and its implications.
This has a chapter on expertise development that gives an overview
plus list of references.
Have fun, Rob Rist
------------------------------
Date: Thu, 13 Feb 86 09:44:18 est
From: rjk@mitre-bedford.ARPA (Ruben)
Subject: Response to "Thompson@umass-cs.csnet" re: "Expertize"
In lieu of replying to the apparently incorrect address
"Thompson@umass-cs.csnet", I send my tidbit to the AILIST.
From an abstract but empirically motivated view of the relationship
between expertise and category formation (a criterion useful for
discriminating relatively novice from expert behavior), I suggest
Eleanor Rosch's (U. of C. at Berkeley) work on prototypes.
A particularly good SUMMA is her article "Human Categorization,"
of which I read in draft form but which SHOULD (?) have been
published in ADVANCES IN CROSS-CULTURAL PSYCHOLOGY (Vol. 1),
M. Warren (ed.), Academy Press, London, circa 1976. I think
that her approach to categorization raises some intelligent and
pursuable questions about the role of expertise in categorization:
this article is worth reading, even if it only touches on this question.
Rosch planned to do further research to follow up her questions on
expertise, but I have not yet seen it. (Let me know if you follow
this up.)
Ruben J. Kleiman rjk@MITRE-BEDFORD
[The address Thompson%UMASS-CS.CSNet@CSNet-Relay should work
(regardless of capitalization). The gateway requires that all
CSNet mail from the Arpanet be addressed to @CSNet-Relay, and that
all other @-signs be changed to %-signs. The .CSNet prior to the
@CSNet-Relay is sometimes optional. -- KIL]
------------------------------
Date: 8 Feb 86 00:35:57 GMT
From: decwrl!glacier!kestrel!ladkin@ucbvax.berkeley.edu
Subject: Re: Technology Review article
In article <15030@rochester.UUCP>, lab@rochester.UUCP (Lab Manager) writes:
> "In 3000 years, Philosophy has still not lived up to its promises and
> there's no reason to think it ever will."
An interesting comment. Whenever a problem is solved in Philosophy,
it spawns a whole new field of specialists, and is no longer called
Philosophy. Witness Physics, which used to be called Natural
Philosophy. When Newton took over, it gradually became a new
subject. Witness our own subject, which arose out of the
attempts of Frege to provide a formal foundation for mathematical
reasoning, via Russell, Church, Curry, Kleene, Turing and
von Neumann. Much work in natural language understanding arises
from the work of Montague, and more recently speech act theory
is being used, from Grice, Searle and Vanderveken.
The list goes on, and so do I. Would that AI bear such glorious
fruit. I think it might.
Peter Ladkin
------------------------------
Date: 9 Feb 86 16:05:00 GMT
From: pur-ee!uiucdcs!uiucuxc!bantz@ucbvax.berkeley.edu
Subject: Re: Technology Review article
Dreyfus's book "What Computers Can't Do" was a pretty sorry affair, insofar
as it purported to have a positive argument about intrinsic limits of
computers. However uncomfortable it makes the AI community feel, though,
the journalistic baiting with extensive quotations from the AI community
itself, ought to have demonstrated the virtues of a bit more humility than
is often shown. [I'm referring to his gleeful quotation of predictions that,
by 1970 or so, a computer would be world chess champion and that fully
literate translations of natural languages would be routine...]
The responses here, so far, seem to be guilty of what Dreyfus is accused of:
failing to engage the opponent seriously, and relying on personal expressions
of distaste or ridicule. Specifically, Dreyfus does reject the typology of
learning in AI, on the not implausible grounds that it is self-serving, and
not obviously correct (or uniquely correct).
[Please! I am *not* a fan of Dreyfus, and do not endorse most of his claims.]
------------------------------
Date: Sun 16 Feb 86 22:33:41-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: In Support of the Dreyfi
I have now had a chance to read the Technology Review article (thanks to
a copy from Oscar Firschein). If it is a fair sample of Hubert and
Stuart Dreyfus' forthcoming book, Mind Over Machine, the book should
be required reading. Not that I necessarily agree with their
positions -- I see their points as problems to be solved rather than
proofs of futility -- but they have now solidified their stronger
arguments and (I presume) shed many of their weaker ones. I recently
read the introduction to the second edition of Hubert's What Computers
Can't Do and found myself disagreeing with about one item per page.
(To be fair, they or anyone else would find similar disagreement with
my [fuzzy] ideas if I had the ability and temerity to expose them in
writing.) I did not experience anywhere near the same density of
objections to this new article, Why Computers May Never Think Like People.
I am optimistic that we will be able to build "digital" intelligences
(with perhaps a few analog circuits thrown in as necessary), but I
cannot support my optimism as well as they support their pessimism.
They are right that the AI "paradigms" of the past have proven weak
and inextensible, and that those of the present are also likely to
fail. (Five years hence, will not each researcher's proposals start
with "Previous work in this field has had limited success due to ...,
but our new approach will ..."?) They are wrong to assume that the
logic-based symbol-processing paradigm is the only card AI holds.
(Sorry, guys, but I'm not a logic lover. Explicit definitions and
rules for commonsense reasoning are a useful exercise, but flexible --
and sometimes errorful -- intelligence will ultimately depend on a
patchwork of heuristics and analogies.) Many of the "feature vs
aspect" problems raised by the brothers are being faced by those of
us researching perception. Our results are sparse to date, but that
is no proof that pattern recognition and concept formation are
inherently human capabilities. Hubert and Stuart, as the loyal
opposition to past naivete, may help us to face and overcome the
true difficulties in real-world intelligence -- if they don't get
our funding killed first.
-- Ken Laws
------------------------------
End of AIList Digest
********************
∂20-Feb-86 1558 LAWS@SRI-AI.ARPA AIList Digest V4 #31
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Feb 86 15:54:12 PST
Date: Thu 20 Feb 1986 10:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #31
To: AIList@SRI-AI
AIList Digest Thursday, 20 Feb 1986 Volume 4 : Issue 31
Today's Topics:
Seminars - Classical Conditioning and Contingency (SU) &
Learnability and the Vapnik-Chervonenkis Dimension (IBM-SJ) &
Hierarchical Reasoning, Simulation (UPenn) &
The Architecture of a Rational Agent (Edinburgh) &
Planning for Robotic Assembly Lines (USC) &
Distributed Problem Solving (USC) &
Adaptive Planning (MIT) &
Deductive and Relational Knowledge Bases (CCA),
Conference - Symbolics National Users Group Meeting
----------------------------------------------------------------------
Date: Wed, 19 Feb 86 18:34:28 pst
From: gluck@SU-PSYCH (Mark Gluck)
Subject: Seminar - Classical Conditioning and Contingency (SU)
The topic of this week's learning seminar will be on associative learning
in animals. We will examine classical conditioning, one of the simplest
and best studied forms of induction. The readings are:
Rescorla & Wagner (1972): Reviews the animal learning data and proposes
a simple linear model of associative learning which predicts
that animals will induce relative contingencies between
stimuli. The algorithm is formally equivalent to the
Widrow-Hoff predictor in adaptive systems and is a
special case of the delta rule used by the Rumelhart et
al. back-propagation algorithm.
The other two papers are two "Cognitive Science" models for classical
conditioning. The first, presented in the Holland et al book, is
a rule-based production system model of classical conditioning. The
second, by Sutton and Barto, is a connectionist/network model for
classical conditioning.
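For readers who have not met the model, the error-correcting update at the
heart of Rescorla-Wagner is easy to state in a few lines. The sketch below
(in Python, purely our own illustration; the stimulus names, learning rates,
and trial counts are invented, not taken from any of the readings)
demonstrates the classic "blocking" contingency effect the model predicts:

```python
# Sketch of the Rescorla-Wagner (1972) learning rule.  On each trial,
# every stimulus that is present has its associative strength V nudged
# toward the outcome lambda by a fraction of the SHARED prediction
# error -- the same error-correcting step as the Widrow-Hoff rule.

def rescorla_wagner_trial(V, present, lam, alpha=0.1, beta=1.0):
    """One conditioning trial.  V maps stimulus -> strength, present
    is the set of stimuli on this trial, lam is the outcome
    (1.0 = US present, 0.0 = US absent)."""
    error = lam - sum(V[s] for s in present)   # shared prediction error
    for s in present:
        V[s] += alpha * beta * error
    return V

# Blocking demo: pretrain A alone, then train the compound A+B.
V = {"A": 0.0, "B": 0.0}
for _ in range(100):
    rescorla_wagner_trial(V, {"A"}, lam=1.0)       # A -> US
for _ in range(100):
    rescorla_wagner_trial(V, {"A", "B"}, lam=1.0)  # AB -> US
# A has already absorbed nearly all the available associative
# strength, so B learns almost nothing despite being paired with
# the US on every compound trial.
```

Because the error term is shared across all stimuli present, the model
captures relative contingency: stimulus B is redundant given A, so it
acquires essentially no strength.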
The seminar is in Building 360, Room 364 (near the geology corner),
on Thursday from 1:15 to 3 pm.
------------------------------
Date: 19 Feb 86 14:53:44 PST
From: CALENDAR@IBM-SJ.ARPA
Subject: Seminar - Learnability and the Vapnik-Chervonenkis Dimension (IBM-SJ)
[Excerpted from the IBM Calendar by Laws@SRI-AI.]
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099
Computer Science Seminar
LEARNABILITY AND THE VAPNIK-CHERVONENKIS DIMENSION
D. Haussler, Department of Mathematics and Computer Science,
University of Denver
Fri., Feb. 28, 10:30 A.M., B1-413

The current emphasis on knowledge-based software has
created a broader interest in algorithms that learn
knowledge structures or concepts from positive and
negative examples. Using the learning model recently
proposed by Valiant, we have attempted to determine
which classes of concepts have efficient (i.e.,
polynomial time) learning algorithms. As noticed
earlier by Pearl and by Devroye and Wagner, a simple
combinatorial property of concept classes, the
Vapnik-Chervonenkis dimension, plays an important
role in learning and pattern recognition. We clarify
the relationship between this property and Valiant's
theory of learnability. Our results lead to the design
of efficient learning algorithms that employ a
variant of Occam's Razor. Illustrations are given
for certain classes of conjunctive concepts and for
concepts that are defined by various types of regions
in feature space. The work reported was done jointly
with Anselm Blumer, Andrzej Ehrenfeucht and
Manfred Warmuth of the Universities of Denver,
Colorado and California at Santa Cruz, respectively.
Host: B. Simons
[A BATS announcement said that the seminar would be at 11:00. - KIL]
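As a toy illustration of the combinatorial property the talk centers on
(not of the talk's algorithms), the VC dimension of a small concept class
can be computed by brute force. The sketch below, in Python and entirely
our own, does so for closed intervals on the real line:

```python
from itertools import combinations, product

# A set of points is "shattered" by a concept class if every labeling
# of the points is realized by some concept in the class.  The VC
# dimension is the size of the largest shattered set.  The class here
# is closed intervals [a, b]; pairs with a > b denote the empty set.

def interval_labels(points, a, b):
    return tuple(a <= x <= b for x in points)

def shattered(points, candidates):
    realized = {interval_labels(points, a, b) for a, b in candidates}
    return len(realized) == 2 ** len(points)

def vc_dim_intervals(domain):
    # Only the relative positions of endpoints matter, so candidate
    # endpoints drawn from the domain points themselves suffice.
    endpoints = list(product(domain, repeat=2))
    best = 0
    for k in range(1, len(domain) + 1):
        if any(shattered(list(s), endpoints)
               for s in combinations(domain, k)):
            best = k
    return best
```

Intervals have VC dimension 2: any two points can be labeled in all four
ways, but no interval contains the outer two of three points while
excluding the middle one. The talk's result is that a finite dimension of
this kind, combined with an Occam-style preference for small hypotheses,
yields polynomial-sample learnability in Valiant's model.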
------------------------------
Date: Mon, 17 Feb 86 00:56 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Hierarchical Reasoning, Simulation (UPenn)
Forwarded From: Paul Fishwick <Fishwick@UPenn> on Sun 16 Feb 1986 at 12:54
HIERARCHICAL REASONING:
SIMULATING COMPLEX PROCESSES
OVER MULTIPLE LEVELS OF ABSTRACTION
Paul A. Fishwick
University of Pennsylvania
Ph.D. Defense
This talk describes a method for simulating processes over multiple levels of
abstraction. There has been recent work with respect to data, object, and
problem-solving abstraction; however, abstraction in simulation has not been
adequately explored. We define a process as a hierarchy of distinct production
rule sets that interface to each other so that abstraction levels may be
bridged where desired. In this way, the process may be studied at abstraction
levels that are appropriate for the specific task: notions of qualitative and
quantitative reasoning are integrated to form a complete process description.
The advantages to such a description are increased control, computational
efficiency and selective reporting of simulation results. Within the framework
of hierarchical reasoning, we will concentrate on presenting the primary
concept of process abstraction.
A Common Lisp implementation of the hierarchical reasoning theory called HIRES
is presented. HIRES allows the user to reason in a hierarchical fashion by
relating certain facets of the simulation to levels of abstraction specified in
terms of actions, objects, reports, and time. The user is free to reason about
a process over multiple levels by weaving through the levels either manually or
via automatically controlled specifications. Capabilities exist in HIRES to
facilitate the creation of graph-based abstraction levels. For instance, the
analyst can create continuous system models (CSMP), Petri net models, scripts,
or generic graph models that define the process model at a given level. We
present a four-level elevator system and a two-level "dining philosophers"
simulation. The dining philosophers simulation includes a 3-D animation of
human body models.
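The central idea, distinct production-rule sets per abstraction level with
bridges between them, can be caricatured in a few lines. The Python sketch
below is our own invention (HIRES itself is not shown here; the elevator
events and rule names are made-up stand-ins), intended only to show how
selectively descending a level changes what the simulation reports:

```python
# One rule set per abstraction level, plus a bridge that expands an
# abstract event into detailed-level events on demand (all names here
# are hypothetical, not from HIRES).

ABSTRACT_RULES = {
    # abstract event -> next abstract event
    "call-elevator": "ride-elevator",
    "ride-elevator": "arrive",
}

DETAILED_RULES = {
    # abstract event -> the detailed steps that realize it
    "call-elevator": ["press-button", "wait", "doors-open", "enter"],
    "ride-elevator": ["doors-close", "move", "doors-open", "exit"],
}

def simulate(start, detail_for=()):
    """Run the abstract simulation, descending one level only for
    the events named in detail_for (selective reporting)."""
    trace, event = [], start
    while event is not None:
        if event in detail_for:
            trace.extend(DETAILED_RULES.get(event, [event]))
        else:
            trace.append(event)
        event = ABSTRACT_RULES.get(event)
    return trace

print(simulate("call-elevator"))                              # coarse run
print(simulate("call-elevator", detail_for={"ride-elevator"}))  # one level down
```

The payoff claimed in the abstract follows directly: levels that are not
of interest run (and report) coarsely, buying control and efficiency.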
Time: Wednesday, February 26, 3pm
Place: Moore School, Room 554
Committee:
Dr. Norman Badler (Adviser)
Dr. Timothy Finin (Chairman)
Dr. Insup Lee
Dr. Richard Paul
------------------------------
Date: Tue, 18 Feb 86 17:55:35 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - The Architecture of a Rational Agent (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday, 19th February 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room
Forrest Hill
EDINBURGH.
Dr. Robert C. Moore, Computer Laboratory, University of Cambridge
(visiting from SRI International) will give a seminar entitled -
"The Architecture of a Rational Agent".
The ultimate goal of artificial intelligence is to build complete,
autonomous, artificial rational agents. Most research in AI focuses
on one or another component of such an agent: the vision subsystem,
reasoning subsystem, language subsystem, etc. Recently, however, some
attention has begun to be paid to the over-all architecture in which
these subsystems are combined. The first half of this talk will
discuss how concern for the architecture of rational agents is motivated
by the need to treat language as a form of rational action, and how
this view of language provides a formal framework for treating phenomena
that have been argued to be beyond the scope of formal analysis. In
the second half of the talk, we will compare the three-component
belief/desire/intention model of rational agency typically used in AI
with the two-component model of decision theory, and argue that the
two-component model cannot satisfactorily account for
cooperation among rational agents, proving a theorem to the effect that
there are situations in which there is no strategy for a group of
two-component agents that is rational by the normal standards of
decision theory.
------------------------------
Date: 18 Feb 1986 13:50-EST
From: gasser@usc-cse.arpa
Subject: Seminar - Planning for Robotic Assembly Lines (USC)
USC DPS GROUP MEETING
Wednesday, 2/26/86
3:00 - 5:00 PM
Seaver Science Bldg. 319
Dong Xia (Ph.D. Student, USC) will speak on "An Approach To Planning and
Scheduling for Robotic Assembly Lines"
While extensive studies have been devoted to general robot problem solving
and planning techniques in artificial worlds in recent years, progress
towards their practical application on the robotic manufacturing floor has
been severely inhibited by the lack of a sound understanding of the assembly
process and of an adequate method to deal with real-time uncertainties. In
this talk, we are going to address two of the most fundamental and
interrelated problems, namely task planning and temporal action scheduling.
We study these problems in the context of multiple cooperative robots with
assumed perceptual capabilities which work in a highly shared and dynamic
mechanical environment in a coordinated fashion for a common or different
goal(s). In this presentation, a general system architecture and a hybrid
knowledge representation scheme for a class of assembly lines are proposed,
and some important design concepts and problems of robot-based intelligent
assembly lines are identified and discussed. Finally, a particular prototype
system, called Miniassembler, is described, which exhibits our concepts and
methods for coping with temporal uncertainty.
Questions: Dr. Les Gasser, USC (213) 743-7794
or Dong Xia (XIA@USC-CSE.ARPA).
------------------------------
Date: 18 Feb 1986 15:34-PST
From: gasser@usc-cse.arpa
Subject: Seminar - Distributed Problem Solving (USC)
USC DPS GROUP MEETING
Wednesday, 2/19/86
3:00 - 4:00 PM
Seaver 319
Tom Hinke will speak on "Distributed Problem Solving and Architectural
Design".
The talk will cover some preliminary ideas about the application of
distributed problem solving techniques to the domain of computer aided
architectural design. The talk will include a brief overview of caad
work to date, a concept of how DPS could be applied to this area, and a
brief discussion of some of the anticipated problems in applying DPS to
design. The talk is based on very preliminary work in the area and
should be viewed as a forum to generate some initial comments and
direction for the bulk of the research which lies ahead.
Questions: Dr. Les Gasser, (213) 743-7794, or
Tom Hinke: HINKE@USC-CSE.ARPA
------------------------------
Date: Tue, 18 Feb 1986 17:08 EST
From: David Chapman <ZVONA%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Seminar - Adaptive Planning (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
Wednesday, February 26 3:00pm Room: NE43- 8th floor Playroom
The Artificial Intelligence Lab
Seminar
"Adaptive Planning"
Richard Alterman
UC Berkeley
Consider the case where a planner intends to transfer airplanes. A
common-sense approach to the problem of transferring airplanes would
be to try to re-use an old existing plan: exit first airplane via
arrival gate, determine departure gate, walk to the departure gate,
and board second airplane via departure gate. In a small airport this
would work just fine. But in a larger airport, say Kennedy Airport
where there is more than one terminal, if the arrival and departure
gates were in different terminals, the plan would have to be modified
(i.e. the planner would have to take a shuttle between terminals).
The problem of adaptive planning is to refit old plans to novel
circumstances. In the case of the example above, an adaptive planner
would refit the old plan for transferring airplanes to the novel
circumstances at the Kennedy Airport. The importance of adaptive
planning is that it adds a dimension of flexibility to the
common-sense planner.
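The airplane-transfer example above can be rendered as a toy program. The
Python sketch below is our own illustration, not Alterman's system: reuse
the old plan, detect a situation difference, and splice in a repair step
(the step names and the single "different-terminal" repair are invented):

```python
# Adaptive planning in miniature: refit an old plan to a novel
# situation instead of planning from scratch.

OLD_PLAN = ["exit-via-arrival-gate", "find-departure-gate",
            "walk-to-departure-gate", "board-via-departure-gate"]

def adapt(plan, situation):
    """If the departure gate is in another terminal, walking alone
    no longer bridges the gap, so splice in a shuttle ride just
    before the walk step."""
    new_plan = list(plan)
    if situation["arrival_terminal"] != situation["departure_terminal"]:
        i = new_plan.index("walk-to-departure-gate")
        new_plan[i:i] = ["take-shuttle-to-departure-terminal"]
    return new_plan

small_airport = {"arrival_terminal": 1, "departure_terminal": 1}
kennedy = {"arrival_terminal": 1, "departure_terminal": 3}

print(adapt(OLD_PLAN, small_airport))  # old plan works unchanged
print(adapt(OLD_PLAN, kennedy))        # shuttle step spliced in
```

The interesting work, of course, is what this toy hard-codes: recognizing
which situation differences matter and retrieving the right repair from
background knowledge, which is exactly what the talk addresses.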
Key elements in the theory of adaptive planning are its treatment of
background knowledge and the introduction of a notion of planning by
situation matching. The talk will motivate and discuss four kinds of
background knowledge. It will describe a number of kinds of situation
difference that can occur between an old plan and the new planning
situation. It will discuss situation matching techniques that are
based on the interaction of the planner's current circumstances and
its background knowledge. An important theme throughout this
discussion will be the control of access to knowledge.
------------------------------
Date: Tue 18 Feb 86 15:40:17-EST
From: Sunil Sarin <SKS@XX.LCS.MIT.EDU>
Subject: Seminars - Deductive and Relational Knowledge Bases (CCA)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
CCA Colloquium Series
DATE: February 20, 1986-- Thursday
TIME: 10:00-11:00 a.m.
PLACE: 4th floor large conference room, Four Cambridge Center
TITLE: Deductive Databases and a Relational Knowledge Base
A Survey of Work at ICOT, Japan
SPEAKERS: Haruo Yokota and Masaki Murakami (Institute for New
Generation Computer Technology (ICOT--Japan) )
CCA (Computer Corporation of America) is located at Four Cambridge
Center, which is on Broadway, behind Legal Seafood. Tell the
security desk you are visiting CCA and they will send you up to
CCA on the 5th floor. Tell CCA's receptionist to call Barbara
Wong who will show you where the seminar is. (If you can't
remember that, simply say you're here for the colloquium.)
Abstracts of works to be covered:
1. Deductive Database System based on Unit Resolution
by Haruo Yokota, Ko Sakai, Hidenori Itoh
This paper presents a methodology for constructing a deductive
database system consisting of an intensional processor and a
relational database management system. A setting evaluation
is introduced. The intensional processor derives a setting
from the intensional database and a given goal and sends the
setting and the relationship between setting elements to the
management system. The management system performs a unit
resolution with setting using relational operations for the
extensional databases. An extended least fixed point operation
is introduced to terminate all types of recursive queries.
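The least-fixed-point idea that terminates recursive queries can be shown
in miniature. The Python sketch below is the textbook naive evaluation of
a recursive ancestor query over a finite parent relation, our own
illustration rather than ICOT's setting-based algorithm:

```python
# Naive least-fixed-point evaluation: apply the recursive rule
# repeatedly until the derived relation stops growing.  On a finite
# extensional database this is guaranteed to terminate.

parent = {("ann", "bob"), ("bob", "cal"), ("cal", "dee")}

def ancestor_lfp(parent):
    ancestor = set(parent)                    # base rule: parent => ancestor
    while True:
        # recursive rule: ancestor(x,y) & parent(y,z) => ancestor(x,z)
        derived = {(x, z)
                   for (x, y) in ancestor
                   for (y2, z) in parent if y == y2}
        if derived <= ancestor:               # fixed point reached
            return ancestor
        ancestor |= derived

print(sorted(ancestor_lfp(parent)))
```

Each pass is a relational join followed by a union, so the whole loop can
be pushed down to a relational database management system, which is the
division of labor between intensional processor and management system
that the paper describes.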
2. A Model and an Architecture for a Relational Knowledge Base
by Haruo Yokota, Hidenori Itoh
A relational knowledge base model and an architecture which
manipulates the model are presented. An item stored in the
relational knowledge base is a term, and it is retrieved by
unification operation between the terms. The relational
knowledge base architecture we propose consists of a number
of unification engines, several disk systems, a control processor,
and a multiport page-memory. The system has a knowledge compiler
to support a variety of knowledge representations.
3. Formal Semantics of a Relational Knowledge Base
by Masaki Murakami, Haruo Yokota, Hidenori Itoh
A mathematical foundation for the formal semantics of term relations
[Yokota et al. 85] is presented. A term relation is a basic data
structure of a relational knowledge base. It is an enhanced version
of the relational model in database theory. It may include syntactically
complex structures such as terms or literals containing variables as
items of relations. The items are retrieved with operations called
retrieval-by-unification. We introduce as a semantic domain of
n-ary-term relations n_T_RELATIONS and define a partial order on them.
We characterize retrieval-by-unification as operations on n_T_RELATIONS
with monotone functions and greatest lower bounds.
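Retrieval-by-unification itself is easy to sketch: items in a term
relation are terms that may contain variables, and a query retrieves
every stored term it unifies with. The Python fragment below is a plain
textbook unifier of our own (terms as tuples, variables as strings
beginning with "?"), not ICOT's engine:

```python
# Minimal first-order unification (no occurs check, as in most
# Prolog implementations), used to retrieve terms from a relation.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a substitution unifying a and b, or None on failure."""
    s = dict(s) if s else {}
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

term_relation = [("likes", "kim", "?x"), ("likes", "?y", "lee")]
query = ("likes", "kim", "lee")
hits = [t for t in term_relation if unify(query, t) is not None]
```

The architecture paper's point is that this matching step, trivial here,
is the inner loop of the whole knowledge base, hence the proposal to run
many unification engines in parallel against a multiport page-memory.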
------------------------------
Date: Wed, 19 Feb 86 13:50:19 pst
From: grover@aids-unix (Mark Grover)
Subject: Conference - Symbolics National Users Group Meeting
[Submitted to AILIST because: 1) it is an initial announcement. Followup will
take place via the address provided. 2) Symbolics computers are a major tool
of AI researchers. 3) The majority of work on Symbolics computers is related
to AI. 4) Users are widespread: well over 100 sites and 1500 machines.]
Are you getting bored with TV:MENU-CHOOSE?
Do you know your FOSS from your CSE?
Are you ready for Release 7 and Common Lisp?
No matter what your answers, you are invited to the
Second Annual
SYMBOLICS NATIONAL USERS GROUP MEETING
(SNUG86)
Georgetown University Campus
Washington, DC
June 2-6, 1986
(organized by the Capital Area SLUG)
with...
Speakers Poster Sessions War Stories
Panels Discussions Tutorials
Debates Wizards BOFs
The SLUG National Board has approved plans from the Capital Area SLUG to hold
a five-day National Symposium (SNUG86). This year's Symposium will consist
of three days of meetings, preceded by two days of special Symbolics
Educational Services Tutorials at a small additional cost per session.
Planning is well underway to build on the experience of last year's National
SLUG Symposium in San Francisco. This year we hope for an even more exciting
gathering at the beautiful Georgetown University campus on the Potomac River
to discuss, debate and learn the best in Lisp Machine techniques.
This year's theme:
"Programming in Style on the Symbolics"
The goal of this year's Symposium is to make explicit the experience of
long-time users in terms of programming style. There are so many ways of
achieving a particular function, but which are the most efficient, elegant
and consistent? This Symposium is a means to share such important
information, where common needs and individual problems can be addressed.
Registration costs (separate from tutorials) will be considerably less than
comparable meetings. Inexpensive campus housing will be available. A
detailed announcement will be forthcoming.
RESPONSE DEADLINE IS MARCH 28, 1986
It is essential that the Symposium planning committee hear from you in order
to gauge interest. To receive future announcements, you must send a
response form BY MARCH 28 to the mailing address below.
We also invite program suggestions. Please address program-related
correspondence to ATTN: Programs, or via ARPAnet mail to the Program Chair,
Mark Grover (Advanced Decision Systems), at GROVER@AIDS-UNIX.ARPA (or
Grover@AIDS-DC.Dialnet.Symbolics.COM). This address is for technical program
session proposals only! Questions regarding registration, facilities and
exhibits should be directed to address and phone below.
Planned Activities
Monday and Tuesday: Tutorials taught by Symbolics personnel to include
Introduction to Lisp Machine Programming, Site Maintenance, Common Lisp and
advanced topics. Tuesday evening: Third Party Vendor Hospitality suites.
Wednesday: Keynote presentations, concentrating on Release 7 and SLUG
activities such as the national library.
Thursday and Friday: program sessions to include Windows and Processes;
Flavors; Of Mice and Menus; Large Scale Data Management; Networking; File
Storage for Lisp Objects; Group Programming Etiquette; Security; and
Personalizing Your Environment. Many other topics are under consideration.
Please make additional suggestions of session proposals on the form below.
Poster sessions will be held in parallel with the program sessions. A poster
session allows a user to display charts and code on a fixed display in shared
quarters while interested attendees are free to move about, listen to and
discuss these informal talks. Further, there will be free time for Birds of
a Feather (BOF) gatherings. We hope to provide some Lisp Machine time for
these sessions.
This is a USER-oriented meeting. The informal availability of Symbolics
"wizards" was a significant attraction of last year's Symposium which will be
repeated.
Conference Location
Located near the Potomac River and Rock Creek Park, the Georgetown area of
Washington DC is well-known for its many shops and restaurants. Georgetown
University provides excellent meeting facilities and inexpensive
accommodations. The many monuments and museums of Washington are within short
rides via bus or metro.
SNUG86 MAILING LIST
(Mail this form to the address below. No ARPA mail please).
Ms. Annmarie Pittman
SNUG86
655 15th Street NW #300
Washington, DC 20005
(202) 639-4228
First Name: Last Name:
Title:
Organization:
Address:
City: State: Zip Code:
Telephone:
_____ Please add me to the mailing list.
_____ I plan to attend SNUG86.
_____ I would be interested in attending Symbolics Education Services
one-day tutorials on the subject(s):
_____ I would like to propose sessions on the subject(s):
_____ I would be interested in giving a poster session on the topic:
_____ My company is interested in exhibiting at the conference. Please
send exhibit materials.
------------------------------
End of AIList Digest
********************
∂21-Feb-86 0101 LAWS@SRI-AI.ARPA AIList Digest V4 #32
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Feb 86 01:00:40 PST
Date: Thu 20 Feb 1986 22:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #32
To: AIList@SRI-AI
AIList Digest Friday, 21 Feb 1986 Volume 4 : Issue 32
Today's Topics:
Queries - AI Teaching/Tutoring Package &
Expert Systems and Software Engineering &
Micro-PROLOG & 68k Unix LISP & NL Dialogue,
Literature - LISP Texts & Logo & MRS,
Methodology - Taxonomizing
----------------------------------------------------------------------
Date: Wed, 12 Feb 86 15:37:28 EST
From: munnari!goanna.oz!ysy@seismo.CSS.GOV (yoke shim YAP)
Subject: AI Teaching/Tutoring Package
Recently, I heard that a teaching / tutoring package has been
written using Artificial Intelligence techniques. It seems that
this piece of information appeared in an article. Has anyone
read or heard anything about this article? I would like to
get hold of this article and if possible, contact the author
of this package.
Y. S. YAP
Dept. of Computing
Faculty of Applied Science
RMIT ...or munnari!goanna.oz!ysy@SEISMO.ARPA
GPO BOX 2476V
Melbourne VIC. 3001
AUSTRALIA
------------------------------
Date: Mon, 17 Feb 86 14:18:11 -0100
From: Jorun Eggen <j←eggen%vax.runit.unit.uninett@nta-vax.arpa>
Subject: Expert Systems and Software Engineering
Hello out there!
Can anyone give me references to work carried out in order to see what
theory, methodologies and tools from Software Engineering can do to assist
the process of building expert systems? Or to put it another way: Is Knowledge
Acquisition today at the same level as Software Engineering was 20 years ago?
If the answer is yes, what can we learn from Software Engineering to help us
avoid reinventing the wheel and instead concentrate on the new unsolved
problems?
References to articles, reports, people, books etc. are welcome.
Thanks a lot, and be aware that my net-address "uninett" is spelled with
double t.
Jorun Eggen
RUNIT/SINTEF
N-7034 Trondheim-NTH
NORWAY
------------------------------
Date: 13 Feb 86 16:50:56 GMT
From: amdcad!lll-crg!gymble!umcp-cs!deba@ucbvax.berkeley.edu (Deba Patnaik)
Subject: micro-PROLOG info wanted
I am thinking of purchasing "micro-PROLOG". I would like to know
Price, comments and who distributes it.
Are there any other PROLOGs (interpreters or compilers)?
deba@maryland.arpa
deba%umdc.bitnet@wiscvm.arpa
------------------------------
Date: Mon, 17 Feb 86 16:00:16 -0500
From: johnson <johnson@dewey.udel.EDU>
Subject: seeking lisp for 68k unix world
Computer Logic, Inc., is seeking a license for an efficient lisp
running on 680XX-based Unix systems.
We are looking for an implementation of lisp that meets the
following criteria:
. source code is available
. runs on 68k-based UNIX machines
. allows loading of modules written in C, or other system-level language
. small and fast (even at the expense of advanced features)
. one-time license available, or nominal run-time environment royalties
. floating-point and integer arithmetic (arbitrary precision is NOT required)
. lisp "impurities" such as: setq, rplaca, rplacd
If you know of any lisp that meets these criteria, please pass us a pointer
to its author.
If YOU own an implementation of lisp, and would like to SELL it to us,
please send us:
. a description of your lisp, including:
. a list of the primitive functions
. the hardware/software requirements for a run-time system
. the hardware/software requirements for building your system from
source code
. some indication of the hard and soft limits of your system
(w/r/t maximum number of objects, number of symbols,
number of numbers, etc.)
. a brief description of any special features that you feel
would expedite software development in your lisp,
{editors, compilers, structured-objects, environment-dumps}
. how many times can you perform (T1 2000) without garbage collection on
a machine with 1048576 bytes of available memory?
(please extrapolate or interpolate from tests run on whatever
machine is available to you; be sure to tell us how you arrived
at your figure)
when the garbage collection does occur, how long does it take?
. how long does (T2 20) take?
. if your lisp has an iterative construct (do, loop, or prog with goto)
how long does it take to perform (T3 5000)?
Feel free to modify these functions syntactically to allow them to run
in your version of lisp, but please include the modified versions along
with your results. (ps: these functions will run unmodified
in muLISP-85)
Most unix systems provide a means to measure the elapsed time allocated
to a given process (try "man time" on your system). Please give your
times in terms of this quantity. If no such facility is available, be
sure to indicate the conditions under which you ran the benchmark.
(DEFUN T1
  (LAMBDA (N)
    (COND ((> N 1) (LIST N (T1 (- N 1))))
          (T (LIST 1)))))
(DEFUN T2
  (LAMBDA (N)
    (COND ((< N 2) 1)
          (T (+ (T2 (- N 1)) (T2 (- N 2)))))))
(DEFUN T3
  (LAMBDA (N)
    (LOOP (IF (= N 0) (RETURN)) (SETQ N (- N 1)))))
Please send all description responses to:
Apperson H. Johnson
Computer Logic Inc.
2700 Philadelphia Pike, P.O. Box 9640
Wilmington, De. 19809
{johnson@udel will read any pointers}
------------------------------
Date: 13 Feb 86 11:34:13 GMT
From: mcvax!ukc!cstvax!hwcs!aimmi!george@SEISMO (George Weir)
Subject: Dialogue help please needed ?
I am currently working on Dialogue Management Systems, with Natural
Language Understanding in them. Despite weeks of effort (including
Saturdays), I find my system is still unable to cope with several
forms of natural expression.
Please help to cure my depression: if you have a working system which
manages dialogue in (of course) natural language, complete with an efficient
interpreter/compiler, and which is able to cope with all known syntactic forms
as well as most semantics, please send me a copy, or post it to this
newsgroup.
I prefer a system which works in English, but Norwegian would do.
Thank you,
Ingy
P.S. It doesn't matter if your documentation isn't up to IEEE standards, as
long as it is close.
------------------------------
Date: Thu, 13 Feb 86 11:12:03 pst
From: sdcsvax!uw-beaver!ssc-vax!bcsaic!pamp@ucbvax.berkeley.edu
Subject: Re: request for LISP source code
In article <8602031844.AA28255@ucbvax.berkeley.edu> you write:
> I am teaching an AI course for the continuing education program at
>St. Mary's College in Southern Maryland. This is my first time teaching
>LISP and I would appreciate access to the source code for "project-
>sized" LISP programs or any other teaching aids or material. We are
>using the 2nd edition of both Winston's AI and Winston&Horne's LISP.
>I hate to ask for help, but we are pretty far from mainstream AI
>down here and my students and I all have full time jobs so any help we
>can get from the professional AI community would be greatly
>appreciated by all of us.
>
> Bob Woodruff
> Veda@paxrv-nes.arpa
I'd like to make a recommendation on additional texts. We have found
Winston & Horn to be a bit irritating to work with, especially since
the problems and answers are either too vaguely stated or filled with
bugs. Two other books that we have found to handle LISP more adequately
are:
Touretzky, David S., 1984, LISP: A Gentle Introduction to
Symbolic Computation; Harper & Row, New York, 384 p.
-- A good intro text for those who have no
experience in symbolic processing (generally, most
conventional programmers). Gives good coverage of
the basic principles behind LISP.
Wilensky, Robert, 1984, LISPcraft; W. W. Norton & Company, New York,
385 p.
-- Covers programming techniques and LISP philosophy
across different dialects quite well.
One thing that has helped with the training around the AI center
here is to take the time to give a little of the history of
LISP, where and why the different dialects have developed, and
a little of history of hardware currently in use. A short time
spent on relations to PROLOG couldn't hurt. (A good short article
on LISP and PROLOG history is:
Tello, Ernie, April 16, 1985, The Languages of AI Research;
PC Magazine, v. 4, no. 8, p. 173-189.)
Hope this helps.
P.M.Pincha-Wagener
------------------------------
Date: 14 Feb 86 22:11:11 GMT
From: decvax!cwruecmp!leon@ucbvax.berkeley.edu (Leon Sterling)
Subject: Re: Pointers to Logo?
The AI department at the University of Edinburgh used to teach its
undergraduate courses in AI using Logo several years ago.
The lecture notes appear as a book called
Artificial Intelligence, published (I think) by Edinburgh University
Press, the editor is Alan Bundy.
------------------------------
Date: Wed, 19 Feb 86 07:57:07 PST
From: Curtis L. Goodhart <goodhart%cod@nosc.ARPA>
Subject: MRS
There was a recent question about what MRS stands for. According to
"The Compleat Guide to MRS" by Stuart Russell Esq., Stanford University
Knowledge Systems Laboratory Report No. KSL-85-12, page 2, "MRS stands for
Meta-level Representation System". In the preface on page i MRS is
described briefly as "a logic programming system with extensive meta-level
facilities. As such it can be used to implement virtually all kinds of
artificial intelligence applications in a wide variety of architectures."
Curt Goodhart (goodhart@nosc ... on the arpanet)
------------------------------
Date: Thu, 13 Feb 86 10:13 EST
From: Seth Steinberg <sas@BBN-VAX.ARPA>
Subject: Re: Taxonomizing in AI and Dumplings
Building a taxonomy is a means of predicting what will be found. Anyone
who has read any of Steve Gould's columns in Natural History will be
quite familiar with this problem. When Linnaeus devised the modern
biological taxonomy of the plant kingdom he was criticized for his
heavy emphasis on the sex lives of the flowers. He was considered
crude and salacious. He worked in a hurry to preempt any competitive
scheme and avoid a split in the field but his choice was prophetic and
his emphasis on sex was vindicated by Darwin's later work which argued
that sex was both essential to selection (no sex, no children) AND to
the origin and maintenance of the species.
Of course for every "good" taxonomy there are dozens of losers. Take
the old earth, air, fire and water taxonomy with its metaphoric power.
It still works; look in the Science Fiction and Fantasy section of your
local bookstore. Of course chemists and physicists use Mendeleev's
taxonomy of the elements which has much better predictive power. There
is nothing wrong with building these structures as long as they can be
used to predict or explain something. Breaking up LISP programs into
families based on the number of parentheses has only limited predictive
power.
Building a taxonomy is no more or less than constructing a theory and
building a theory is useful because it gives people an idea of what to
look for. A sterile taxonomy is not particularly useful. That is the
positive side. A theory also tells people what to ignore and biology
is full of overlooked clues, all carefully noted and explained, waiting
to be illuminated by a new theory.
I think the debate going on now is typical in any young field. If we
had a theory we could use it to march rapidly along its path, much like
an Interstate highway. Even if we find it doesn't get us where we want
to go, we had a smooth pleasant ride. Witness classical
electrodynamics, its collapse and the advent of quantum theory. The
justifiable fear is that we will race past our exit and exclude or
ignore crucial signs which indicate the correct path.
Personally I think that it is time to set up a few theories of AI so
that we can have the fun of knocking them down. As one might expect,
most theories at this stage either are useless, lacking predictive
power (except possibly for predicting tenure), or are so weak and full
of holes that you can drive a truck full of LISP machines through them.
When people start developing theories with real predictive power that
are really hard to knock down then we can relax a bit.
Seth Steinberg
P.S. This month's Scientific American had an article on quantum effects
in biological reactions at low temperatures and the author argues that
conformational resonances (which determine reactivities) are driven by
quantum tunneling! Maybe there ARE carcinogenic vibrations!
------------------------------
Date: 12 Feb 86 17:02:35 GMT
From: hplabs!utah-cs!shebs@ucbvax.berkeley.edu (Stanley Shebs)
Subject: Re: taxonomizing in AI: useless, harmful
In article <3600038@iuvax.UUCP> marek@iuvax.UUCP writes:
>... Taxonomizing is a debatable art of empirical
>science, usually justified when a scientist finds itself overwhelmed with
>gobs and gobs of identifiable specimens, e.g. entomology. But AI is not
>overwhelmed by gobs and gobs of tangible singulars; it is a constructive
>endeavor that puts up putative mechanisms to be replaced by others. The
>kinds of learning Michalski so effortlessly plucks out of the thin air are not
>as incontrovertibly real and graspable as instances of dead bugs.
Now I'm confused! Were you criticizing Michalski et al's taxonomy of
learning techniques in pp. 7-13 of "Machine Learning", or the "conceptual
clustering" work that he has done? I think both are valid - the first
is basically a reader's guide to help sort out the strengths and limitations
of dozens of different lines of research. I certainly doubt (and hope)
no one takes that sort of thing as gospel.
For those folks not familiar with conceptual clustering, I can characterize
it as an outgrowth of statistical clustering methods, but which uses a
sort of Occam's razor heuristic to decide what the valid clusters are.
That is, conceptual "simplicity" dictates where the clusters lie. As an
example, consider a collection of data points which lie on several
intersecting lines. If the data points you have come in bunches at
certain places along the lines, statistical analysis will fail dramatically;
it will see the bunches and miss the lines. Conceptual clustering will
find the lines, because they are a better explanation conceptually than are
random bunches. (In reality, clustering happens on logical terms in
a form of truth table; I don't think they've tried to supplant statisticians
yet!)
>Please consider whether taxonomizing kinds of learning from the AI perspective
>in 1981 is at all analogous to chemists' and biologists' "right to study the
>objects whose behavior is ultimately described in terms of physics." If so,
>when is the last time you saw a biology/chemistry text titled "Cellular
>Resonance" in which 3 authors offered an exhaustive table of carcinogenic
>vibrations, offered as a collection of current papers in oncology?...
Hmmm, this does sound like a veiled reference to "Machine Learning"!
Personally, I prefer a collection of different viewpoints over someone's
densely written tome on the ultimate answer to all the problems of AI...
>More constructively, I am in the process of developing an abstract machine.
>I think that developing abstract machines is more in the line of my work as
>an AI worker than postulating arbitrary taxonomies where there's neither need
>for them nor raw material.
>
> -- Marek Lugowski
I detect a hint of a suggestion that "abstract machines" are Very Important
Work in AI. I am perhaps defensive about taxonomies because part of my
own work involves taxonomies of programming languages and implementations,
not as an end in itself, but as a route to understanding. And of course
it's also Very Important Work... :-)
stan shebs
------------------------------
End of AIList Digest
********************
∂21-Feb-86 1323 LAWS@SRI-AI.ARPA AIList Digest V4 #33
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Feb 86 13:21:46 PST
Date: Fri 21 Feb 1986 09:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #33
To: AIList@SRI-AI
AIList Digest Friday, 21 Feb 1986 Volume 4 : Issue 33
Today's Topics:
Literature - New CSLI Reports & Indiana U. CS TR #176,
Reviews - SI Interactions, 2/86 & Applied Intelligence 12/85,
History - Airline Reservation Systems,
Machine Learning - Hopfield Networks,
Methodology - Dreyfus' Technology Review Article
----------------------------------------------------------------------
Date: Wed 19 Feb 86 17:20:04-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: New CSLI Reports
Report No. CSLI-85-34, ``Applicability of Indexed Grammars to
Natural Languages'' by Gerald Gazdar, Report No. CSLI-85-39, ``The
Structures of Discourse Structure'' by Barbara Grosz and Candace L.
Sidner, and Report No. CSLI-85-44, ``Language, Mind, and Information''
by John Perry, have just been published. These reports may be
obtained by writing to Trudy Vizmanos, CSLI, Ventura Hall, Stanford,
CA 94305 or Trudy@SU-CSLI.
------------------------------
Date: 14 Feb 86 16:12:00 GMT
From: ihnp4!inuxc!iubugs!iuvax!marek@ucbvax.berkeley.edu
Subject: Indiana U. CS TR #176
Due to conditions of poverty, the Indiana University Computer Science
Department is henceforth unable to supply free copies of my technical report
(#176) titled
"Why Artificial Intelligence Is Necessarily Ad Hoc: One's Thinking/Approach/
Model/Solution Rides on One's Metaphors". The volume of requests has simply
outstripped our financial resources. However, a modest bribe of $2.00 will
suffice to propagate the item to you. More substantial unrestricted grants
from corporate, philanthropic or governmental sources are always welcome.
Please make your bribes PAYABLE TO Indiana University Foundation, but do
continue to ADDRESS REQUESTS for our TRs TO Nancy Garrett, Computer Science
Department, Lindley Hall 101, Bloomington, Indiana 47405. You could let Nancy
know in advance that you're sending money for one: nlg@iuvax.uucp or
nlg@indiana.csnet.
As the saying goes, sorry for the inconvenience, but that's the breaks. Several
people got the TR for free, but no more. Perhaps it should be noted that any
run on IU tech reports will generate a bribe request proportional to the
length of the item. TR #176 has 52 pages.
-- Marek Lugowski
Indiana University Computer Science
Bloomington, Indiana 47405
marek@indiana.csnet
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Review - SI Interactions, 2/86
Summary of
AI Interactions, Volume 1, Number 7, February 1986
Texas Instruments has invested more money in AI research than the Japanese
have in their Fifth Generation Project. The Computer Systems Laboratory is
working to design computers with several different types of processors
on the same bus or chip, e.g. array processors, graphics processors, and
symbolic processors. They are also developing an architectural concept called
Odyssey, which combines multiple digital signal processing chips on a single
NuBus board.
At Purdue University in West Lafayette, they have developed
an expert system that assists farmers in determining the best way to market
their product. It has 180 rules, with the prototype done in three months.
Discussion of the features of Personal Consultant Plus. It includes
frames, meta-rules, and mapping functions. Also discusses the use of contexts.
Texas Instruments has announced the Relational Table Management System, a
database system for the Explorer. It interfaces with the Lisp environment.
A domain can store any type of object, including graphics, pointers,
lists, relation names, or large amounts of text. It interfaces with
Natural Language Menu, a graphics tool kit, and PROLOG.
Texas Instruments has developed an expert system to assist pilots
in the F-16. The Defense Department awarded TI 3 million dollars to
develop a similar system for attack helicopters. The F-16 system
handles two specific problems, towershaft failure and loss of canopy.
Towershaft is the mechanism by which the F-16 jet engine provides
power to other aircraft systems.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Review - Applied Intelligence 12/85
Summary of
Applied Intelligence, Volume 2 Number 4 December 1985
At the recent Instrumentation Society of America show in
Philadelphia, four major vendors announced their intent to offer
PICON, Lisp Machine Incorporated's real-time expert system, to
their customers. Leeds and Northrup demonstrated the system in
conjunction with their MAX 1 process control system. PICON is
running at six customer sites. A large chemical processing company
is using PICON for control; the knowledge engineering was done by
a process engineer who developed a 350-frame knowledge base in
a period of two months. Oak Ridge International bought PICON for
robotics and Lockheed bought it for CAD applications.
PICON has been installed at the Texaco chemical plant in Port Arthur
where it monitors several processes. It interfaces to a Honeywell
TDC-2000 process control system. Pete Thompson is the Manager of
Artificial Intelligence at Texaco's Computer and Information Systems
Department.
Lisp Machine Incorporated also announces the availability of ObjectLisp,
a second generation approach to object-oriented programming. It directly
invokes local functions within the context of the object and releases
the programmer from having to define message-passing structures.
ObjectLISP allows both object variables and object functions to
be either created or deleted interactively without requiring recompilation.
MCC has made its sixth order for Lambda hardware from LMI.
------------------------------
Date: Wed, 19 Feb 86 23:47:40 est
From: decvax!utzoo!dciem!mmt@ucbvax.berkeley.edu
Subject: Airline Reservation Systems
> Date: 23-Jan-86 12:52:19-PST
> From: jbn at FORD-WDL1
> ... Contrast this with Minksy's recent claims seen here that airline
> reservation systems were invented by someone at the MIT AI lab in the
> 1960s.
>
>I decided to take a close look at this contrast. After searching through
>the recent archives, the only mention by Minsky of airline reservation
>systems that I can find is:
>
> And I'm pretty sure that the first practical airline reservation was
> designed by Danny Bobrow of the BBN AI group around 1966.!
>
>Now that I have refreshed my memory with what he actually said, I think the
>contrast is not quite as unflattering. Given the use of the adjective
>``practical'', someone might even be able to make a case that he is right.
The case would not be watertight. Air Canada was using a reservation
system developed at Ferranti Electric Inc., (a Toronto-based firm not
to be confused with Ferranti in UK), running on a redundant computer
system called Gemini, from 1961 for about 10 years until it was replaced.
It did all the things one associates with computerized reservation systems,
and was used by reservation clerks to deal with the public, so I guess
you could call it "practical."
Incidentally, this system led to the development of what may be the
first fully commercial time-sharing computer system (I mean memory-protected,
independent multi-user multitasking), the FP-6000, which was first
delivered around the end of 1962 or the beginning of 1963. The design
for that machine formed the basis of the ICL 1900 series in the UK.
It, like the airline reservations system, was a totally Canadian design
(if you will forgive the chauvinism).
Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt
------------------------------
Date: 15 Feb 86 05:32:32 GMT
From: sdcsvax!elman@ucbvax.berkeley.edu (Jeff Elman)
Subject: Re: Hopfield Networks?
In article <5413@mordor.UUCP>, ehj@mordor.UUCP (Eric H Jensen) writes:
> In article <1960@peora.UUCP> jer@peora.UUCP (J. Eric Roskos) writes:
> >In a recent issue (Issue 367) of EE Times, there is an article titled
> >"Neural Research Yields Computer that can Learn". This describes a
> >simulation of a machine that uses a "Hopfield Network"; from the ...
>
> I got the impression that this work is just perceptrons revisited.
> All this business about threshold logic with weighting functions on
> the inputs adjusted by feedback (i.e. the child reading) ...
This refers to some work by Terry Sejnowski, in which he uses a method
developed by Dave Rumelhart (U.C. San Diego), Geoff Hinton (CMU), and Ron
Williams (UCSD) for automatic adjustment of weights on connections between
perceptron-like elements. Sejnowski applied the technique to
a system which automatically learned text-to-phoneme correspondences
and was able to take text input and then drive a synthesizer.
The current work being done by Rumelhart and his colleagues certainly
builds on the early perceptron work. However, they have managed to
overcome one of the basic deficiencies of the perceptron. While perceptron
systems have a simple learning procedure, that procedure works only
for simple 2-layer networks, and such networks have limited power (they
cannot recognize XOR patterns, for instance). More complex multi-layer
networks are more powerful, but -- until recently -- there has been
no simple way for these systems to automatically learn how to adjust
weights on connections between elements.
Rumelhart has solved this problem, and has discovered a generalized
form of the perceptron convergence procedure which applies to networks
of arbitrary depth. He and his colleagues have explored this technique in
a number of interesting simulations, and it appears to have a tremendous
amount of power. More information is available from Rumelhart
(der@ics.ucsd.edu or der@nprdc.arpa), or in a technical report "Learning
Internal Representations by Error Propagation" (Rumelhart, Hinton, Williams),
available from the Institute for Cognitive Science, U.C. San Diego,
La Jolla, CA 92093.
Jeff Elman
Phonetics Lab, UCSD
elman@amos.ling.ucsd.edu / ...ucbvax!sdcsvax!sdamos!elman
------------------------------
Date: 13 Feb 86 21:30:45 GMT
From: decwrl!glacier!kestrel!ladkin@ucbvax.berkeley.edu (Peter Ladkin)
Subject: Re: "self-styled philosophers"
In article <3189@umcp-cs.UUCP>, mark@umcp-cs.UUCP (Mark Weiser) writes:
> A recent posting called the Dreyfus' "self-styled philosophers". This
> is unfair, since Hubert Dreyfus is also styled a philosopher by many another
> philosopher in the area of phenomenology.
Agreed. He is also a professional philosopher, holding a chair at
U.C. Berkeley. His criticisms of AI claims are thoroughly thought
through, with a rigor that a potential critic of his views would
do well to emulate. He has done AI great service by forcing
practitioners to be more self-critical. AAAI should award him
distinguished membership!
His main thesis is that there are certain human qualities and
attributes, for example certain emotions, that are just not the
kinds of things that are amenable to mechanical mimicry. This
general claim seems unexceptional. His examples may not
always be the most appropriate for his claims, some of
his arguments seem to be incorrect, and, since he isn't a
practicing computer scientist, his knowledge of current research
is lacking. But it is intellectual sloppiness to deride him
without addressing his arguments.
There is, however, a political component to the discussion.
He believes he is able to show that certain types of research
cannot justify the claims they make on the basis of which they
are funded. He may be right in some of these cases. This is
clearly a sensitive issue, which muddies the intellectual
waters. Both sides would do well to separate the issues.
Peter Ladkin
------------------------------
Date: Tue, 11 Feb 86 03:48:30 PST
From: ucdavis!lll-crg!amdcad!amd!hplabs!fortune!redwood!rpw3@ucbvax
.berkeley.edu (Rob Warnock)
Subject: Re: Technology Review article
+
| The [Technology Review] article was written by the Dreyfus brothers, who ...
| claim... that people do not learn to ride a bike by being told how to do it,
| but by a trial and error method that isn't represented symbolically.
+
Hmmm... Something for these guys to look at is Seymour Papert's work
in teaching such skills as bicycle riding, juggling, etc. by *verbal*
and *written* means. That's not to say that some trial-and-error
practice is not needed, but that there is a lot more that can be done
analytically than is commonly assumed. Papert has spent a lot of time
looking at how children learn certain physical skills, and has broken
those skills down into basic actions, "subroutines", and so forth.
After reading his book "Mindstorms", I picked up three apples and, following
the directions in the book, taught myself to juggle (3 things, not 4 to n) with
only a few minutes' practice. Particularly useful were his warnings of which
errors were associated with which levels of the subroutine hierarchy. (Oddly
enough, most errors in the overall performance come not from the coordination
of the three balls, but from not mastering the most basic skill, throwing-
and-catching a single ball. The most serious mistake here is looking at the
balls at any points in the trajectory *other* than at the very top.)
So... there is at least SOME hint that the difference between "knowledge"
and "skills" is not as vast as we normally assume, *if* the "skills" are
analyzed properly with a view to learning.
Rob Warnock
Systems Architecture Consultant
UUCP: {ihnp4,ucbvax!dual}!fortune!redwood!rpw3
DDD: (415)572-2607
USPS: 627 26th Ave, San Mateo, CA 94403
------------------------------
Date: 16 Feb 86 23:44:45 GMT
From: decvax!linus!philabs!dpb@ucbvax.berkeley.edu (Paul Benjamin)
Subject: Re: Re: "self-styled philosophers"
> In article <3189@umcp-cs.UUCP>, mark@umcp-cs.UUCP (Mark Weiser) writes:
> > A recent posting called the Dreyfus' "self-styled philosophers". This
> > is unfair, ...
>
> Agreed. He is also a professional philosopher, ...
Baloney. His views show a total lack of understanding of science,
together with an inability to perform useful work relating to science.
For example, in his recent article, he recounts an "experiment"
he conducted to show that chessplayers do not use reasoning very
much, but just play instinctively. This experiment consisted of
an International Master playing against a weaker player. The IM
was forced to add a sequence of numbers while playing, thus
supposedly occupying his reasoning capability. The IM won anyway,
thus supposedly showing that chess is not primarily a reasoning
venture, or more precisely, that the difference between being a
master and just very good is not due to superior reasoning.
But wait a minute! How does this qualify as an experiment? Where
is the control group? Did he have the IM play a number of players,
sometimes having to add, sometimes not, and compare their results?
NO. Did he vary the distracting task, in case addition was not
demanding enough? NO.
In short, this experiment means nothing, since the IM may well have
played worse than he would have without having to add, but won
anyway. This type of "evidence" is constantly cited by Dreyfus to
support his views, but it's meaningless, due to his inability to
perform good work.
Also, he remarks that he and his brother have both failed to improve
to a master level in chess, and somehow uses this to support his
views, too! His basic argument is that if reasoning is so important,
then he should be able to make master, implying that he is a good
reasoner! It obviously has never occurred to him to ask someone
who is a master if reasoning is important to him. I am a USCF master,
and can guarantee that improving my reasoning capability has raised
my rating (over 300 points in the last few years). It seems arrogant
for him to reach conclusions about fields in which he is not
accomplished. This applies to both chess and AI.
Paul Benjamin
------------------------------
Date: 17 Feb 86 15:57:27 GMT
From: nike!topaz!harvard!bu-cs!bzs@ucbvax.berkeley.edu (Barry Shein)
Subject: Re: Re: "self-styled philosophers"
>For example, in his recent article, he recounts an "experiment"
>he conducted to show that chessplayers do not use reasoning very
>much, but just play instinctively. This experiment consisted of
>an International Master playing against a weaker player. The IM
>was forced to add a sequence of numbers while playing, thus
>supposedly occupying his reasoning capability. The IM won anyway
I just repeated this experiment and I think he is right. I forced
my SUN to add sequences of numbers while playing chess with me and
I lost.
Here, do it yourself:
main()
{
int i,j;
for(;;) for(i=j=0; i < 10000 ; i++) j += i ;
}
save this in file foo.c, compile with 'cc foo.c' and say:
a.out & (runs it in the background)
chesstool
it slows it down only a tad, barely noticeable, but I still keep losing!
AMAZING! my computer is human!
-Barry Shein, Boston University
------------------------------
End of AIList Digest
********************
∂23-Feb-86 1525 LAWS@SRI-AI.ARPA AIList Digest V4 #35
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 Feb 86 15:25:18 PST
Date: Sun 23 Feb 1986 11:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #35
To: AIList@SRI-AI
AIList Digest Sunday, 23 Feb 1986 Volume 4 : Issue 35
Today's Topics:
Games - Computer Othello & Computer Chess,
Automata - Self-Replication and Artificial Life,
Methodology - A Thought on Thinking,
Humor - AI Koans & The Naive Dog Physics Manifesto
----------------------------------------------------------------------
Date: 16 Feb 86 21:34:13 EST
From: Kai-Fu.Lee@SPEECH2.CS.CMU.EDU
Subject: Computer Othello (Bill)
[Forwarded from the CMU bboard by Laws@SRI-AI.]
In the recent North American Computer Othello Championship Tournament,
our department's entry, BILL, placed second in a field of 11. The
final standings were:
1. Aldaron (C. Heath) 7.5 - 0.5
2. Bill (K. Lee & S. Mahajan) 7 - 1
3. Brand (A. Kierulf) 5 - 3
3. Fort Now (?) 5 - 3
Bill's only loss was to Aldaron, the defending champion, as well as
the program that should have beaten Iago in 1981. However, Bill's
loss was due to the choice of color in the game with Aldaron. In an
unofficial rematch with Aldaron, Bill won with the colors reversed.
Furthermore, Bill soundly defeated the program that tied Aldaron.
With the many improvements that we have in mind and the enthusiastic
participation this year, we expect an exciting championship next year.
If anyone is interested in more information about Bill, this tournament,
or the game transcripts, please send mail to kfl@speech2 or mahajan@h.
------------------------------
Date: 17 February 1986 1954-EST
From: Hans Berliner@A.CS.CMU.EDU
Subject: computer chess (final)
[Forwarded from the CMU bboard by Laws@SRI-AI.]
The Eastern Team championship is essentially over. Hitech won and
drew today, producing a final score of 5.5 - .5. It played
remarkably well. Outside of falling into an opening trap due to a
deficiency in its book, and being outplayed a little in game four but
recovering when the opponent made an error, its play was above
criticism. It played mainly against expert-level players, a class
that is almost extinct in Pittsburgh, and beat every one of them. It
drew its final game with a strong master rated nearly equal (2291) to
Hitech. It had black in four games and white in two, a noticeable
disadvantage. Mike Valvo, who directs the ACM tournaments, played on
board one for the team and finished with a score of 4.5 - 1.5.
Hitech played on board two, and Belle played on board three. Belle
apparently has had a hardware overhaul, and played much better than
it had recently. However, for comparison, Belle scored 5 - 1,
losing in the last round, and it had four whites and two blacks and
played against slightly weaker opponents than Hitech. The fourth
board human on the team was a catastrophe, scoring less than 50%.
The crucial match came in the fifth round: both computers won and both
humans lost, making the match a draw and ruining our chances of winning
the title (the team had won all its previous matches). In the final
round, there are still some
unfinished games, but the team should do no worse than draw, giving a
team record of 5 -1 (two drawn matches). Overall, it is safe to say
that on our team the species @u[robot sapiens] far outperformed the
species @u[homo sapiens].
------------------------------
Date: Wed, 19 Feb 86 10:20:20 EST
From: Chris←Langton%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: Artificial Life
I read in AIList a series of comments on the size of self-reproducing
systems (ARPA.AIList, Volume 3, Issue 71, 06/01/85 - starting with the
message from zim@mitre of 05/24/85).
I have published an article wherein I exhibit a self-reproducing configuration
embedded in a cellular automaton which occupies a mere 10x15 cell rectangle.
The construction is based on a modification of one of Codd's components
(see Codd: Cellular Automata) in his simplification of von Neumann's
self-reproducing machine. My article is published in: Physica 10D (1984)
North Holland, pp 135-144, entitled 'self-reproduction in cellular automata'.
Basically, this configuration consists of a looped pathway with a construction
arm extending out from one corner. Signals cycling around the loop cause
the construction arm to be extended by a certain amount and then cause a
90-degree corner to be built. As this sequence is executed four times (due
to the same signal sequence cycling around the loop 4 times), the four
sides of an offspring loop are built. When the extended construction arm
runs into itself, the resulting collision causes the two loops to detach
from each other and also triggers the construction of a new construction
arm on each loop. The new arm on the parent loop is located at the
next corner 'downstream' (in the sense of signal flow) from the original
site. Thus, the parent loop will go on to build another loop in a new
direction. Meanwhile, when the offspring was formed, a copy of the signal
sequence that serves as the description was trapped inside it when the
two detached from one another, thus it, too, goes on to build offspring.
The result is a growing colony of loops that expands out into the array,
consisting of a reproductive outer fringe surrounding a growing 'dead'
core, in the manner of a coral reef or the cross section of a tree.
Details are to be found in the article. Although this construction is
not capable of universal construction or computation, it clearly
reproduces itself in a non-trivial manner, unlike the reproduction under
modulo addition rules, of which Fredkin's reproducing system is an example.
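The loop-and-arm dynamics described above run on standard cellular-automaton
machinery: a lattice of cells updated synchronously by a local rule over the
von Neumann neighborhood. The sketch below is an illustration of that update
scheme only — the stand-in signal-propagation rule is mine, not Langton's
actual eight-state transition table (for which see the Physica 10D article):

```python
# Toy synchronous cellular-automaton update.  The grid is a sparse dict
# mapping (x, y) -> state, with 0 (quiescent) assumed for absent cells.

def step(grid, rule):
    """Apply `rule` to every cell at once (synchronous update).

    `rule` maps the von Neumann neighborhood tuple
    (center, north, east, south, west) to the cell's next state.
    """
    cells = set(grid)
    # Quiescent neighbors of live cells may wake up, so include them.
    for (x, y) in list(cells):
        cells |= {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

    def s(p):
        return grid.get(p, 0)

    new = {}
    for (x, y) in cells:
        nbhd = (s((x, y)), s((x, y - 1)), s((x + 1, y)),
                s((x, y + 1)), s((x - 1, y)))
        state = rule(nbhd)
        if state:                       # only store non-quiescent cells
            new[(x, y)] = state
    return new

def signal_rule(nbhd):
    """Stand-in rule: a signal (state 2) travels east along a wire of 1s."""
    center, north, east, south, west = nbhd
    if center == 2:                     # signal passes on, leaving wire behind
        return 1
    if center == 1 and west == 2:       # wire cell receives signal from west
        return 2
    return center

wire = {(0, 0): 2, (1, 0): 1, (2, 0): 1}   # signal at the left end of a wire
wire = step(wire, signal_rule)              # signal moves one cell east
```

In the actual loops, the circulating states encode the construction program;
the same update machinery, driven by Langton's rule table, yields the
reproducing colony.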
I am also working on cellular automaton simulations of insect colonies and
artificial biochemistries. I have another article coming out in the proceedings
of the conference on 'Evolution, Games, & Learning' held at the Los Alamos
National Labs last May. It is entitled 'studying artificial life with cellular
automata'. There will be a video tape available soon from Aerial Press in
Santa Cruz which illustrates the self-reproducing loops as well as the
artificial insect colony simulations and other examples of `artificial life'.
I would be very interested in hearing from anybody who is working on anything
which might fall under the general heading 'artificial life'. I would also
like to try to get together a workshop, with computer support, where people
who have been working in this area could get together and have a 'jam session'
of sorts, and see each other's stuff. Any proceedings from such a workshop
would benefit greatly from having a video published along with it. If anybody
is interested in helping to organize such a workshop, send me a message. I
can be reached at: CGL%UMICH-MTS@MIT-MULTICS.ARPA
USPS: Christopher G. Langton / EECS Dept. / University of Michigan /
Ann Arbor MI 48109
MA-BELL (now divorced from PA-ATT) 313-763-6491
------------------------------
Date: Sat, 15 Feb 86 15:09:11 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Thought
From Vol 4 # 26:- "The idea is
to get kids to be more thoughtful about thinking by getting them to
try to think about how animals think, and by taking the results of
these contemplations and actually building animal-like creatures that
work." Alan Kay.
From Vol 3 # ??:- Date: Tue, 12 Mar 85
"Just as man had to study birds, and was able to derive the underlying
mechanism of flight, and then adapt it to the tools and materials
at hand, man must currently study the only animal that thinks
in order to derive the underlying principles there also." Frank Ritter
I am struck by two (or more?) very different uses of the word "think"!
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...mcvax!ukc!kcl-cs!qmc-ori!gcj
------------------------------
Date: Thu, 13 Feb 86 00:08:02 PST
From: "Douglas J. Trainor" <trainor@LOCUS.UCLA.EDU>
Subject: a cuppla ai koans
from <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
One day SIMON was going to the cafeteria when he met WEIZENBAUM, who
said: "I have a problem for you to solve." SIMON replied, "tell me more
about your problem," and walked on.
===================================================================
from <Kelley.pa@Xerox.COM>
How long would a simulation of its own lifetime survive?
What is the rate of change of all metaphors for the viability of that rate?
===================================================================
someone resent me Gabriel's old '83 koan <robins@usc-isib>:
A famous Lisp Hacker noticed an Undergraduate sitting in front of a
Xerox 1108, trying to edit a complex Klone network via a browser.
Wanting to help, the Hacker clicked one of the nodes in the network
with the mouse, and asked "what do you see?" Very earnestly, the
Undergraduate replied "I see a cursor." The Hacker then quickly pressed
the boot toggle at the back of the keyboard, while simultaneously
hitting the Undergraduate over the head with a thick Interlisp Manual.
The Undergraduate was then Enlightened.
------------------------------
Date: Wed, 19 Feb 86 14:38 PST
From: Cottrell@NPRDC
Subject: The Naive Dog Physics Manifesto
From: Leslie Kaelbling <Kaelbling@SRI-AI.ARPA>
From: MikeDixon.pa@Xerox.COM
From: haynes@decwrl.DEC.COM (Charles Haynes)
SEMINAR
From PDP to NDP through LFG:
The Naive Dog Physics Manifesto
Garrison W. Cottrell
Department of Dog Science
Condominium Community College of Southern California
The Naive Physics Manifesto (Hayes, 1978) was a seminal paper in
extending the theory of knowledge representation to everyday phenomena.
The goal of the present work is to extend this approach to Dog Physics,
using the connectionist (or PDP) framework to encode our everyday,
commonsense knowledge about dogs in a neural network[1]. However,
following Hayes, the goal is not a working computer program. That is in
the province of so-called performance theories of Dog Physics (see, for
example, my 1984 Modelling the Intentional Behavior of the Dog). Such
efforts are bound to fail, since they must correspond to empirical data,
which is always changing. Rather, we will first try to design a
competence theory of dog physics[2], and, as with Hayes and Chomsky, the
strategy is to continually refine that, without ever getting to the
performance theory.
The approach taken here is to develop a syntactic theory of dog
actions which is constrained by Dog Physics. Using a variant of
Bresnan's Lexical-Functional Grammar, our representation will be a
context-free action grammar, with associated s-structures (situation
structures). The s-structures are defined in terms of Situation
Dogmatics[3], and are a partial specification of the situation of the
dog during that action.
Here is a sample grammar which generates strings of action
predicates corresponding to dog days[4], (nonterminals are capitalized):
Day -> Action Day | Sleep
Action -> Sleep | Eat | Play | leavecondo Walk
Sleep -> dream Sleep | deaddog Sleep | wake
Eat -> Eat chomp | chomp
Play -> stuff(Toy, mouth) | hump(x,y) | getpetted(x,y)
Toy -> ball | sock
Walk -> poop Walk | trot Walk | sniff Walk | entercondo
Several regularities are captured by the syntax. For example,
these rules have the desirable property that pooping in the condo is
ungrammatical. Obviously such grammatical details are not innate in the
infant dog. This brings us to the question of rule acquisition and
Universality. These context-free action rules are assumed to be learned
by a neural network with "hidden" units[5] using the bark propagation
method (see Rumelhart & McClelland, 1985; Cottrell 1985). The beauty of
this is that Dogmatic Universality is achieved by assuming neural
networks to be innate[6].
The above rules generate some impossible sequences, however. This
is the job of the situation equation annotations. Some situations are
impossible, and this acts as a filter on the generated strings. For
example, an infinite string of stuff(Toy, mouth)'s is prohibited by the
constraint that the situated dog can only fit one ball and one sock in
her mouth at the same time. One of the goals of Naive Dog Physics is to
determine these commonsense constraints. One of our major results is
the discovery that dog force (df) is constant. Since df = mass *
acceleration, this means that smaller dogs accelerate faster, and dogs
at rest have infinite mass. This is intuitively appealing, and has been
borne out by my dogs.
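The action grammar above can be sampled directly. The sketch below is an
illustrative Python rendering (mine, not part of the seminar): nonterminals
are expanded randomly, and past a depth bound each rule's last alternative is
forced, which for this grammar is the one that terminates recursion; the
predicate stuff(Toy, mouth) is split into tokens so Toy can be expanded.

```python
import random

# The action grammar from the abstract, as alternatives of token lists.
# Tokens that appear as keys are nonterminals; everything else is terminal.
GRAMMAR = {
    "Day":    [["Action", "Day"], ["Sleep"]],
    "Action": [["Sleep"], ["Eat"], ["Play"], ["leavecondo", "Walk"]],
    "Sleep":  [["dream", "Sleep"], ["deaddog", "Sleep"], ["wake"]],
    "Eat":    [["Eat", "chomp"], ["chomp"]],
    "Play":   [["stuff(", "Toy", ", mouth)"], ["hump(x,y)"],
               ["getpetted(x,y)"]],
    "Toy":    [["ball"], ["sock"]],
    "Walk":   [["poop", "Walk"], ["trot", "Walk"], ["sniff", "Walk"],
               ["entercondo"]],
}

def generate(symbol, rng, depth=0):
    """Expand `symbol` into a list of terminal tokens (leftmost derivation).

    Past the depth bound, always take a rule's last alternative, which
    here leads out of every recursion, so expansion halts."""
    alternatives = GRAMMAR.get(symbol)
    if alternatives is None:            # terminal token
        return [symbol]
    if depth >= 8:
        alternatives = alternatives[-1:]
    tokens = []
    for tok in rng.choice(alternatives):
        tokens.extend(generate(tok, rng, depth + 1))
    return tokens

day = generate("Day", random.Random(3))
print(" ".join(day).replace("( ", "(").replace(" ,", ","))
```

Note that every sampled day ends with wake (the closing Sleep), and poop can
only follow a leavecondo, so pooping in the condo is indeed ungrammatical.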
←←←←←←←←←←←←←←←←←←←←
[1]We have decided not to use FOPC, as this has been proven by Schank
(personal communication) to be inadequate, in a proof too loud to fit in
this footnote.
[2]The use of competence theories is a standard trick first intro-
duced by Chomsky, which avoids the intrusion of reality on the theory.
An example is Chomsky's theory of light bulb changing, which begins by
rotating the ceiling...
[3]Barwoof & Peppy (1983). Situation Dogmatics (SD) can be regarded
as a competence theory of reality. See previous footnote. Using SD is a
departure from Hayes, who exhorts us to "understand what [the represen-
tation] means." In the Gibsonian world of Situation Dogmatics, we don't
know what the representation means. That would entail information in
our heads. Rather, following B&P, the information is out there, in the
dog. Thus, for example, the dog's bark means there are surfers walking
behind the condo.
[4]Of course, a less ambitious approach would just try to account for
dog day afternoons.
[5]It is never clear in these models where these units are hidden, or
who hid them there. The important thing is that you can't see them.
[6]Actually this assumption may be too strong when applied to the
dogs under consideration. However, this is much weaker than Pinker's as-
sumption that the entirety of Joan Bresnan's mind is innate in the
language learner. It is instructive to see how his rules would work
here. We assume hump(x,y) is innate, and x is bound by the default s-
function "Self". The first time the puppy is humped, the mismatch
causes a new Passive humping entry to be formed, with the associated
redundancy rule. Evidence for the generalization to other predicates is
seen in the puppy subsequently trying to stuff her mouth into the ball.
------------------------------
End of AIList Digest
********************
∂23-Feb-86 1748 LAWS@SRI-AI.ARPA AIList Digest V4 #34
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 Feb 86 17:48:24 PST
Date: Sun 23 Feb 1986 11:28-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #34
To: AIList@SRI-AI
AIList Digest Sunday, 23 Feb 1986 Volume 4 : Issue 34
Today's Topics:
Seminar - Inferring Domain Plans in Question Answering (SRI),
Course - Connectionist Summer Workshop Reminder (CMU),
Conference - Expert Database Systems Advance Program
----------------------------------------------------------------------
Date: Thu 20 Feb 86 18:04:16-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Inferring Domain Plans in Question Answering (SRI)
INFERRING DOMAIN PLANS IN QUESTION-ANSWERING
Martha E. Pollack (POLLACK@SRI-AI)
AI Center, SRI International
11:00 AM, MONDAY, February 24
SRI International, Building E, Room EJ228 (new conference room)
The importance of plan inference (PI) in models of conversation has been
widely noted in the computational-linguistics literature, and its
incorporation into question-answering systems has enabled a range of
cooperative behaviors. The PI process in each of these systems, however, has
assumed that the questioner (Q) whose plan is being inferred and the
respondent (R) who is drawing the inference have identical beliefs about the
actions in the domain. In this talk I will argue that this assumption is too
strong, and often results in failure not only of the PI process, but also of
the communicative process that PI is meant to support. In particular, it
precludes the principled generation of appropriate responses to queries that
arise from invalid plans. I will present a model of PI in conversation that
distinguishes between the beliefs of the questioner and the beliefs of the
respondent. This will rest on an account of plans as mental phenomena:
"having a plan" will be analyzed as having a particular configuration of
beliefs and intentions. Judgements that a plan is invalid will be associated
with particular discrepancies between the beliefs that R ascribes to Q, when
R believes Q has some particular plan, and the beliefs R herself holds.
An account of different types of plan invalidities will be given, and shown
to provide an explanation for certain regularities that are observable in
cooperative responses to questions.
------------------------------
Date: 18 Feb 86 20:04 EST
From: Dave.Touretzky@A.CS.CMU.EDU
Subject: Course - Connectionist Summer Workshop Reminder (CMU)
Connectionist Summer Workshop Reminder
This is a reminder that the deadline for applying to attend the connectionist
summer workshop to be held June 20-29 at Carnegie Mellon is March 1st.
Applications are welcomed from graduate students and recent Ph.D.'s and
M.D.'s who are actively involved in connectionist research.
---> This is not just a summer school for training new
connectionists, as a previous announcement may have
implied. We plan to organize small working groups and hold
lively discussions with visiting speakers. New research
will be presented and people are encouraged to bring their
software for demos; we'll supply the machines.
To apply, send a copy of your vita and one relevant paper, technical
report, or research proposal to: Dr. David Touretzky, Computer Science
Department, Carnegie Mellon University, Pittsburgh, PA 15213.
------------------------------
Date: 12 Feb 86 13:15:00 GMT
From: sdcsvax!ncr-sd!ncrcae!usceast!kersch@ucbvax.berkeley.edu
(Larry Kerschberg)
Subject: Conference - Expert Database Systems -- Advance Program
Conference Advance Program and Registration Forms
First International Conference on Expert Database Systems
Sheraton Charleston Hotel
April 1-4, 1986
Sponsored by:
Institute of Information Management, Technology and Policy
College of Business Administration
University of South Carolina
In Cooperation With:
American Association for Artificial Intelligence (AAAI)
Association for Computing Machinery -- SIGMOD, SIGART, and SIGPLAN
IEEE Computer Society -- Technical Committee on Data Base Engineering
Agence de l'Informatique, France
Tuesday, April 1, 1986
Tutorial Day
8:30 am - 12:00 pm Morning Parallel Tutorials I
IA: Introduction to Artificial Intelligence
Instructor: Dr. Elaine Rich, MCC, Austin, Texas
Dr. Rich is currently leading a natural language research team at
MCC. She is the author of the widely-read book, Artificial
Intelligence, as well as numerous technical papers.
Course Description: This tutorial will provide an introduction to the
important concepts and techniques of Artificial Intelligence (AI).
The major topics are: What is an AI technique?; Problem solving as
heuristic search; Heuristic search techniques such as hill climbing,
best first search, problem decomposition, constraint satisfaction;
Knowledge representation and inference including logic-based methods,
default reasoning, slot and filler methods and production rules.
IB: Database Management
Instructor: Professor Michael Stonebraker, UC - Berkeley, California
Dr. Stonebraker is a full professor of Computer Science at the
University of California, Berkeley. He is the original implementor of
the INGRES system and is a co-founder of Relational Technology, Inc.
which markets INGRES to engineering and business users.
Course Description: This tutorial will provide an overview of
Database Management. The major topics are: Traditional data models
and query languages including network, hierarchical, and relational
models; Database services such as transaction management, query
optimization, protection, views, integrity control; New approaches to
data models including semantic data models, logic programming, CAD/CAM
data models; Themes of Expert Database Systems such as extended views,
active databases, procedural objects, inheritance, and new data types.
1:30 pm - 5:00 pm Afternoon Parallel Tutorials II
IIA: Expert Systems -- An Introduction
Instructor: Professor Charles Rich, MIT, Cambridge, Massachusetts
Dr. Rich is Principal Research Scientist at the Artificial
Intelligence Laboratory of Massachusetts Institute of Technology. He
is co-principal investigator of the Programmer's Apprentice Project at
MIT.
Course Description: This is an introductory tutorial for those who
intend to develop or manage the development of new expert systems, as
well as those who want to evaluate the potential for using expert
systems in their own work. No previous background is assumed. The
topics include: Expert systems features including expert-level
performance, symbolic and heuristic information, and the separation of
Knowledge from Inference; Application areas for expert systems;
Programming techniques used for expert system development including
rules, frames, logic programming; and the use of incremental
prototypes for expert systems development.
IIB: Logic Programming and Databases
Instructor: Dr. Steve Hardy, Teknowledge, Inc., Palo Alto, California
Dr. Hardy is currently Product Manager at Teknowledge. He was the
Principal Designer of the M.1 Expert System Shell.
Course Description: This tutorial will provide an overview of the
important concepts relating to logic programming and databases. The
major topics are: Logic and databases; Prolog: A logic language;
Prolog: Its practical difficulties; High-level logic languages
including shells for Prolog; Current applications; What the future
holds.
Wednesday, April 2, 1986
8:00 am - 12:00 pm Registration
8:45-9:00 am Opening Remarks
Chairman: Donald A. Marchand, University of South Carolina, USA
9:00-10:00 am Keynote Address
Chairman: Larry Kerschberg, University of South Carolina, USA
To be announced
Ronald J. Brachman and Hector J. Levesque*, AT&T Bell Labs, USA
and University of Toronto*, Canada
10:00-10:30 am Coffee Break
10:30 am - 12:00 pm Session 1: Object-Oriented Systems
Chairman: Reid Smith, Schlumberger-Doll Research, USA
Object Prototypes and Database Samples for Expert Database Systems
G.T. Nguyen, IMAG, Universite de Grenoble, France
Displaying Database Objects
D. Maier, P. Nordquist* and M. Grossman, Oregon Graduate
Center and Intel Corp.*, USA
A Personal Universal Filing System Based on the Concept-Relation Model
H. Fujisawa, A. Hatakeyama and J. Higashino, Hitachi, Ltd., Japan
12:00-1:30 pm Lunch
1:30-3:00 pm Afternoon Parallel Sessions
Session 2A: Theory of Knowledge Bases
Chairman: Setsuo Ohsuga, University of Tokyo, Japan
Control of Processes by Communication over Ports as a Paradigm for
Distributed Knowledge-Based System Design
A.S. Cromarty, Advanced Information and Decision Systems, USA
Representing and Manipulating Knowledge Within "Worlds"
H. Kaufmann and A. Grumbach*, C.G.E. -- Laboratoires de
Marcoussis and Ecole Superieure d'Electricite*, France
Completeness and Consistency in Knowledge Base Systems
W. Marek, University of Kentucky, USA
Session 2B: Intelligent Database Interfaces
Chairman: Bonnie L. Webber, University of Pennsylvania, USA
Supporting Goal Queries in Relational Databases
A. Motro, University of Southern California, USA
Design and Experimentation of IR-NLI: An Intelligent User Interface
to Bibliographic Databases
G. Brajnik, G. Guida and C. Tasso, Universita di Udine, Italy
When does Non-Linear Text Help?
D. Shasha, New York University, USA
3:00-3:30 pm Coffee Break
3:30-5:00 pm Panel Session: Are Data Models Dead?
Chairman: Michael L. Brodie, Computer Corporation of America, USA
6:30-9:30 pm Great Gatsby Night
Thursday, April 3, 1986
8:30-10:00 am Session 4: Knowledge System Architectures
Chairman: Michele Missikoff, IASI-CNR, Italy
The Do-Loop Considered Harmful in Production System Programming
M. van Biema, D.P. Miranker and S.J. Stolfo, Columbia
University, USA
A Relational Representation for Knowledge Bases
R.M. Abarbanel and M.D. Williams, IntelliCorp, USA
Interfacing Relational Databases and Prolog Efficiently
S. Ceri, G. Gottlob and G. Wiederhold, Stanford University, USA
10:00-10:30 am Coffee Break
10:30 am - 12:00 pm Morning Parallel Sessions
Session 5A: Deductive Databases
Chairman: D. Stott Parker, Jr., UCLA and Silogic, USA
Negative Queries in Horn Databases
Shamim Naqvi, AT&T Bell Laboratories, USA
Safety and Compilation of Non-Recursive Horn Clauses
Carlo Zaniolo, MCC, USA
Recursive Axioms in Deductive Databases: The Query/Subquery Approach
L. Vieille, European Computer-Industry Research Center (ECRC),
West Germany
Session 5B: Reasoning in Expert Database Systems
Chairman: James Bezdek, University of South Carolina, USA
Evaluation of Recursive Queries Using Join Indices
P. Valduriez and H. Boral, MCC, USA
An Algebraic Approach to Recursive Inference
Y.E. Ioannidis and E. Wong, University of California - Berkeley, USA
A Fuzzy Relational Calculus
A. Zvieli, Louisiana State University, USA
12:00-1:30 pm Lunch
1:30-3:30 pm Afternoon Parallel Sessions
Session 6A: Semantic Query Optimization
Chairman: Matthias Jarke, New York University, USA
A Knowledge-Based Approach to Query Optimization
C.V. Malley and S.B. Zdonik, Brown University, USA
Semantic Query Optimization: Additional Constraints and Control
Strategies
U.S. Chakravarthy, J. Minker and J. Grant*, University of
Maryland and Towson State University*, USA
Integrity Enforcement on Prolog-based Deductive Databases
H. Decker, ECRC, West Germany
Session 6B: Knowledge-Based Modeling and Design
Chairman: Edgar H. Sibley, George Mason University, USA
Modeling Linguistic User Interfaces
M. Pilote, Toronto, Canada
How Abstraction Can Reduce Ambiguity in Explanation Problems
S. Letovsky, Yale University, USA
A Framework for Design/Redesign Experts
A.L. Furtado, M.A. Casanova* and L. Tucherman*, Pontificia
Universidade Catolica do Rio de Janeiro and IBM do Brasil*, Brazil
Flexible Interfaces and the Support of Physical Database Design Reasoning
M. Prietula and G. Dickson*, Dartmouth College and University
of Minnesota*, USA
3:30-4:00 pm Coffee Break
4:00-5:30 pm 7. Panel Session: Inference in Expert Database Systems
Chairman: Herve Gallaire, ECRC, West Germany
6:00-9:00 pm Red, White and Bluegrass Night
Friday, April 4, 1986
8:00-10:00 am Session 8: Knowledge Management
Chairman: Alain Pirotte, Philips Research Lab, Belgium
An Analysis of Rule Indexing Implementations in Data Base Systems
M. Stonebraker, T. Sellis and E. Hanson, UC-Berkeley, USA
Querying a Rule Base
L. Cholvy and R. Demolombe, Centre d'Etudes et de Recherches
de Toulouse, France
Updating Propositional Formulas
A. Weber, Universitat Karlsruhe, West Germany
Invited Lecture: Beyond the Knowledge Level
Mark. S. Fox, Carnegie-Mellon University, USA
10:00-10:30 am Coffee Break
10:30 am - 12:00 pm 9. Panel Session: Open Issues in Expert Database Systems
Chairman: Robert Balzer, USC- Information Sciences Institute, USA
12:00-12:15 pm Closing Ceremony
Chairman: Donald A. Marchand, University of South Carolina, USA
All Payments must be made in US Currency. Make checks payable to the
Institute of Information Management, Technology and Policy and mail the
form to
Ms. Libby Shropshier, Conference Treasurer
Institute of IMTP
College of Business Administration
University of South Carolina
Columbia, SC, 29208
Telephone: (803) 777-5766
[The original included conference and hotel registration forms. -- KIL]
------------------------------
End of AIList Digest
********************
∂26-Feb-86 1512 LAWS@SRI-AI.ARPA AIList Digest V4 #36
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Feb 86 15:12:41 PST
Date: Wed 26 Feb 1986 10:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #36
To: AIList@SRI-AI
AIList Digest Wednesday, 26 Feb 1986 Volume 4 : Issue 36
Today's Topics:
Seminars - Solution to the Self-Referential Paradoxes (CSLI) &
Approximate Deduction in Single Evidential Bodies (SRI) &
Refutation Method for Horn Clauses with Equality (UPenn) &
Persistent Memory (SU),
Conferences - Suggestions for AAAI-86 &
Theoretical Issues in NL Processing
----------------------------------------------------------------------
Date: Mon 24 Feb 86 09:04:40-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Solution to the Self-Referential Paradoxes (CSLI)
CSLI COLLOQUIUM
LOGIC OF POINTERS AND EVALUATIONS:
THE SOLUTION TO THE SELF-REFERENTIAL PARADOXES
Haim Gaifman
Mathematics Department, The Hebrew University, Jerusalem, Israel
Visiting at SRI
February 27, 1986
Ventura Hall
Imagine the following exchange:
Max: What I am saying at this very moment is nonsense.
Moritz: Yes, what you have just said is nonsense.
Evidently Max spoke nonsense and Moritz spoke to the point. Yet Max
and Moritz appear to have asserted the same thing, namely: that Max
spoke nonsense. Or consider the following two lines:
line 1: The sentence written on line 1 is not true.
line 2: The sentence written on line 1 is not true.
Our natural intuition is that the self-referring sentence on line 1 is
not true (whatever sense could be made of it). Therefore the sentence
on line 2, which asserts this very fact, should be true. But what is
written on line 2 is exactly the same as what is written on line 1.
I shall argue that the unavoidable conclusion is that truth values
should be assigned here to sentence-tokens and that any system in
which truth is only type-dependent (e.g., Kripke's system and its
variants) is inadequate for treating the self-referential situation.
Since the truth value of a token depends on the tokens to which it
points, whose values depend in their turn on the tokens to which they
point, and so on, the whole network of pointings (which might include
complicated loops) must be taken into account.
I shall present a simple formal way of representing such networks and
an algorithm for evaluating the truth values. On the input 'the
sentence on line 1' it returns GAP but on the input 'the sentence on
line 2' it returns TRUE. And it yields similarly intuitive results in
more complicated situations. For an overall treatment of
self-reference the tokens have to be replaced by the more general
pointers. A pointer is any object used to point to a sentence-type (a
token is a special case of a pointer: it points to the sentence of which
it is a token). Calling a pointer is like a procedural call in a
program; eventually a truth value (TRUE, FALSE or GAP) is returned,
which is the output of the algorithm.
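A heavily simplified rendering of the evaluation algorithm may help. The
sketch below is my illustration, restricted to networks in which every
token's sentence reads "the token t is not true" (Gaifman's actual algorithm
handles arbitrary pointer networks and sentence forms): tokens lying on a
pointer loop receive GAP, and a token pointing at a FALSE or GAP token is
TRUE, since "not true" covers both.

```python
# `points` maps each token to the token its sentence is about; every
# sentence reads: "the token it points to is not true."

def on_loop(tok, points):
    """True iff following pointers from `tok` leads back to `tok`."""
    seen = []
    t = tok
    while t not in seen:
        seen.append(t)
        t = points[t]
    return tok in seen[seen.index(t):]   # members of the loop reached

def evaluate(tok, points):
    """Return "TRUE", "FALSE", or "GAP" for the token `tok`."""
    if on_loop(tok, points):
        return "GAP"                     # loop members get GAP
    v = evaluate(points[tok], points)    # every pointer chain ends in a loop
    return "TRUE" if v in ("FALSE", "GAP") else "FALSE"

# The two-line example from the talk, plus a third token about line 2:
points = {1: 1, 2: 1, 3: 2}
# token 1: "the sentence on line 1 is not true"  ->  GAP
# token 2: "the sentence on line 1 is not true"  ->  TRUE
# token 3: "the sentence on line 2 is not true"  ->  FALSE
```

This reproduces the intuition above: the self-pointing token on line 1 gets
GAP, while the token on line 2, pointing at it from outside the loop, is TRUE.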
I shall discuss some more recent work (since my last SRI talk) -
variants of the system and its possible extensions to mathematically
powerful languages. Attempts to make such comprehensive systems throw
new light on the problem of constructing "universal languages".
------------------------------
Date: Mon 24 Feb 86 15:00:13-PST
From: RUSPINI@SRI-AI.ARPA
Subject: Seminar - Approximate Deduction in Single Evidential Bodies (SRI)
AURA (Automated Uncertainty Reasoning Assembly) is about to resume its
AURAcles after some months of suspended animation. The next talk
(abstract below) is scheduled for next Friday, February 28, 10AM at
EK242. We plan to meet as regularly as possible each Friday thereafter
at the same time.
APPROXIMATE DEDUCTION IN
SINGLE EVIDENTIAL BODIES
Enrique H. Ruspini
Artificial Intelligence Center
SRI International
The main objective of this talk is the review of ongoing research on
the interpretation and manipulation of conditional evidence within
single evidential bodies. In the context of a single body of evidence,
conditional evidence is expressed as constraints on the possible
values of propositional truth under the assumption that a specific
proposition within the frame of discernment is known to be true. In
this context deductive inference consists of the combination of the
information about the probable truth of ground propositions (facts)
and conditional evidence (rules) to arrive at new (a posteriori)
estimates of propositional support. This process is both conceptually
and procedurally different from those undertaken when several bodies
of evidence are combined (e.g. using the Dempster Combination Rule).
The role of conditional evidence constraints (henceforth called
approximate or uncertain rules) is examined from the viewpoint of both
the theory of interval probabilities and the Dempster-Shafer Calculus
of Evidence. These approaches to the representation and analysis of
uncertain information will be briefly described together with their
theoretical underpinnings. Several possible interpretations of
approximate rules will be discussed and compared. Possible approaches
for the automation of approximate deduction (under each
interpretation) will also be presented.
Time permitting, the role of these results in the generalization of
Reynold's approach to the generation of support and elementary mass
measures will also be discussed.
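For readers unfamiliar with the Dempster Combination Rule mentioned above, a
minimal sketch (my illustration, not Ruspini's formulation): masses over
focal subsets of the frame are multiplied pairwise, mass landing on the
empty intersection is conflict, and the remainder is renormalized.

```python
def dempster(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + p * q
            else:
                conflict += p * q              # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting bodies of evidence")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

frame = frozenset({"rain", "sun"})
m1 = {frozenset({"rain"}): 0.6, frame: 0.4}    # one body of evidence
m2 = {frozenset({"sun"}): 0.5, frame: 0.5}     # another body of evidence
m = dempster(m1, m2)
# m[{"rain"}] = 0.3/0.7, m[{"sun"}] = 0.2/0.7, m[frame] = 0.2/0.7
```

The point of the abstract is that this cross-body combination is a different
operation from propagating conditional evidence within a single body.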
------------------------------
Date: Mon, 24 Feb 86 17:25 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Refutation Method for Horn Clauses with Equality (UPenn)
Forwarded From: Dale Miller <Dale@UPenn> on Mon 24 Feb 1986 at 17:08
UPenn Math-CS Logic Seminar
A Refutation Method for Horn Clauses with Equality using E-unification
Jean H. Gallier (with Stan Raatz)
Tuesday, 25 February 1986, 4:30 - 6:00, 4E17 DRL
A refutation method for equational Horn clauses (Horn clauses with or
without equational atoms) is investigated. This method combines standard
SLD-resolution and unification modulo equations. In the case of ground Horn
clauses, unsatisfiability of a set of Horn clauses with equality is
decidable in time O(n log n). In the general case, however, even though the
refutation method itself is complete, unification modulo equations is
undecidable. In fact, unification modulo equations is NP-complete even in
the case of ground equations. Considering this point, we explore subcases
of equational Horn clauses for which unification modulo equations is
tractable, and consider the implications for logic programming. Finally, we
compare this new method with other existing methods.
** Next week: G. Rosolini from CMU will speak on "Categories for Partial
Computations".
------------------------------
Date: Mon, 24 Feb 86 23:02:40 pst
From: David Cheriton <cheriton@su-pescadero.arpa>
Subject: Seminar - Persistent Memory (SU)
PERSISTENT OBJECT SYSTEM FOR SYMBOLIC COMPUTERS
Satish Thatte
Texas Instruments
Thurs. Feb 27th at 4:15 pm.
MJH 352
(Part of Distributed Systems Group Project meeting)
The advent of automatically managed, garbage-collected virtual memory
was crucial to the development of today's symbolic processing. No
analogous capability has yet been developed in the domain of
"persistent" objects managed by a file system or database. As a
consequence, the programmer is forced to flatten rich structures of
objects resident in virtual memory before the objects can be stored in a
file system or conventional database. This task puts a great burden on
the programmer and adversely affects system performance.
A persistent object system that extends the automatic storage management
concepts of a symbolic computer to the domain of persistent objects will
be presented. The system supports long-term, reliable retention of
richly structured objects in virtual memory itself, without resorting to
a file system. Therefore, the system requires a crash recovery scheme
at the level of virtual memory.
The persistent object system is based on a uniform memory abstraction,
which eliminates the distinction between transient objects (data
structures) and persistent objects (files and databases), and therefore,
allows the same set of powerful and flexible operations with equal
efficiency on both transient and persistent objects from a programming
language such as Lisp or Prolog, without requiring a special-purpose database
language. It is expected that the exploitation of such a capability
will lead to significant breakthroughs in knowledge/data base
management.
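The flattening burden described above can be seen in miniature with any language that serializes object graphs. The hypothetical sketch below uses Python's pickle as a stand-in for a persistent store that preserves rich structure, including sharing and cycles, which a hand-flattened file format would have to encode explicitly.

```python
import pickle

class Node:
    """A richly linked in-memory object of the kind that is awkward to
    flatten by hand into a conventional file or relational format."""
    def __init__(self, name):
        self.name = name
        self.links = []

a, b = Node("a"), Node("b")
a.links.append(b)
b.links.append(a)          # a cycle: naive flattening would loop forever

blob = pickle.dumps(a)     # structure, sharing, and cycles are preserved
a2 = pickle.loads(blob)    # the cycle survives the round trip
```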
------------------------------
Date: 25 Feb 86 1016 PST
From: Bob Filman <REF@SU-AI.ARPA>
Subject: Conference - Suggestions for AAAI-86
The deadline for workshop and panel proposals for AAAI-86 is
fast approaching. (Officially, March 1, but we'll give a
few days' grace to good ideas.)
Requests for ENGINEERING panels and workshops should be sent to:
Tom Kehler
Program Co-Chairman for AAAI-86
Intellicorp
1975 El Camino Real West
Mountain View, California 94040
Kehler@USC-ECL.ARPA
Requests for SCIENTIFIC panels and workshops should be sent to:
Stan Rosenschein
Program Co-Chairman for AAAI-86
SRI International
333 Ravenswood Avenue
Menlo Park, California 94025
Stan@SRI-AI.ARPA
------------------------------
Date: Mon, 24 Feb 86 15:58:56 mst
From: "Yorick Wilks <yorick@nmsu>" <yorick@CSNET-RELAY.ARPA>
Subject: Conference - Theoretical Issues in NL Processing
TINLAP3
Third workshop on
Theoretical Issues in Natural Language Processing.
Las Cruces, New Mexico
January 7-9, 1987.
The workshop, supported by the Association for Computational
Linguistics, will follow the format of its predecessors at
MIT (1975), Champaign-Urbana (1978) and Nova Scotia (1985):
panels of distinguished figures in computational linguistics,
AI, and related disciplines will discuss the major topics at issue.
Preliminary registration information: Yorick Wilks, Box3CRL, NMSU, Las
Cruces, NM 88001, or CSNET:az@nmsu.
------------------------------
End of AIList Digest
********************
∂27-Feb-86 0523 LAWS@SRI-AI.ARPA AIList Digest V4 #37
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 Feb 86 05:23:28 PST
Date: Wed 26 Feb 1986 22:08-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #37
To: AIList@SRI-AI
AIList Digest Thursday, 27 Feb 1986 Volume 4 : Issue 37
Today's Topics:
Queries - Reviewers for Expert Systems in Government &
Civil Engineering CAD/CAE/Expert Systems &
Theorem Provers & Knowledge Representation Translation &
Rete Algorithm & Lisp for the PRIME & Dec AI VaxStation &
ICAI & Visual Programming Languages & Associative Memory &
Prolog Books
----------------------------------------------------------------------
Date: Wed, 26 Feb 86 10:26:56 -0500
From: Duke Briscoe <duke@mitre.ARPA>
Subject: Reviewers for ESIG papers
Volunteers are needed to act as reviewers for the Second Expert Systems
in Government Conference, which will be held from Oct. 20-24, 1986.
The topics of the conference are knowledge based applications and
supporting technologies. A full description of the conference was given
in the Vol. 3 Issue 186 AIList, on December 15. If you wish to be a
reviewer, please identify your interests and send your name, address, and
phone number to karna@mitre or use US mail to
Dr. Kamal N. Karna
AI Center
The Mitre Corporation
1820 Dolley Madison Blvd.
McLean, VA 22102
------------------------------
Date: 25 Feb 86 07:59:56 EST
From: Mary.Lou.Maher@CIVE.RI.CMU.EDU
Subject: civil engineering expert systems
I am preparing a report for the ASCE and US Army Corps on the use of expert
system techniques in civil engineering. I would appreciate a response from
anyone active in this area; all those who respond will be put on a mailing
list to receive the completed report. Some specific civil engineering
domains are: structural engineering, geotechnical engineering, construction
engineering, transportation engineering, and environmental engineering.
------------------------------
Date: 17 Feb 86 17:28:45 GMT
From: ulysses!mhuxr!mhuxt!houxm!whuxl!whuxlm!akgua!gatech!gitpyr!allen
@ucbvax.berkeley.edu
Subject: Looking for publication
I am trying to locate a source for a publication referenced as
"Knowledge Engineering in Computer-Aided Design", IFIP, Sep-1984
I would also be interested in any work going on in the area of expert
systems in the field of Civil Engineering Computer Aided Engineering.
In particular, I would be interested in learning more about work
going on at Carnegie-Mellon on KADBASE. (H.C. Howard, D.R. Rehak, are
you out there ?)
--
"It's quite easy, if you don't know how.
That's the important bit. Be not at all
sure how you're doing it."
-Arthur Dent
P. Allen Jensen
Manager, Systems Division
GTICES Systems Laboratory
Department of Civil Engineering
Georgia Institute of Technology
Atlanta Georgia, 30332-0355
...!{akgua,allegra,amd,hplabs,ihnp4,masscomp,ut-ngp}!gatech!gitpyr!allen
------------------------------
Date: 15 Feb 86 18:59:40 GMT
From: ihnp4!stolaf!mmm!umn-cs!hyper!mark@ucbvax.berkeley.edu (Mark Mendel)
Subject: WANTED: Theorem Provers
I would like to get my hands on a PD or otherwise free theorem prover.
Anything from resolution to Boyer-Moore would be OK. Lisp preferable, though
C would be OK.
Please respond via mail.
Also, I think that such a thing really should be in the mod.sources archive.
So if you offer me something you've written, please indicate whether it's OK if
I submit it.
Thanks in advance,
Mark G. Mendel
{ihnp4,umn-cs}!hyper!mark
------------------------------
Date: Mon, 24 Feb 86 10:08 EST
From: Kurt Godden <godden%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Knowledge Representation and Translation
Could anyone send to me or post to the net references on conversion of
knowledge from one representational structure to another? For example,
translating between frames and semantic nets would be of interest. If not
directly related to explicit translation, articles discussing >formal<
(non-)equivalence between/among various representations of knowledge is
also of interest. If there are no postings directly to the net, I will
summarize and post anything of general interest I may receive.
-Kurt Godden
godden.gmr@csnet-relay (or, if that doesn't work: godden%gmr@csnet-relay)
------------------------------
Date: Wed 26 Feb 86 17:02:38-PST
From: Matt Heffron <BEC.HEFFRON@USC-ECL.ARPA>
Subject: Query -- Rete Algorithm
Would someone please send me the reference(s) describing the Rete algorithm.
Also, any words of wisdom from people who have tried/succeeded in
implementing the algorithm would be appreciated. Reply to me directly at:
BEC.HEFFRON@USC-ECL.ARPA
or,
Matt Heffron
Beckman Instruments, Inc.
2500 Harbor Blvd. MS X-11
Fullerton, CA 92634
Thanks,
Matt Heffron
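For context, the core idea behind the Rete algorithm is to cache partial matches so that rule conditions are not re-tested against the whole working memory on every cycle. The toy sketch below shows alpha memories only (no join network) and is a hypothetical illustration, not any particular published implementation; the fact and condition formats are invented.

```python
class AlphaMemory:
    """Cache of working-memory elements matching one rule condition."""
    def __init__(self, test):
        self.test = test          # predicate over a fact (a dict here)
        self.items = []           # matching facts seen so far

class MiniRete:
    def __init__(self):
        self.memories = []

    def condition(self, test):
        mem = AlphaMemory(test)
        self.memories.append(mem)
        return mem

    def add_fact(self, fact):
        # each new fact is filtered once into the memories it matches,
        # instead of re-matching every rule against all of working memory
        for mem in self.memories:
            if mem.test(fact):
                mem.items.append(fact)

net = MiniRete()
red = net.condition(lambda f: f.get("color") == "red")
big = net.condition(lambda f: f.get("size", 0) > 10)
net.add_fact({"color": "red", "size": 3})
net.add_fact({"color": "red", "size": 12})
```

A full Rete adds beta (join) memories that combine tokens across conditions; the incremental caching shown here is what makes repeated match cycles cheap.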
------------------------------
Date: Wed, 19 Feb 86 12:18:07 CST
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Lisp for the PRIME?
I would like to make the following inquiry for a friend.
Does anyone know of any versions of LISP which will run
on the Prime 9750? They are particularly interested in
getting a version of Common Lisp if this is possible.
Also, are there any expert-system tools such as OPS5 which will
run on the same machine?
Post your answers on the List or send them to me.
Thanks a lot.
glenn
------------------------------
Date: Wed, 26 Feb 86 15:18:08 est
From: nikhil@NEWTOWNE-VARIETY.LCS.MIT.EDU (Rishiyur S. Nikhil)
Subject: Opinion on Dec AI VaxStation?
A friend of mine from India (Prof. Rajeev Sangal, Indian Institute of
Technology, Kanpur) is looking into buying Lisp machines for AI research.
Because of lack of maintenance, support etc. in India, he must rule out
Symbolics, LMI, TI, Xerox, etc. The one exception seems to be an AI
VaxStation from DEC (DEC is represented in India).
So, he would like to obtain opinions about the DEC AI VaxStation from anyone
who has used it. If you are/have been a user, I would appreciate it if you
could send me your appraisal. Reply to (ARPAnet):
nikhil@xx.lcs.mit.edu
and I will pass it on to him. If there is interest, I can also summarize my
findings to this list.
Thanks in advance for your help.
Rishiyur Nikhil
------------------------------
Date: Wed, 26 Feb 86 22:23:38 -0500
From: bradley@ATHENA.MIT.EDU
Subject: ICAI
As part of the newly formed Intelligent Engineering Systems
Laboratory at MIT, I am working on (hopefully) intelligent tutoring
systems for engineering applications. I was curious what sorts of
tutoring strategies and knowledge representation schemes other researchers
in the ICAI area are using. If anyone would be so kind as to send a
description of what they've found works/doesn't work for the applications
they are working on, or even a sample system for me to play with, with
comentary, I would be eternally grateful.
Also, is anyone interested in forming a mail group to discuss
ICAI issues (and not bore everyone else)?
-Steve Bradley
[The mail group already exists in the form of AI-Ed@SUMEX-AIM.
I have forwarded this message to them. -- KIL]
------------------------------
Date: Wed 26 Feb 86 10:48:09-PST
From: Marvin Zauderer <ZAUDERER@SU-SUSHI.ARPA>
Subject: Visual Programming Languages and AI
I'm starting some work on a visual programming language (VPL); in
particular, since I'm disappointed with the current state of software
authoring systems for educators, I'm planning to build such a system that
will run in/on top of an existing VPL.
I'm now in the process of doing some background research, and I've
assembled a fairly large number of references on the topics of
VPLs and authoring systems.
As you might imagine, the search space for the former topic is rather
immense, since the study of VPLs involves the study of so many
disciplines (e.g. cognitive science, AI, human-computer interaction,
programming environments, interactive graphics, visual thinking, etc.). Of
course, this is also precisely why I'm so interested in VPLs and VPL
applications.
I'd welcome any assistance in making the search space smaller: pointers
to references or to helpful people would be much appreciated. A nice side
effect of this search is the bibliography I'm creating; I will post it if
there is sufficient interest. Also, I'd be interested in starting a
discussion about VPLs and the connection between VPLs and AI.
As a final point, I've questioned whether or not this message belongs
in AIList, and I've decided that it does. I've reasoned that, in
building such systems, one must think about how people think,
which is precisely the kind of thing AI researchers do. This may be a
rather flimsy justification, but I figure the worst that can happen is
an avalanche of angry mail.
Also: one would hope that the results of this thinking would go into the
kind of authoring system I'm describing. Since this seems relevant to the
topic of AI in Education, we've had some interesting discussions about
these issues recently on the AI-ED list. I still think there may be a
number of AIList readers interested in VPLs (and the associated issues) who
do not receive AI-ED.
Please correct/criticize me if you think a discussion of these issues does
not belong on AIList -- I don't want to clutter up the netwaves.
Cheers,
Marvin Zauderer
E-Mail: Zauderer@SU-SUSHI.ARPA
USMail: c/o IRIS-FAD
Cypress Hall, Room E-7
Stanford University
Stanford, CA 94305
Telephone: (415) 497-4540
(415) 725-3159
------------------------------
Date: 24 Feb 86 22:59:07 GMT
From: decvax!wanginst!ulowell!dobro@ucbvax.berkeley.edu (Chet Dobro)
Subject: Associative Memory
I have a question/observation/assumption that may be totally invalid, and
I fully expect to get jumped all over about, but here it is:
One of the biggest problems AI'ers seem to be having with their machines is
one of data access. Now, a human [or other sentient life-form :-)] has a
large pool of experience (commonly referred to as a swamp) that he/she/it has
access to.
It is linked together in many obscure ways (as shown by word-association
games) so that for any given thought (or problem) there are a vast number
(usually) of (not-necessarily) connected replies.
Thinking of that swamp as a form of data-base, does the problem then boil
down to one of finding a path-key that would let you access all of the
cross-references quickly?
Thoughts, please? (Hopefully constructive...)
Gryphon
------------------------------
Date: 26 Feb 86 14:21:00 EST
From: "INFO1::ELDER" <elder@info1.decnet>
Reply-to: "INFO1::ELDER" <elder@info1.decnet>
Subject: Prolog Books
Thanks.
P.S. If you reply to me, please drop off the '.DECNET' that may appear
in the header of my message. Our mailer has been acting funny lately.
My address is ELDER@WPAFB-INFO1 and not ELDER@WPAFB-INFO1.DECNET.
Greg Elder
------------------------------
End of AIList Digest
********************
∂27-Feb-86 0923 LAWS@SRI-AI.ARPA AIList Digest V4 #38
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 Feb 86 09:23:11 PST
Date: Wed 26 Feb 1986 22:35-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #38
To: AIList@SRI-AI
AIList Digest Thursday, 27 Feb 1986 Volume 4 : Issue 38
Today's Topics:
Query - Prolog Books,
AI Tools - Pointer to Logo & Arity/Prolog 4.0,
Binding - Ross Quinlan,
Humor - "Real" Story Behind MRS's Name & NL Dialogue System,
Comment - TI's Progress (SI Interactions Review),
Cognitive Psychology - Knowledge Structures,
Expert Systems - Software Engineering,
Knowledge Representation - The Community Authoring Project
----------------------------------------------------------------------
Date: 26 Feb 86 14:21:00 EST
From: elder@WPAFB-INFO1.ARPA
Subject: Prolog Books
Could someone recommend a good list of books about Prolog (besides
"Programming in Prolog" by Clocksin) which would be good for someone
to read who is just learning the language?
Greg Elder
[This message was accidentally truncated in the last digest due to
the lack of a blank line following the header. -- KIL]
------------------------------
Date: 21 Feb 86 14:02:04 GMT
From: rochester!ritcv!rocksvax!rocksanne!sunybcs!ellie!rapaport@seismo
(William J. Rapaport)
Subject: Re: Re: Pointers to Logo?
>
> >> The only "texts" on Logo which I have thus far been able to locate
> >> are of the "How to Teach Logo to Your First Grade Class" variety.
> >> --
> >> Michael J. Hartsough
Try Brian Harvey, COMPUTER SCIENCE LOGO STYLE,
a series of 3 books, 2 of which have appeared, published
by MIT Press (isbn for the first, called "Intermediate
Programming" is 0-262-58072-1).
--
William J. Rapaport
Assistant Professor
Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260
(716) 636-3193, 3180
uucp: ...{allegra,decvax,watmath}!sunybcs!rapaport
...{cmcl2,hao,harpo}!seismo!rochester!rocksvax!sunybcs!rapaport
cs: rapaport@buffalo
arpa: rapaport%buffalo@csnet-relay
bitnet: rapaport@sunybcs
------------------------------
Date: 21 Feb 86 06:59:44 GMT
From: sdcsvax!noscvax!ogasawar@ucbvax.berkeley.edu (Todd H. Ogasawara)
Subject: Arity/Prolog 4.0 users out there?
I just received the Arity/Prolog 4.0 update to their interpreter and
compiler for the IBM PC a little while ago and have found this
implementation to be even better and faster than the last (which was
very good).
Would be very interested to know if other netlanders are using
Arity/Prolog and, if so, what you are doing with it.
...todd
Todd Ogasawara, Computer Sciences Corp.
NOSC-Hawaii Laboratories
UUCPmail: {akgua,allegra,decvax,ihnp4,ucbvax}!sdcsvax!noscvax!ogasawar
MILNET: OGASAWAR@NOSC
------------------------------
Date: 17 Feb 86 16:02:00 GMT
From: pur-ee!uiucdcs!uiucdcsb!mozetic@ucbvax.berkeley.edu
Subject: Binding - Ross Quinlan
Re: Need source of ID3 for Machine Learning
Quinlan's address is:
Ross Quinlan,
Head, School of Computing Science,
New South Wales Institute of Technology,
P.O. Box 123,
Broadway, 2007 New South Wales,
Australia
------------------------------
Date: Tue, 25 Feb 86 14:24:03 est
From: Russell Greiner <greiner%utai%toronto.csnet@CSNET-RELAY.ARPA>
Subject: "Real" Story behind MRS's name
> Date: Tue, 4 Feb 86 15:46:28 EST
> From: munnari!goanna.oz!wjb@seismo.CSS.GOV (Warwick Bolam)
> Subject: Correction to correction to name of MRS
>
> Is there anyone who REALLY knows what MRS stands for? I have a number of
> MRS documents and NONE of them says "MRS stand for ..."
Years ago, Mike Genesereth, Russ Greiner, and Dave Smith got together,
along with some other illustrious researchers, and decided to create
a new and better representation language. To achieve our original
objective of modifiability, the
Modifiable Representation System
was born. When we noticed that the only thing truly modifiable about it
was its name, it was rechristened the
Meta-level Representation System.
As this, too, seemed a bit misleading, we considered several other
names. Soon, we were forced to realize that we had an inherently
Misnamed Representation System,
which still seems its best name. (Of course, if this name really is
appropriate then it is, in fact, inappropriate. That, in turn, means
it is not misnamed, which means it is misnamed, which ...)
[Apology: The story above is basically correct; only the names have
been changed ...]
Russ Greiner
University of Toronto
(formerly of Stanford University).
------------------------------
Date: 20 Feb 86 20:59:21 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!michaelm@ucbvax.berkeley.edu
(michael maxwell)
Subject: Re: Dialogue help please needed ?
In article <720@aimmi.UUCP> c/o george@aimmi.UUCP (George Weir) writes:
>... if you have a system working which
>manages dialogue in of course natural language (complete with efficient
>interpreter/compiler), and it's able to cope with all known syntactic forms,
>as well as most semantics, please send me a copy...
My wife and I are currently working on such a system. The project name is
"SCOTT", which stands for "Self COmmunicating ToT." Our project has been
underway for just over three years now, not counting a nine month prototyping
period. Unfortunately, we are unable to post to the network...
Additionally, there are a few bugs, such as inappropriate case marking ("My
wanna go to the truck store!"), incorrect placement of negation ("My no wanna
go to sleep!"), "syllabic" metathesis ("You got for to buy me candy" = "You
_forgot_ to..."), etc.  We regard these as trivial problems, since the
problems which linguists acknowledge to be truly difficult (e.g. the semantics
of nonexistent entities, such as imaginary people that cause the breakage/
disappearance of objects, and such pragmatic issues as proper attachment of PPs
and extraposed relative clauses) appear to be well on the way to resolution.
We would also like to report that it has been great fun...
--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: Mon 24 Feb 86 09:20:59-PST
From: Tom Garvey <Garvey@SRI-AI.ARPA>
Subject: Re: Review - SI Interactions, 2/86
It sounds as if TI, for all their investment in AI, has made progress
toward the partial solution of two problems. Since this is about the
average number of examples required for receiving a Ph.D. in AI, they
seem to have partially fulfilled the requirements.
Clearly, expertise in AI marketing is what students should be striving
for today -- the state-of-the-art of the technology itself is of (at
best) secondary importance.
Cheers,
Tom
------------------------------
Date: Mon, 17 Feb 86 12:30:41 pst
From: decwrl!pyramid!hplabs!tektronix!uw-beaver!ssc-vax!bcsaic!pamp
@ucbvax.berkeley.edu
Subject: Re: Cognitive Psychology - Knowledge Structures
In article <8602100723.AA28871@ucbvax.berkeley.edu> you write:
>From: THOMPSON%umass-cs.csnet@CSNET-RELAY.ARPA
>
> I am looking for information about the knowledge structure
> differences of people who have different levels of expertise
> in a subject. For example, what is the difference in the
> knowledge structure of an "apprentice", a "journeyman",or a
> "master".
>
> Roger Thompson
> Thompson@UMASS
One that I can recommend right off hand is -
Kolodner, Janet L., 1984. Towards an understanding of the role of
experience in the evolution from novice to expert. In: Developments
in Expert Systems, M.J. Coombs, ed., Academic Press, pp. 95-116.
You might also look into Schank's work:
Schank, R.C., 1982. Dynamic Memory: A Theory of Learning in
People and Computers. Cambridge University Press, Cambridge.
P.M.Pincha-Wagener
------------------------------
Date: Sun, 23 Feb 86 18:00:35 est
From: Valerie Kierulf <ulysses!mcnc!unc!kierulfv@ucbvax.berkeley.edu>
Subject: Re: Expert Systems and Software Engineering
Jeg kan ikke hjelpe deg, men etter det som jeg ser, leser og hoerer, har
folkne som driver paa med AI aldri hoert noe om Software Engineering!
Jeg ville vaere veldig glad aa hoere av det motsatte !!!!
Translation: I cannot help you, but from what I see, read, and hear, the
people working on AI have never heard of Software Engineering! I would
be very glad to hear the opposite!!!!
Valerie Kierulf
------------------------------
Date: Fri, 21 Feb 86 18:12:28 est
From: Rob Jacob <jacob@nrl-mms.ARPA>
Subject: Expert Systems and Software Engineering
Saw your message about software engineering for expert systems on the
AIList...glad you asked.
Here at the Naval Research Laboratory Judy Froscher and I are trying to
work on just this problem. We are interested in how rule-based systems
can be built so that they will be easier to change. Our basic solution
is to divide the set of rules up into pieces and limit the connectivity
of the pieces.
I, too, would be very interested to hear about any other work in this
area. When we describe our work to people, we often hear "That is just
what we need...why isn't somebody working on this?" But we do not often
hear about other people actually working on this problem. Two you might
try are Gregg Vesonder at Bell Labs and Steve Fickas at University of
Oregon.
I'm going to attach a short abstract about our work to the end of this
message and some references.
Good luck,
Rob Jacob
ARPA: jacob@nrl-css
UUCP: ...!decvax!nrl-css!jacob
SNAIL: Code 7590, Naval Research Lab, Washington, D.C. 20375
Developing a Software Engineering Methodology for Rule-based Systems
Robert J.K. Jacob
Judith N. Froscher
Naval Research Laboratory
Washington, D.C.
Current expert systems are typically difficult to change once they are built.
The objective of this research is to develop a design methodology that will
make a knowledge-based system easier to change, particularly by people other
than its original developer. The basic approach for solving this problem is
to divide the information in a knowledge base and attempt to reduce the
amount of information that each single programmer must understand before he
can make a change to the expert system. We thus divide the domain knowledge
in an expert system into groups and then attempt to limit carefully and
specify formally the flow of information between these groups, in order to
localize the effects of typical changes within the groups.
By studying the connectivity of rules and facts in several typical rule-based
expert systems, we found that they seem to have a latent structure, which can
be used to support this approach. We have developed a methodology based on
dividing the rules into groups and concentrating attention on those facts
that carry information between rules in different groups. We have also
studied several algorithms for grouping the rules automatically and for
measuring coupling and cohesion of alternate rule groupings in a knowledge
base.
REFERENCES
J.N. Froscher and R.J.K. Jacob, "Designing Expert Systems for Ease of
Change," Proc. IEEE Symposium on Expert Systems in Government, Washington,
D.C., pp. 246-251, 1985.
R.J.K. Jacob and J.N. Froscher, "Developing a Software Engineering
Methodology for Rule-based Systems," 1985 Conference on Intelligent Systems
and Machines, Oakland University, 1985.
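One way to sketch the grouping step described in the abstract is to treat rules as nodes, link two rules when one's output fact feeds the other's input, and take connected components as candidate groups. The code below is a hypothetical illustration under that reading; it is not the NRL authors' actual algorithm, and their coupling/cohesion measures are not reproduced. Rule names and fact names are invented.

```python
from collections import defaultdict

def group_rules(rules):
    """rules: dict name -> (input_facts, output_facts).
    Returns connected components of the rule graph as a list of sets."""
    adj = defaultdict(set)
    names = list(rules)
    for i, r1 in enumerate(names):
        ins1, outs1 = map(set, rules[r1])
        for r2 in names[i + 1:]:
            ins2, outs2 = map(set, rules[r2])
            if (outs1 & ins2) or (outs2 & ins1):   # one feeds the other
                adj[r1].add(r2)
                adj[r2].add(r1)
    seen, groups = set(), []
    for r in names:                                 # depth-first components
        if r in seen:
            continue
        stack, comp = [r], set()
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                comp.add(n)
                stack.extend(adj[n])
        groups.append(comp)
    return groups

rules = {"r1": (["a"], ["b"]), "r2": (["b"], ["c"]), "r3": (["x"], ["y"])}
groups = group_rules(rules)
```

The facts that cross component boundaries would then be exactly the ones whose flow the methodology proposes to specify formally.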
------------------------------
Date: Thu, 20 Feb 86 15:45:31 pst
From: Bruce McHenry <bruce@sri-tsc.ARPA>
Subject: The Community Authoring Project
[Forwarded from the AI-Ed distribution by Laws@SRI-AI.]
A New R&D Program: The Community Authoring Project (CAP)
The goal of the CAP is to provide a system which a large number
of people can use to create and store a complex body of knowledge.
Such a body, because it is authored and edited by many people, will
address a wide variety of individual perspectives. Individuals will be
guided through this body with the help of user agents. The user agents
will correspond with "idea" agents which monitor the formation of
communities. While this approach applies to information and management
systems in general, the CAP aims to develop prototypes which can be
used in leading universities over the next few years. Such
universities will possess advanced workstations upon which CAP software
may run. The resulting community information system should provide
immediate benefits to teachers and students who may use it to create,
either alone or in conference, multimedia (visual & aural) "sections".
Sections may be embedded in each other and interactively created,
explored and manipulated. CAP technology will enable communities to
create broad-based bodies of knowledge in ways such that the
individual's "question in mind" can be readily addressed. The testbed
sites will also provide attractive cultures for research into AI (i.e.
knowledge based, natural language and self-organizing) systems. However,
the CAP's design philosophy is based on a pragmatic view of common
human methods for locating and disseminating information. Its basis in
community participation provides a radical departure from current
methods of authoring interactive materials and it is expected that the
CAP will dramatically influence the development of interactive media
such as digital compact discs.
Bruce McHenry
------------------------------
End of AIList Digest
********************
∂27-Feb-86 1407 LAWS@SRI-AI.ARPA AIList Digest V4 #39
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 Feb 86 14:03:34 PST
Date: Thu 27 Feb 1986 09:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #39
To: AIList@SRI-AI
AIList Digest Thursday, 27 Feb 1986 Volume 4 : Issue 39
Today's Topics:
Seminars - Hierarchical Planning and Allocation (USC) &
Cerebral Lateralization (UCB) &
Off-Line Programming of Robots (UPenn) &
The Limits of Calculative Rationality (SU) &
Intelligent Concept Design Assistant (Edinburgh) &
The Purposes of Vision (Edinburgh)
----------------------------------------------------------------------
Date: 26 Feb 1986 13:09-PST
From: gasser@usc-cse.usc.edu
Subject: Seminar - Hierarchical Planning and Allocation (USC)
USC DISTRIBUTED PROBLEM SOLVING GROUP MEETING:
Planning and Resource Allocation in Time- and Cost-Constrained
Environments : A Hierarchical Approach
Norman Sadeh
Ph.D. Student, CS Dept., USC
Wednesday, 3/5/86, 3:00 - 4:00 PM
Seaver 319
Real-life planners should be provided with the ability to allocate resources
in time- and cost-constrained environments. A flexible manufacturing system
is an example of such an environment.
We will describe a hierarchical approach to the problem of allocating
resources during the planning process. We believe that the concept of
resource is directly related to the level of detail of the plan. The same
object can be considered as a resource at a higher level of abstraction
and as a common object at a lower level. By allowing the planner to decide
upon which particular instances of certain high level resources to allocate
to some high level tasks, taking into account time and cost constraints
posted on the overall plan, we will drastically reduce the search space to
be investigated.
Both centralized and distributed approaches will be considered.
Questions: Dr. Les Gasser, CS Dept., USC (213) 743-7794 or
Norman Sadeh: sadeh@usc-cse.usc.edu
------------------------------
Date: Wed, 26 Feb 86 15:54:31 PST
From: admin%cogsci@BERKELEY.EDU (Cognitive Science Program)
Subject: Seminar - Cerebral Lateralization (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar -- IDS 237B
Tuesday, March 4, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``COGNITIVE MODELS OF HUMAN CEREBRAL LATERALIZATION:
A TUTORIAL REVIEW''
Curtis Hardyck
Department of Psychology and School of Education,
University of California at Berkeley
Models of human cerebral functioning have ranged from notions of
extreme anatomical specificity to beliefs in global functioning.
Within the field of cerebral lateralization, opinions have ranged from
positions favoring extreme lateralization (almost all functions
localized in one hemisphere) to bilateralization (almost all functions
existing in both hemispheres). Intermingled with these positions have
been promulgations of hemisphericity as polar opposites, e.g. right
brain (creative insightfulness) vs. left brain (lackluster drudgery),
which have been adopted into popular culture.
I will provide a brief historical review of this problem and a
discussion of current cognitive models of lateralization appropriate
for examination within a cognitive science framework.
------------------------------
Date: Wed, 26 Feb 86 12:40 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Off-Line Programming of Robots (UPenn)
Colloquium
3pm Thursday, February 27, 1986
216 Moore School, University of Pennsylvania
TOPICS IN THE OFF-LINE PROGRAMMING OF ROBOTS
Vincent Hayward
Computer Vision and Robotics Lab., McGill University
Programming robots is a difficult task, even in the case of the simplest
applications. For this reason, research in robot programming has been evolving
in two distinct directions. The first one is aimed at constructing goal driven
automated robot programming systems. Another trend is to design so-called
off-line programming systems to ease the work of a human robot programmer.
These systems include a set of programming aids such as graphic facilities,
performance reporting, interfaces to CAD/CAM systems, and pleasant user
interfaces. With a view to developing off-line programming systems, I will
first present solutions to the problem of collision detection. These methods
form a continuum of schemes, varying in the representation selected for
the workspace and the robot and in the amount of computation
performed before testing a particular trajectory. I will then discuss a method
based on a recursive decomposition of the workspace, also referred to as an
octree model, as a good tradeoff for a class of applications. I will then
present a project currently underway aimed at the construction of CAD models
from range data which will also facilitate the programming of robots. Finally,
I will discuss the adequacy of current robot programming primitives and propose
a new scheme based on how sensors interact with robot control systems.
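The recursive workspace decomposition mentioned in the abstract can be
sketched roughly as follows. This is an illustrative caricature, not
Hayward's actual scheme: a 2-D quadtree stands in for the 3-D octree,
obstacles are axis-aligned boxes, and the names `classify` and
`collides` are invented here.

```python
def classify(cell, obstacles):
    """Classify a square cell (x0, y0, x1, y1) against axis-aligned
    obstacle boxes: 'empty' (no overlap), 'full' (entirely inside one
    obstacle), or 'mixed' (partial overlap, needs refinement)."""
    x0, y0, x1, y1 = cell
    overlap = False
    for ox0, oy0, ox1, oy1 in obstacles:
        if x1 <= ox0 or ox1 <= x0 or y1 <= oy0 or oy1 <= y0:
            continue            # disjoint from this obstacle
        if ox0 <= x0 and ox1 >= x1 and oy0 <= y0 and oy1 >= y1:
            return 'full'       # cell entirely inside this obstacle
        overlap = True
    return 'mixed' if overlap else 'empty'

def collides(cell, obstacles, trajectory_point, depth=6):
    """Test whether a point on a candidate trajectory lies in occupied
    space, refining the decomposition only where cells are 'mixed'."""
    status = classify(cell, obstacles)
    if status == 'empty':
        return False
    if status == 'full':
        return True
    if depth == 0:              # resolution limit: answer conservatively
        return True
    x0, y0, x1, y1 = cell
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    px, py = trajectory_point
    # Descend only into the quadrant containing the query point.
    sub = (x0 if px < mx else mx, y0 if py < my else my,
           mx if px < mx else x1, my if py < my else y1)
    return collides(sub, obstacles, trajectory_point, depth - 1)
```

The tradeoff the abstract alludes to is visible here: the tree is
refined only where space is partially occupied, trading precomputation
against per-trajectory testing work.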
------------------------------
Date: 26 Feb 86 1534 PST
From: Matthew Ginsberg <SJG@SU-AI.ARPA>
Subject: Seminar - The Limits of Calculative Rationality (SU)
In light of what I expect will be department-wide interest in the
following talk, this week's research meeting/seminar of the KSL will
instead be a department-wide event.
The talk will run from 12.05 until 1.00 on February 28 and will be held
in the Chemistry Gazebo. The room is fairly small, so anyone interested
in attending would be well advised to arrive early.
Matt Ginsberg
FROM SOCRATES TO EXPERT SYSTEMS: THE LIMITS OF
CALCULATIVE RATIONALITY
BY
Hubert L. Dreyfus
University of California
Berkeley
An examination of the general epistemological assumptions behind
Artificial Intelligence research with special reference to recent
work in the development of expert systems. All AI work assumes that
knowledge must be represented in the mind as symbolic descriptions.
Expert system builders further assume that expertise consists in
problem-solving and that problem-solving consists in analyzing a
situation in terms of objective features and then finding a situation-
action rule which determines what to do.
I will argue that expert system builders fail to recognize the real
character of expert intuitive understanding. Expertise is acquired
in a five-stage process: The BEGINNER does, indeed, pick out objective
features and follow strict rules like a computer. The ADVANCED BEGINNER,
however, responds to meaningful aspects of the situation which are
recognized as similar to prototypical cases, without similarity being
analyzed into objective features. At the next stage, the COMPETENT
performer learns to figure out a strategy and to pay attention only
to features and aspects which are relevant to his plan. The fourth
stage, PROFICIENCY, is achieved when the performer no longer has to
figure out his strategy but immediately sees the appropriate strategy.
Finally, the EXPERT, after many years of experience, is able to do what
works without facing a problem and without having to make any logical
calculations. Experts presumably do this by storing many whole situations
and associated actions in memory and responding to their current situation
in terms of its overall similarity to a situation already successfully
dealt with.
On the basis of this model one can see that expert systems based
on rules extracted from experts do not capture the expert's expertise
and so cannot be expected to perform at expert level.
A review of the successes and failures of various expert systems confirms
this analysis.
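The "stored situations" model of expertise sketched in the abstract can
be caricatured as nearest-neighbor retrieval: respond to a new situation
with the action of the most similar remembered one, with no rules at
all. The feature-vector encoding below is purely our illustrative
assumption; Dreyfus argues precisely that real expertise need not
decompose into such features.

```python
import math

def similarity(a, b):
    """Negative Euclidean distance: larger means more similar."""
    return -math.dist(a, b)

def expert_action(memory, situation):
    """memory: list of (remembered_situation, action) pairs.
    Return the action of the most similar remembered situation."""
    _, action = max(memory, key=lambda pair: similarity(pair[0], situation))
    return action
```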
------------------------------
Date: Thu, 27 Feb 86 10:43:01 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - Intelligent Concept Design Assistant (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday, 26th February 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room - F10
80 South Bridge
EDINBURGH.
Dr. K.J. MacCallum, Department of Ship & Marine Technology, University
of Strathclyde will give a seminar entitled - "An Intelligent Concept
Design Assistant".
This paper argues for the introduction of increased knowledge and
reasoning capabilities into computer based design systems in such a way
that they are able to enact the role of an intelligent assistant to the
designer. It is shown that concept design involves a number of
different types of knowledge, the most difficult of which to represent
in a computer is "worldly" knowledge, either physical or commonsense.
Two systems which are being developed to tackle aspects of this
problem are described. The first system, called DESIGNER, handles
numerical relationships; the second called SPACES is concerned with
representing spatial arrangements.
Keywords: Design, CAD, Knowledge Representation, Numerical
Relationships, Spatial Arrangements.
------------------------------
Date: Thu, 27 Feb 86 10:43:39 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - The Purposes of Vision (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday, 5th March 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence, Seminar Room, Forrest
Hill, Edinburgh.
Professor Aaron Sloman, School of Social Sciences, University of Sussex
will give a seminar entitled - "The Purposes of Vision and the
Architecture of a Mind".
It is often taken for granted that the purpose of vision is to take in one or
two static or changing 2-D arrays of information about the current optic field
and produce descriptions of the 3-D objects from which the light has been
reflected. This treats the visual system as having a narrowly defined set of
inputs and outputs and encourages a conception of the visual system as a
separable module in an intelligent mechanism, with relatively few channels of
communication with other modules.
The talk will reflect on the variety of visual inputs and outputs, the
possibility of integration with other senses at different levels, and how
these relate to the different purposes to which vision can be put. One
implication seems to be that the visual system may have an architecture and
relationship to other mental processes, very different from what is normally
assumed. Might we sometimes see with our ears and hear with our eyes?
------------------------------
End of AIList Digest
********************
∂28-Feb-86 0102 LAWS@SRI-AI.ARPA AIList Digest V4 #40
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Feb 86 01:02:00 PST
Date: Thu 27 Feb 1986 22:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #40
To: AIList@SRI-AI
AIList Digest Friday, 28 Feb 1986 Volume 4 : Issue 40
Today's Topics:
Queries - Lisp Books & Common Lisps & International Logo Exchange,
Knowledge Representation - Translation & Associative Memory,
Methodology - The Community Authoring Project & AI Taxonomy,
Literature - Scientific DataLink Index To AI Research 1954-1984
----------------------------------------------------------------------
Date: Wed, 26 Feb 86 12:35:22 CST
From: "Glenn O. Veach" <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Lisp in the classroom.
This past year at the University of Kansas we used Scheme in two
classes. In an undergraduate "Programming Languages" class we
went through Abelson and Sussman's book while using Scheme for
homework and class projects. In a graduate level "Artificial
Intelligence" class we went through Kowalski's book and assigned
a project to develop a Horn clause theorem prover, which some
implemented using Scheme. We are now trying to develop a curriculum
for our "Introductory Programming" course in which we would use
MacScheme (we now use Pascal) and would use Abelson and Sussman
as a text (probably not the entire book). We would hope to use
the remaining chapters of the text for our second semester
programming course.
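A propositional Horn-clause prover of the kind assigned as a class
project can be written in a few lines by forward chaining (the course
used Scheme; Python is used here for brevity, and the clause encoding
is our own illustrative choice).

```python
# Clauses are (head, [body...]) pairs; facts have empty bodies.
# Repeatedly fire any clause whose body is already known until
# nothing new can be derived, then test the goal.

def prove(clauses, goal):
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known
```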
We are of course encountering some resistance as we try to forge
ahead with Lisp as a basic instructional language. I understand
that MIT uses Abelson and Sussman as the text for their first
course in programming languages. Do they cover the entire text?
What do they use for more advanced programming language courses?
Do any other schools have a similar curriculum? Has anyone
been involved with the review process of ACM or IEEE for CS or
ECE programs and suggested the use of Lisp as a basic language?
What are some of the more compelling arguments for and against
such an effort? If anyone could direct me to any B-Boards on
ARPA net which would be interested in such a discussion I would
appreciate it.
Glenn O. Veach
Artificial Intelligence Laboratory
Department of Computer Science
University of Kansas
Lawrence, KS 66045-2192
(913) 864-4482
veach%ukans.csnet@csnet-relay
------------------------------
Date: Thu, 27 Feb 86 16:44:28 est
From: nikhil@NEWTOWNE-VARIETY.LCS.MIT.EDU (Rishiyur S. Nikhil)
Subject: Public domain Common Lisps?
Prof. Rajeev Sangal of the Indian Institute of Technology, Kanpur, is looking
for implementations of Common Lisp in the public domain, running on any of
these machines:
Dec-10 running Tops-10
UNIX System III (with Berkeley enhancements)
IBM PC's running MSDOS
Are there any such implementations? If you have any information/opinions,
please reply to
nikhil@xx.lcs.mit.edu
Thanks in advance,
Rishiyur Nikhil
------------------------------
Date: 27 February 1986 13:44:31 EST THURSDAY
From: FRIENDLY%YORKVM1.BITNET@WISCVM.WISC.EDU (Michael Friendly)
Subject: International Logo eXchange
I am the North American field editor for a new Logo newsletter,
ILX, edited by Dennis Harper at UCSB and published by Tom Lough
of the National Logo Exchange, PO Box 5341, Charlottesville, VA
22905.
I write a bi-monthly column on Logo-like educational computing,
and am interested in hearing from people who are doing interesting
things which might be of interest to the international Logo
community. Please reply directly to FRIENDLY@YORKVM1.BITNET.
Applications of Logo to particular subject areas, advanced ideas,
list processing, metaphors for teaching Logo etc are of particular
interest.
I am also interested in developing a network forum for Logo workers,
perhaps going thru AI-ED or perhaps separate from it, and would
appreciate hearing from anyone of other nets, Bboards or conferences
in this area.
My background:
I am a cognitive psychologist doing work on knowledge structure
and memory organization, with interests toward the applied side,
and am developing empirical techniques for cognitive mapping --
graphic portrayal of an individual's knowledge for some domain.
I have written a book on Advanced Logo with applications in
AI, computational linguistics, mathematics, physics, etc. oriented
toward courses in Computer Applications in Psychology and as an
advanced Logo book in a Faculty of Education. It is due to appear
sometime in 86.
------------------------------
Date: 27 Feb 86 09:38:30 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: Knowledge Representation and Translation
Could anyone send to me or post to the net references on conversion of
knowledge from one representational structure to another? For example,
translating between frames and semantic nets would be of interest.
Well, here's a couple of obvious ones that you probably already know about:
* Brachman, R.J. On the Epistemological Status of Semantic Networks.
* Etherington, D.W. and R. Reiter. On Inheritance Hierarchies with Exceptions.
* Hayes, P.J. The Logic of Frames.
These can be found in Brachman & Levesque's `Readings in Knowledge
Representation', Morgan Kaufmann 1985. Actually as I look through the
TOC, I realize that you probably should just get the book if you don't
have it. Lots of good stuff. Has an extensive partially annotated
bibliography too.
------------------------------
Date: 27 Feb 86 10:11:11 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: Associative Memory
Date: 24 Feb 86 22:59:07 GMT
One of the biggest problems AI'ers seem to be having with their machines is
one of data access. Now, a human [or other sentient life-form :-)] has a
large pool of experience (commonly referred to as a swamp) that he/she/it has
access to.
It is linked together in many obscure ways (as shown by word-association
games) so that for any given thought (or problem) there are a vast number
(usually) of (not-necessarily) connected replies.
Thinking of that swamp as a form of data-base, does the problem then boil
down to one of finding a path-key that would let you access all of the
cross-references quickly?
It's not invalid but unfortunately it isn't new either. See any paper
on Frames. The power of a frame-organized database isn't that there
happen to be these defstructs called frames, it's in the fact that the
frames are all connected together -- it's indexing by relatedness (how
dense the connections have to be before you start to win is an open
question, but see Lenat's recent stuff on CYC in the recent issue of
AI Magazine). For background see Minsky (A Framework For Representing
Knowledge, 1975). See NETL (e.g. Fahlman, Representing Real-world
Knowledge, circa 1979, MIT Press). See Connection Machine literature
(e.g. The Connection Machine, Hillis, 1985, MIT press). If you want
to see the connection between AI KB's and traditional DBMS's covered
extensively, see `Proceedings of the Islamorada Workshop on Large
Scale Knowledge Base and Reasoning Systems' (Feb 85) chaired by
Michael Brodie, available (I think) from Computer Corporation of
America, Cambridge MA (617) 492-8860.
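The "indexing by relatedness" idea above — retrieval that follows links
between frames rather than looking up a key — can be rendered as a toy
spreading search over a frame graph. The graph, the hop limit, and the
name `related` are all illustrative assumptions, not anything from the
cited literature.

```python
from collections import deque

def related(frames, cue, max_hops=2):
    """frames: dict mapping a frame to the frames it links to.
    Return every frame reachable from the cue within max_hops links,
    i.e. retrieval by connectedness rather than by key lookup."""
    seen = {cue: 0}             # frame -> hop count at first reach
    queue = deque([cue])
    while queue:
        f = queue.popleft()
        if seen[f] == max_hops:
            continue
        for g in frames.get(f, []):
            if g not in seen:
                seen[g] = seen[f] + 1
                queue.append(g)
    return set(seen) - {cue}
```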
------------------------------
Date: Thu 27 Feb 86 13:33:56-PST
From: Tom Garvey <Garvey@SRI-AI.ARPA>
Subject: Re: The Community Authoring Project
While I would certainly not want to be viewed as a stifler of creative
urges, sometimes it seems that a little common-sense, reality,
engineering knowledge, ..., injected into our blue-skying would go a
long way toward setting feasible goals. What makes CAP (to which any
yahoo could presumably add his personal view of the world) anything
more than, say, a multimedia extension of this BBOARD?
Cheers,
Tom
------------------------------
Date: Fri, 21 Feb 86 11:51 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: AI Taxonomy
When Dave Waltz was overseeing the AI section of CACM, he developed
a rather extensive taxonomy of AI. I recall seeing it published
in AAAI magazine or SIGART or a similar source about 2 or 3 years ago.
[I believe that he developed it for Scientific Datalink and then
published it in AI Magazine. See the following message. -- KIL]
------------------------------
Date: Fri 21 Feb 86 09:42:28-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Scientific DataLink Index To AI Research 1954-1984
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
We have just added the four volume set of the Scientific DataLink Index To
Artificial Intelligence Research 1954-1984. The four volumes including two
abstract volumes, a subject volume, and an author index, are shelved with
the serial indexes. These volumes index the Scientific DataLink microfiche
collections for the following research institutions in AI: Bolt Beranek
and Newman, CMU, University of Illinois, ISI, University of Massachusetts,
MIT, University of Pennsylvania, University of Rochester, Rutgers, SRI,
Stanford AI and HPP, University of Texas Austin, Xerox PARC, and Yale.
The subject volume is based on the AI classification as published in AI
Magazine Spring 1985. I have included a photocopy of that article in
the back of the subject volume.
ACM is almost up-to-date with its ACM Guide To Computing Literature, an
annual index to the computer science literature. We have received up to
1984, and the 1985 volume is expected to be out this summer. ACM expects
to have future annual volumes out by the summer of the year following
the one covered. This annual index includes not only all entries
from the Computing Reviews Index but also additional computer science
articles not included in the monthly Computing Reviews. Monographs,
proceedings, and journal articles are included in the index.
Harry Llull
------------------------------
Date: 19 Feb 86 17:09:00 GMT
From: hplabs!hp-pcd!orstcs!tgd@ucbvax.berkeley.edu (tgd)
Subject: Re: taxonomizing in AI: useless, harmful
Taxonomic reasoning is a weak, but important form of plausible reasoning.
It makes no difference whether it is applied to man-made or naturally
occurring phenomena. The debate on the status of artificial intelligence
programs (and methods) as objects for empirical study has been going on
since the field began. I assume you are familiar with the arguments put
forth by Simon in his book Sciences of the Artificial. Consider the case of
the steam engine and the rise of thermodynamics. After many failed attempts
to improve the efficiency of the steam engine, people began to look for
an explanation, and the result is one of the deepest theories of modern
science.
I hope that a similar process is occurring in artificial intelligence. By
analyzing our failures and successes, we can attempt to find a deeper theory
that explains them. The effort by Michalski and others (including myself)
to develop a taxonomy of machine learning programs is viewed by me, at
least, not as an end in itself, but as a first step toward understanding the
machine learning problem at a deeper level.
Tom Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331
dietterich@oregon-state.csnet
------------------------------
End of AIList Digest
********************
∂28-Feb-86 1313 LAWS@SRI-AI.ARPA AIList Digest V4 #41
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Feb 86 13:10:10 PST
Date: Fri 28 Feb 1986 09:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #41
To: AIList@SRI-AI
AIList Digest Friday, 28 Feb 1986 Volume 4 : Issue 41
Today's Topics:
Query - CERES and CASCADE Projects,
Literature - Prolog Books & Lisp & Dreyfus on Skill Acquisition,
Philosophy - The Dreyfus Controversy
----------------------------------------------------------------------
Date: 26 Feb 86 20:07:41 GMT
From: hplabs!turtlevax!weitek!kens@ucbvax.berkeley.edu (Ken Stanley)
Subject: Request for info on CERES and/or the CASCADE project
Can anyone tell me anything about CERES, POLO, LASCAR or the CASCADE project?
Is the CASCADE project state of the art or just an effort to catch up
to work in the U.S.?
I know nothing about any of the above. Hence, simple responses and
references would be the most helpful.
Ken Stanley weitek!kens
------------------------------
Date: 28-Feb-1986 0843
From: kevin%logic.DEC@decwrl.DEC.COM (Kevin LaRue -- You can hack
anything you want with TECO and DDT)
Subject: Re: Prolog Books
``Introduction to Logic Programming''
Christopher John Hogger
Academic Press, Inc.
1984
ISBN 0-12-352092-4
------------------------------
Date: 28-Feb-1986 1129
From: kevin%logic.DEC@decwrl.DEC.COM (Kevin LaRue -- You can hack
anything you want with TECO and DDT)
Subject: Re: Lisp in the classroom.
Lisp is the language used in the undergraduate introductory course of the CS
curriculum at Syracuse University. In the past there wasn't a textbook for the
course; I believe that they are using Winston and Horn's ``Lisp'' now.
------------------------------
Date: Thu 27 Feb 86 23:34:38-PST
From: Sang K. Cha <ChaSK@SU-SUSHI.ARPA>
Subject: Dreyfus on Skill Acquisition
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
[...]
Actually, the five-stage developmental model of skill acquisition that
Hubert Dreyfus stressed in his talk abstract appears in the following
paper of Stuart Dreyfus :
"Formal Models vs Human Situational Understanding : Inherent Limitations
on the Modelling of Business Expertise,"
Office Technology and People, 1 (1982), 133-165
by Stuart Dreyfus, Dept of IE & OR, UC Berkeley
-- Sang
------------------------------
Date: 18 Feb 86 23:45:53 GMT
From: decwrl!glacier!kestrel!ladkin@ucbvax.berkeley.edu (Peter Ladkin)
Subject: Re: Re: "self-styled philosophers"
(ladkin on Dreyfus)
> > He is also a professional philosopher, holding a chair at
> > U.C. Berkeley. His criticisms of AI claims are thoroughly thought
> > through, with a rigor that a potential critic of his views would
> > do well to emulate. He has done AI great service by forcing
> > practitioners to be more self-critical. AAAI should award him
> > distinguished membership!
(benjamin)
> Baloney.
> [comments on Dreyfus on chess .....]
> It seems arrogant
> for him to reach conclusions about fields in which he is not
> accomplished. This applies to both chess and AI.
Before you cry *baloney*, how about addressing the issue?
As I pointed out, but you deleted, his major argument is that
there are some areas of human experience related to intelligence
which do not appear amenable to machine mimicry.
Do you (or anyone) think that this statement is obviously false?
(Negate it and see if that sounds right).
People reach (good and bad) conclusions about fields in which
they are not accomplished all the time. That's how AI got started,
and that's how computers got invented.
Why is it that people get so heated about criticism of AI that
they stoop to name-calling rather than addressing the points made?
(That question has probably also been asked by Dreyfus).
Peter Ladkin
------------------------------
Date: 20 Feb 86 04:27:50 GMT
From: tektronix!uw-beaver!uw-june!jon@ucbvax.berkeley.edu (Jon Jacky)
Subject: Re: Technology Review article
> (Technology Review cover says...)
> After 25 years Artificial Intelligence has failed to live up to its promise
> and there is no evidence that it ever will.
Most of the comment in this newsgroup has addressed the second clause in
this provocative statement. I think the first clause is more important, and
it is indisputable. The value of the Dreyfus brothers' article is to
remind readers that when AI advocates make specific predictions, they are
often over-optimistic. Personally, I do not find all of the Dreyfuses'
speculations convincing. So what? AI work does not get funded
to settle philosophical arguments, but because the funders hope to derive
specific benefits. In particular, the DARPA Strategic Computing Program,
the largest source of funds for AI work in the country,
asserts that specific technologies (rule based expert systems, parallel
processing) will deliver specific results (unmanned vehicles that can
drive at 40 km/hr through battlefields, natural language systems with
10,000 word vocabularies) at a specific time (the early 1990's). One
lesson of the article is that people should regard such claims
skeptically.
Jonathan Jacky, ...!ssc-vax!uw-beaver!uw-june!jon or jon@uw-june
University of Washington
------------------------------
Date: 20 Feb 86 19:35:05 GMT
From: ihnp4!ihwpt!olaf@ucbvax.berkeley.edu (olaf henjum)
Subject: Re: "self-styled philosophers"
Is there any other kind of "lover of wisdom" than a "self-styled" one?
-- Olaf Henjum (ihnp4!ihwpt!olaf)
(and, of course, my opinions are strictly my own ...)
------------------------------
Date: 20 Feb 86 18:26:12 GMT
From: decvax!genrad!panda!talcott!harvard!bbnccv!bbncc5!mfidelma@ucbvax
.berkeley.edu (Miles Fidelman)
Subject: Re: Technology Review article
About 14 years ago Hubert Dreyfus wrote a paper titled "Why Computers Can't
Play Chess" - immediately thereafter, someone at the MIT AI lab challenged
Dreyfus to play one of the chess programs - which trounced him royally -
the output of this was an MIT AI Lab Memo titled "The Artificial Intelligence
of Hubert Dreyfus, or Why Dreyfus Can't Play Chess".
The document was hilarious. If anyone still has a copy, I'd like to arrange
a xerox of it.
Miles Fidelman (mfidelman@bbncc5.arpa)
------------------------------
Date: 20 Feb 86 18:28:27 GMT
From: amdcad!amdimage!prls!philabs!dpb@ucbvax.berkeley.edu (Paul Benjamin)
Subject: Re: Re: Re: "self-styled philosophers"
> (ladkin on Dreyfus)
> > > He is also a professional philosopher, holding a chair at
> > > U.C. Berkeley. His criticisms of AI claims are thoroughly thought
> > > through, with a rigor that a potential critic of his views would
> > > do well to emulate. He has done AI great service by forcing
> > > practitioners to be more self-critical. AAAI should award him
> > > distinguished membership!
> (benjamin)
> > Baloney.
> > [comments on Dreyfus on chess .....]
> > It seems arrogant
> > for him to reach conclusions about fields in which he is not
> > accomplished. This applies to both chess and AI.
>
> Before you cry *baloney*, how about addressing the issue?
> As I pointed out, but you deleted, his major argument is that
> there are some areas of human experience related to intelligence
> which do not appear amenable to machine mimicry.
> Do you (or anyone) think that this statement is obviously false?
> (Negate it and see if that sounds right).
>
> Why is it that people get so heated about criticism of AI that
> they stoop to name-calling rather than addressing the points made?
> (That question has probably also been asked by Dreyfus).
>
> Peter Ladkin
I DID address the issue. I deleted your reference because reproducing
entire postings leads to extremely large postings. But I am addressing
his argument about areas of human experience which supposedly will
never be amenable to machine implementation. My whole point, which I
thought was rather obvious, is that he conjures up examples which are
poorly thought out, and experiments which are poorly executed. Thus,
his entire analysis is worthless to any investigators in the field.
I would welcome any analysis which would point out areas which I should
not waste time investigating. I receive this sort of input occasionally,
in the form of "it is better to investigate this than that, for this reason"
and this is very helpful. I certainly don't love wasting time looking at
dead ends. If Dreyfus' work were carefully constructed, it could be very
valuable. But all I see when I read his stuff is vague hypotheses, backed
up with bad research.
So I am not calling him names. I am characterizing his research, and
therefore AM addressing the issue.
Paul Benjamin
------------------------------
Date: Sun, 23 Feb 86 18:21:59 PST
From: albert@kim.berkeley.edu (Anthony Albert)
Reply-to: albert@kim.berkeley.edu (Anthony Albert)
Subject: Re: Technology Review article
In article <8602110348.2860@redwood.UUCP>, ucdavis!lll-crg!amdcad!amd!hplabs!
fortune!redwood!rpw3@ucbvax.berkeley.edu (Rob Warnock) writes:
>
>
>+
>| The [Technology Review] article was written by the Dreyfuss brothers, who
>| claim... that people do not learn to ride a bike by being told how to do
>| it, but by a trial and error method that isn't represented symbolically.
>+
>
>Hmmm... Something for these guys to look at is Seymour Papert's work
>in teaching
>such skills as bicycle riding, juggling, etc. by *verbal* and *written* means.
>That's not to say that some trial-and-error practice is not needed, but that
>there is a lot more that can be done analytically than is commonly assumed.
The Dreyfuses (?) understand that learning can occur analytically and
consciously at first. But in the stages from beginner to expert, the actions
become less and less conscious. I imagine Mr. Warnock's juggling (mentioned
further on in the article) followed the same path; when practicing a skill,
one doesn't think about it constantly, one lets it blend into the background.
Anthony Albert
..!ucbvax!kim!albert
albert@kim.berkeley.edu
------------------------------
End of AIList Digest
********************
∂04-Mar-86 0222 LAWS@SRI-AI.ARPA AIList Digest V4 #42
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Mar 86 02:22:28 PST
Date: Mon 3 Mar 1986 23:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #42
To: AIList@SRI-AI
AIList Digest Tuesday, 4 Mar 1986 Volume 4 : Issue 42
Today's Topics:
Journal Issue - Computational Linguistics on the Lexicon,
Seminars - Representation/Estimation of Spatial Uncertainty (SRI) &
Propositional Temporal Logic for Programs (UCB) &
Automatic Proof of Godel's Theorem (UTexas) &
Belief Functions in Artificial Intelligence (GMR)
Conference - Data Engineering
----------------------------------------------------------------------
Date: Sun, 2 Mar 86 13:08:13 est
From: walker@mouton.ARPA (Don Walker at mouton.ARPA)
Subject: Journal Issue - Computational Linguistics on the Lexicon
CALL FOR PAPERS: Special issue of Computational Linguistics on the Lexicon
Antonio Zampolli, Nicoletta Calzolari, and Don Walker have been appointed
guest editors for a special issue of Computational Linguistics on the
lexicon. There is general agreement that the lexicon has been a
neglected area, and that current research is addressing problems of
importance for all aspects of natural language processing. The issue is
intended to make the community at large aware of these developments.
All papers submitted will be reviewed in the usual manner. The only
difference in procedure is that three (instead of five) copies should
be sent to James Allen (CL Editor), Department of Computer Science,
University of Rochester, Rochester, NY 14627, USA [acl@rochester.arpa];
one copy should be sent to Antonio Zampolli (CL Lexicon), Laboratorio
di Linguistica Computazionale CNR, Via della Faggiola 32, I-56100 Pisa,
ITALY [glottolo%icnucevm.bitnet@wiscvm.arpa]; and one copy to Don Walker
(CL Lexicon), Bell Communications Research, 445 South Street, MRE
2A379, Morristown, NJ 07960, USA [walker@mouton.arpa; walker%mouton
@csnet-relay; ucbvax!bellcore!walker]. Manuscripts should be received
by 31 August.
------------------------------
Date: Thu 27 Feb 86 12:15:35-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Representation/Estimation of Spatial Uncertainty (SRI)
REPRESENTATION AND ESTIMATION OF SPATIAL UNCERTAINTY
Randy Smith (SMITH@SRI-AI)
Robotics Lab, SRI International
11:00 AM, MONDAY, March 3
SRI International, Building E, Room EJ228 (new conference room)
Current work on a method for geometrical reasoning under uncertainty
will be presented. Such a reasoning component will be important to
planning systems for many robotic applications, including autonomous
navigation and industrial automation.
A general method will be described for estimating the values of, and
the errors in, the relationship between objects whose locations
are represented by coordinate frames. The elements in the
relationship may be described by bounding intervals, or may be
described by means and covariances, if a statistical model is
available. The relationship between the frames (objects) may not be
explicitly given, but known only indirectly through a series of
spatial relationships, each with its associated error. This
estimation method can be used to answer such questions as whether a
camera attached to a robot is likely to have a particular object in
its field of view. More generally, this method makes it possible to
decide in advance if an uncertain relationship is known accurately
enough for some task to be accomplished, and if not, how much of an
improvement in locational knowledge a proposed sensing action will
provide. The calculated estimates agree very well with those from an
independent Monte Carlo simulation. The method presented can be
generalized to six degrees of freedom, and provides a practical means
of estimating the relationships (position and orientation) between
objects as well as the uncertainty associated with the relationship.
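As a rough illustration of the estimation machinery described above, here is a
minimal sketch of first-order error propagation for a chain of two uncertain
planar transforms. This is not code from the talk: the function name, the
restriction to three degrees of freedom (x, y, theta), and the Gaussian model
are assumptions of the sketch.

```python
import numpy as np

def compound(t1, C1, t2, C2):
    """Compound two uncertain planar transforms t = (x, y, theta).

    Returns the mean of the chained relationship and a first-order
    (Jacobian-based) estimate of its covariance, so that a relationship
    known only through a series of uncertain frames can be evaluated.
    """
    x1, y1, a1 = t1
    x2, y2, a2 = t2
    c, s = np.cos(a1), np.sin(a1)
    # Mean of the compounded transform: rotate t2 into t1's frame, add.
    t = np.array([x1 + c * x2 - s * y2,
                  y1 + s * x2 + c * y2,
                  a1 + a2])
    # Jacobians of the compounding operation w.r.t. t1 and t2.
    J1 = np.array([[1.0, 0.0, -s * x2 - c * y2],
                   [0.0, 1.0,  c * x2 - s * y2],
                   [0.0, 0.0,  1.0]])
    J2 = np.array([[c,  -s,  0.0],
                   [s,   c,  0.0],
                   [0.0, 0.0, 1.0]])
    C = J1 @ C1 @ J1.T + J2 @ C2 @ J2.T
    return t, C

# Two links (e.g. base-to-hand, hand-to-camera), each with small errors.
t, C = compound(np.array([1.0, 0.0, 0.0]), 0.01 * np.eye(3),
                np.array([2.0, 0.0, 0.0]), 0.01 * np.eye(3))
# Heading error in the first link inflates lateral (y) uncertainty
# downstream, which is what a field-of-view test would have to weigh.
```

A Monte Carlo check (sampling the two transforms and compounding the samples)
agrees with this closed-form covariance to first order, mirroring the
comparison mentioned in the abstract.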
------------------------------
Date: 27 Feb 86 13:22:12 PST
From: CALENDAR@IBM-SJ.ARPA
Subject: Seminar - Propositional Temporal Logic for Programs (UCB)
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099
CALENDAR
March 3, 1986 - March 7, 1986
Computer Science Seminar
EXPRESSING INTERESTING PROPERTIES OF PROGRAMS
IN PROPOSITIONAL TEMPORAL LOGIC
P. Wolper, AT&T Bell Labs and Stanford University
Tues., Mar. 4, 10:30 A.M., B1-413
We show that the class of properties of programs expressible in
propositional temporal logic can be substantially extended if we assume
the programs to be data-independent. Basically, a program is
data-independent if its behavior does not depend on the specific data it
operates upon. Our results significantly extend the applicability of
program verification and synthesis methods based on propositional
temporal logic.
Host: M. Vardi
------------------------------
Date: Fri, 28 Feb 86 10:53:21 CST
From: Rose M. Herring <roseh@ratliff.UTEXAS.EDU>
Subject: Seminar - Automatic Proof of Godel's Theorem (UTexas)
University of Texas
Computer Sciences Department
COLLOQUIUM
SPEAKER: N. Shankar
University of Texas at Austin
TITLE: Checking the Proof of Godel's Incompleteness
Theorem with the Boyer-Moore Theorem Prover
DATE: Thursday, March 6, 1986
PLACE: WEL 3.502
TIME: 4:00-5:30 p.m.
There is a widespread belief that computer proof-checking
of significant mathematics is infeasible. We argue against this
by presenting a machine-checked proof of Godel's incompleteness
theorem, one of the greatest landmarks of mathematics. The proof
of this theorem was checked in a constructive logic with the
Boyer-Moore theorem prover. The proof demonstrates the essential
incompleteness of Cohen's axioms for hereditarily finite sets.
This was done by first formalizing a proof-checker for this
logic, extending it with derived inference rules, demonstrating
the representability of a Lisp Eval function by a predicate in
this logic, and then constructing an undecidable sentence. The
statement of the incompleteness theorem as proved asserts that if
the undecidable sentence is either provable or disprovable, then
it is both provable and disprovable. This shows that the above
axiom system is either incomplete or inconsistent.
------------------------------
Date: Mon, 3 Mar 86 17:31 EST
From: Steve Holland <holland%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Belief Functions in Artificial Intelligence (GMR)
Seminar at General Motors Research Laboratories, Warren, Michigan:
Belief Functions in Artificial Intelligence
Prof. Glenn Shafer
University of Kansas
Lawrence, Kansas 66045
Thursday, March 6, 1986
ABSTRACT
The theory of belief functions, or the Dempster-Shafer theory, has
attracted wide interest as a tool for the management of uncertainty in
artificial intelligence.
What are the advantages and disadvantages of belief functions when they are
compared with numerical alternatives such as Bayesian probability and fuzzy
logic or with non-numerical alternatives such as default logic and the
calculus of endorsements? What are the current prospects for sensible use
of belief functions in expert systems?
In this talk, I will offer some general judgments on these questions. I
will emphasize the need for interactive tools for the construction of
probability arguments, and I will speculate on long-term possibilities for
probability judgment using man-made associative memories.
-Steve Holland, Computer Science Department
------------------------------
Date: Thu 27 Feb 86 18:37:23-PST
From: Gio Wiederhold <WIEDERHOLD@SUMEX-AIM.ARPA>
Subject: Conference - Data Engineering
DATA ENGINEERING CALL-FOR-PAPERS
The Third International Conference on Data Engineering
Pacifica Hotel, Culver City (Los Angeles), California, USA
February 3-5, 1987 (Tutorials 2,6 February)
Sponsored by the IEEE Computer Society
SCOPE
Data Engineering is concerned with the role of data and knowledge
about data in the design, development, management, and utilization of
information systems. As such, it encompasses traditional aspects of
databases, knowledge bases, and data management in general. The
purpose of the third conference is to continue to provide a forum for
the sharing of experience, practice, and theory of automated data and
knowledge management from an engineering point-of-view. The
effectiveness and productivity of future information systems will
depend critically on improvements in their design, organization, and
management.
We are actively soliciting industrial contributions. We believe
that it is critically important to share practical experience. We
look forward to reports of experiments, evaluation, and problems
in achieving the objectives of information systems. Papers which
are identified as such will be processed, scheduled, and published
in a distinct track.
TOPICS OF INTEREST
o Logical and physical database design o Design of knowledge-based systems
o Data management methodologies o Architectures for data- and
o Distribution of data and information knowledge-based systems
o Performance Evaluation o Data engineering tools
o Expert systems applied to data o Applications
o Data Security
The days preceding and following the conference will be exclusively
devoted to tutorials.
Additional mini-tutorials will be presented during the last evening
of the conference. A special DBMS vendor day will include short
DBMS-specific tutorials to acquaint attendees with current commercially
available products. Those interested in presenting tutorials should
contact the Tutorial Chairman by May 15, 1986.
AWARDS, STUDENT PAPERS, AND SUBSEQUENT PUBLICATION:
An award will be given for the best paper at the conference. The best
student paper will receive the K.S. Fu award, honoring one of the
early supporters of the conference. Up to three awards of $500 each
to help defray travel costs will be given for outstanding papers
authored solely by students. All outstanding papers will be
considered for publication in the IEEE Computer Society Computer
Magazine, the IEEE Expert Magazine, the IEEE Software, and the IEEE
Transactions on Software Engineering. For more information, contact
the General Chairman.
PAPER SUBMISSION: CONFERENCE TIMETABLE:
Four copies of papers should be Tutorial proposals due: May 15, 1986
mailed before June 16th 1986 to: Manuscripts due: June 15, 1986
Acceptance letters sent: September 15, 1986
Third Data Engineering Conference Camera-ready copy due: November 11, 1986
IEEE Computer Society Tutorials: February 2,6, 1987
1730 Massachusetts Ave. NW Conference: February 3-5, 1987
Washington DC, 20036-1903
(202) 371-0101
Committee
Steering Committee Chairman:
C. V. Ramamoorthy
University of California, Berkeley, CA 94720
Honorary Chairman:
P. Bruce Berra
Syracuse University, Syracuse, NY 13210
General Chairman:
Gio Wiederhold
Dept. of Computer Science
Stanford University, Stanford, CA 94305
(415) 723-0685
wiederhold@sumex-aim.arpa
Program Chairman:
Benjamin W. Wah
Coordinated Science Laboratory
University of Illinois, Urbana, IL 61801
(217) 333-5216
wah%uicsld.@uiuc.arpa
Program Co-Chairpersons:
John Carlis, Univ.of Minnesota, Minneapolis, MN 55455
Iris Kameny, SDC, Santa Monica, CA 90406
Peter Ng, Univ.of Missouri-Columbia, Columbia, MO 65211
Winston Royce, Lockheed STC, Austin, TX 78744
Joseph Urban, Univ.of SW Louisiana, Lafayette, LA 70504
International Coordination:
Tadeo Ichikawa, Hiroshima University, Higashi-Hiroshima 724, Japan
G. Schlageter, Fern Universitat, D 5800 Hagen, FR. Germany
Tutorials:
James A. Larson, Honeywell Computer Sciences Center
1000 Boone Avenue North, Golden Valley, MN 55427
(612) 541-6836
jalarson@hi-multics.arpa
Awards:
K.H. Kim, University of South Florida, Tampa, FL 33620
Treasurer:
Aldo Castillo, TRW, Redondo Beach, CA 90278
Local Arrangements:
Walter Bond, Cal State University, Dominguez Hills, CA 90747
(213) 516-3580/3398
Mary C. Graham, Hughes, P.O.Box 902, El Segundo, CA 90245
(213) 619-2499
Publicity:
Dick Shuey, 2338 Rosendale Rd., Schenectady, NY 12309
shuey@ge-crd.arpa
Tentative Program Committee Members
Jacob Abraham Witold Litwin
Adarsh K. Arora Jane W.S. Liu
J.L. Baer Ming T. (Mike) Liu
Faroh B. Bastani Raymond Liuzzi
Don Batory Vincent Lum
Bharat Bhargava Yuen-Wah Eva Ma
Joseph Boykin Mamoru Maekawa
Richard Braegger Gordon McCalla
Alfonso Cardenas Toshimi Minoura
Nick Cercone N.M. Morfuni
Peter P. Chen Jack Mostow
Bernie Chern Jaime Murow
Roger Cheung Sham Navathe
David Choy Philip M. Neches
Wesley W. Chu Erich Neuhold
J. DeJong G.M. Nijssen
David J. DeWitt Ole Oren
Ramez ElMasri G. Ozsoyoglu
Robert Epstein Z.Meral Ozsoyoglu
Michael Evangelist C. Parent
Domenico Ferrari J.F. Paris
Hector Garcia-Molina D.S. Parker
Georges Gardarin Peter Rathmann
Sakti P. Ghosh Lakshmi Rebbapragada
Arnold Goldfein David Reiner
Giorgio Gottlob Gruia-Catalin Roman
Laura Haas Domenico Sacca
Lee Hollaar Giovanni Maria Sacco
Yang-Chang Hong Sharon Salveter
David K. Hsiao Edgar Sibley
H. Ishikawa David Spooner
Sushil Jajodia John F. Sowa
Jie-Yong Juang Peter M. Stocker
Arthur M. Keller Stanley Su
Larry Kerschberg Denji Tajima
Won Kim Marjorie Templeton
Roger King A.M. Tjoa
Dan Kogan Yosihisa Udagawa
Robert R. Korfhage Susan Urban
Tosiyasu L. Kunii P. Valduriez
Winfried Lamersdorf R.P. VanDeRiet
Matt LaSaine Yann Viemont
W.-H. Francis Leung Neil Walker
Victor Li Helen Wood
Ya-Nan Lien S. Bing Yao
Epilog
The correct design and implementation of data systems requires attention
to principles from databases, knowledge bases, software engineering, and
system evaluation. We hope you will participate.
------------------------------
End of AIList Digest
********************
∂04-Mar-86 0435 LAWS@SRI-AI.ARPA AIList Digest V4 #43
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Mar 86 04:34:57 PST
Date: Tue 4 Mar 1986 00:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #43
To: AIList@SRI-AI
AIList Digest Tuesday, 4 Mar 1986 Volume 4 : Issue 43
Today's Topics:
Queries - Lexicons & Q&A & MAC-SCHEME &
Distributed Problem Solving for Architectural Design &
Chinese Language Environment on Symbolics,
AI Tools - Lisp for 68k Unix World & Rete Algorithm,
Linguistics - Ambiguous Sentences
----------------------------------------------------------------------
Date: 1986 Mar 3 08:28 EST
From: Bob Weber <WEBER3%HARVARDA.BITNET@WISCVM.WISC.EDU>
Subject: QUERY RE PUBLIC AND PRIVATE LEXICONS
I am currently evaluating available lexicons as part of a project
to develop a NLP system with commercial potential.
I would like information concerning machine-readable lexicons
and thesauri that are not now commercial products but that are publically
or privately available. Specifically, I am interested in the
following information: (1) number of words and how they were
selected for inclusion in the lexicon, (2) how much and what
kind of syntactical information is incorporated, (3) for verbs, whether
case information is included, and if so, what kind and to what extent,
(4) whether the lexicon incorporates any class hierarchy information,
(5) references to research using the lexicon, (6) the willingness
of the owner to share or sell, and approximate price if for sale,
(7) other descriptive information necessary for evaluating the
contents of the lexicon.
Please reply directly to: Weber3%Harvarda.BITNET@WISCVM.WISC.EDU
Thanks in advance. If the replies are sufficiently interesting,
I will repost.
------------------------------
Date: 27 Feb 86 20:21:17 GMT
From: decvax!genrad!panda!talcott!harvard!seismo!rochester!kodak!bayers
@ucbvax.berkeley.edu (mitch bayersdorfer)
Subject: Query: Q & A by Symantec
On the IEEE telecast on February 26, 1986, there was mention of
a natural language driven database program called Q & A. Does
anyone know of the source of this package?
- Mitch Bayersdorfer
Applied Technology Organization
Artificial Intelligence Laboratory
Floor 4, Bldg 23, Kodak Park
Rochester, NY 14650
(716) 477-1972
UUCP: rochester!kodak!bayers
------------------------------
Date: Sun, 2 Mar 86 19:55:34 pst
From: Harvey Abramson <abramson%ubc.csnet@CSNET-RELAY.ARPA>
Subject: information on MAC-SCHEME
Does anyone have information as to the existence and availability of an
implementation of Scheme to run on the Macintosh?
------------------------------
Date: 3 Mar 1986 22:15-PST
From: hinke@usc-cse.usc.edu
Subject: distributed problem solving query -- architecture
I am currently researching the application of distributed problem
solving techniques to the solution of architecture (houses and
buildings) design problems. I am especially interested in any work in
which multiple agents, possessing different design perspectives, have
been applied to a design problem. While the domain is architecture, the
intent of the research is to investigate the computer science issues
inherent in multiple problem solver design approaches. Reply can be
sent to hinke@usc-cs.
Tom Hinke
------------------------------
Date: 1 Mar 1986 1801-EST (Saturday)
From: Andy Chun <hon%brandeis.csnet@CSNET-RELAY.ARPA>
Subject: Chinese Language Environment on Symbolics
We are currently developing a Chinese language environment on Symbolics Lisp
machines. This includes a basic character set of about 7,000 characters and
a user-interface for standard Chinese character code and pinyin input. This
environment will be used for Chinese natural language understanding research
and Chinese text-processing.
To avoid duplicating efforts, we would like to know if anyone has already
developed such an environment on a Symbolics machine. We are also
interested in knowing other research groups who may be interested in using
such an environment.
US mail:
Hon Wai Chun
Computer Science Department
Brandeis University
Ford Hall 232A
Waltham, MA 02254
------------------------------
Date: Thu, 27 Feb 86 19:48:00 pst
From: bellcore!decvax!decwrl!pyramid!hplabs!oblio!paf@ucbvax.berkeley.edu
(Paul Fronberg)
Subject: Re: seeking lisp for 68k unix world
You might try SCHEME from the GNU distribution tape. I brought it up on a
5.2 box (68020) by a minor modification of the makefile. Also the price is
right considering that this includes source code ($150).
------------------------------
Date: Mon, 3 Mar 86 10:07:13 PST
From: dual!hplabs!tektronix!tekchips!chanl@ucbvax.berkeley.edu (Chan Lee)
Subject: Re: Query -- Rete Algorithm
The Rete algorithm is described in detail in the article (by C. Forgy)
"Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match
Problem", Artificial Intelligence, Vol. 19, No. 1, Sep. 1982.
You can find many relevant papers in the references of this paper.
Among them, McDermott, Newell, and Moore's paper on the "Efficiency of
Certain Production System Implementations" seems very helpful.
chan lee
------------------------------
Date: Thu 20 Feb 86 09:18:01-PST
From: FIRSCHEIN@SRI-AI.ARPA
Subject: ambiguous sentences
Here is the file of ambiguous sentences.
If you want to post any or all of it, be my guest.
From: BATES@G.BBN.COM
The all-time classic is "Time flies like an arrow", which has at least
5 ambiguous interpretations if you allow it to be the first part of
an unfinished sentence (which is how a parser would have to consider it) as
well as a complete sentence. The interps are:
1. The cliche we all understand the sentence to mean.
2. An imperative, as in "Take this stopwatch and time these flies the same way
you would time an arrow in flight."
3. An imperative, as in "Take this stopwatch and time these flies the same way
an arrow would time the flies if an arrow could use a stopwatch"
4. "Time flies (which are like Horse Flies or Bluebottle Flies) are fond
of an arrow"
5. "Time flies (as above), in a manner similar to an arrow, ..." (The end
of the sentence could be something like "move through the air rapidly")
There may even be another interp in there somewhere, but that's what I
remember for now. If you get other sentences that are that heavily
ambiguous, I would very much appreciate seeing a list of them.
Thanks,
Lyn Bates
BATES@BBNG.ARPA
From: Shrager.pa@Xerox.COM
Subject: multiple ambiguity
John made Jim die by swallowing his tongue.
E.g., John forced Jim's tongue down Jim's throat.
John ate Jim's tongue (the rudest version).
John swallowed his own tongue and Jim died laughing.
John ate the cow's tongue that Jim had tainted with hot peppers
so Jim died laughing. It was on John's plate.
<Same>, but it was on Jim's plate.
The tongue belongs to some third person (referent of "his").
From: FRAMPTON%northeastern.csnet@CSNET-RELAY.ARPA
The following is only four ways ambiguous, but the ambiguity is purely
syntactic and the sentence isn't overly contrived. It is a good test of
a syntactic parser.
"I sent the man who is too stubborn to talk to Jack."
The four readings can be deduced from:
(1) I sent X to Jack
(2) I sent X
(3) I sent X to talk to Jack
(4) I sent X Jack (dative shift)
Please either post the results of your inquiry on the AILIST or csnet-mail
the results to me. I'm quite curious.
From: Stephen G. Rowley <SGR@SCRC-STONY-BROOK.ARPA>
One classic example is the phrase "pretty little girls school". One
source of ambiguity is "pretty", which could mean either "beautiful" or
"moderately". However, most of the ambiguity comes from binding powers,
i.e., where you attach the adjectives. J. C. Brown, in his work on
Loglan, gives 17 meanings. Here they are, always interpreting "pretty"
as "beautiful".
P = pretty; L = little; G = Girls; S = school. The problem is how to
insert parentheses into P L G S. (Actually, it's more complex than
that, since you can put in a connective between adjectives to
effectively make a compound sentence; see [5ff]. Also, the some
adjectives can be present in both components of the compound; see
[9ff].)
Binding Meaning
======= =======
[1] (((P L) G) S) A school for girls who are small; the
smallness of the girls is beautiful. [This
is purely left-associative.]
[2] (P ((L G) S)) A school for girls who are small; the
speaker's opinion is that such schools are
beautiful. [Cf. [15].]
[3] ((P L) (G S)) A school for girls; the school is small and
the smallness is beautiful.
[4] (P (L (G S))) A school for girls; the school is small;
the speaker's opinion is that such schools
are beautiful. [This is purely
right-associative.]
[5] ((P G) S) & ((L G) S) A school for girls who are both beautiful
and small. [Both components left-associate.
G is duplicated.]
[6] (P (G S)) & ((L G) S) A school for girls; the school is pretty;
the girls are small. [First component
right-associates, second component
left-associates. G is duplicated.]
[7] ((P G) S) & (L (G S)) A school for pretty girls; the school is
also small. [First component left-associates,
second component right-associates. G is
duplicated.]
[8] (P (G S)) & (L (G S)) A school for girls; the school is both
pretty and small. [Both components
right-associate. G is duplicated.]
[9] ((P L) S) & ((P G) S) A beautifully small school for beautiful
girls. [Note duplication of P; both
components left-associate.]
[10] (P (L S)) & ((P G) S) A small school which is thought to be
pretty; also it's for pretty girls. [P
duplicated; association is right/left.]
[11] ((P L) S) & (P (G S)) A school which is small and whose smallness
the speaker considers beautiful; also a
school for girls which is itself pretty. [P
duplicated; association is left/right.]
[12] (P (L S)) & (P (G S)) A small school which is pretty; also a
school for girls which is pretty. [P
duplicated; both components
right-associate.]
[13] ((P L) S) & (G S) A school which is small and the speaker
considers that smallness to be beautiful;
also it's a school for girls.
[14] (P (L S)) & (G S) A small school which is beautiful and which
is a school for girls.
[15] (P S) & ((L G) S) A beautiful school which is for small girls.
[Unlike [2], the beauty of the school is
independent of L & G.]
[16] (P S) & (L (G S)) A pretty school which is for girls and small
as girls schools go.
[17] (P S) & (L S) & (G S) A school which enjoys all 3 properties of
being beautiful, small, and for girls.
[There's another set of 4 sentences that Brown didn't exhibit in his
book. They're of the same class as [5-8] and [9-12], but duplicate L
instead of P or G:
[18] ((P L) S) & ((L G) S)
[19] (P (L S)) & ((L G) S)
[20] ((P L) S) & (L (G S))
[21] (P (L S)) & (L (G S))
That brings the total to 21. However, since we're both getting bored
with this by now, and you've undoubtedly gotten the point, we won't
analyze them!]
One of Brown's points in Loglan was that, in order to be unambiguous,
the language needs pronounceable parentheses and connectives so that the
groupings above become apparent. Each of the 17 (or 21) above meanings
has a separate pronunciation in Loglan; you're not allowed to be vague
about binding of adjectives. (The default is left-associativity.)
One might object that I've left out cues to understanding, such as
punctuation (commas and apostrophes) and tone of voice. That's true;
many cues to understanding sentences like these come from lexical or
prosodic factors like that. However, tone of voice gets lost in writing
and punctuation is lost in speaking (at least partially; consider
"girls" vs "girl's"). Therefore, coping without some of these cues is
still a valid problem.
From: mab@aids-unix (Mike Brzustowicz)
My favorite is "The technician made the robot fast."
-Mike Brzustowicz
<mab@aids-unix>
From: William Dowling <Dowling%upenn.csnet@CSNET-RELAY.ARPA>
Re the recently posted question seeking multiply ambiguous
sentences: the easiest way to make multiply ambiguous sentences
or phrases is to exploit the tree inequality X(YZ) <> (XY)Z.
For example "a book and a stapler or some tape" is doubly
ambiguous, and "a book and a stapler or some tape and a newspaper"
is 5-ways ambiguous. The same trick makes "the man with a hat
and a monkey in pajamas" heavily ambiguous. Of course, if n1 and
n2 are noun phrases that are k1- and k2-ways ambiguous, then "<n1> is no
<n2>" is a sentence that is (k1*k2)-ways ambiguous. Bob Wall once told
me that an early automatic translation program picked up many of
the readings of "Applicants who apply for licenses wearing shorts
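The bracketing trick Dowling describes can be checked mechanically; the counts
of binary bracketings are the Catalan numbers (2 for three conjuncts, 5 for
four). A hypothetical enumerator, not part of the original message:

```python
def bracketings(words):
    """Enumerate every binary bracketing of a phrase -- the source of
    the structural ambiguity X(YZ) != (XY)Z noted above."""
    if len(words) == 1:
        return [words[0]]
    results = []
    for i in range(1, len(words)):        # choose the top-level split
        for left in bracketings(words[:i]):
            for right in bracketings(words[i:]):
                results.append(f"({left} {right})")
    return results

print(len(bracketings(["book", "stapler", "tape"])))               # 2
print(len(bracketings(["book", "stapler", "tape", "newspaper"])))  # 5
```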
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
There's always the old standby "I saw the man on the hill with the
telescope." This is used in Winston's textbook. I count six meanings.
From: John DeCarlo <M14051%mwvm@mitre.arpa>
My favorite is:
"Mary had a little lamb."
It supposedly has at least a dozen meanings, most of which I can't think
of off the top of my head, but I know it is in at least one of my textbooks.
Mary owned some meat from a young sheep
ate an actual live animal
had intercourse with
was accompanied by
...
John DeCarlo
<M14051%mwvm@mitre.arpa>
------------------------------
End of AIList Digest
********************
∂06-Mar-86 1244 LAWS@SRI-AI.ARPA AIList Digest V4 #44
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 86 12:44:07 PST
Date: Wed 5 Mar 1986 22:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #44
To: AIList@SRI-AI
AIList Digest Thursday, 6 Mar 1986 Volume 4 : Issue 44
Today's Topics:
Seminars - Acquiring Language & Computer Lexicon Use (SD SIGART) &
Commonsense Knowledge in the TACITUS Project (SU) &
Hubert Dreyfus on Being and Time (MIT) &
Intelligent Distributed Operating Systems (USC) &
Delegation and Inheritance (MIT) &
Refinement of Expert System Knowledge Bases (CMU) &
Heuristic Search: Algorithms, Theory, and Learning (CMU) &
Brains, Behavior, and Robotics (CSLI) &
Situation Calculus Planning (SRI) &
The Perspective Concept in Computer Science (CSLI)
----------------------------------------------------------------------
Date: 4 March 1986 1604-PST (Tuesday)
From: gross@nprdc.arpa (Michelle Gross)
Subject: Seminars - Acquiring Language & Computer Lexicon Use (SD SIGART)
Subject: SD SIGART-NLP meetings--Last and Next
We've been meeting the first Monday of each month.
Last night's meeting (our 3rd) covered Dr. Bob La Quey's efforts to
write a program that acquires language by determining which
grammatical rules are needed to parse incrementally more complex
text. The main difficulty with his approach seems to be how to prevent
adding spurious rules when ungrammatical sentences sneak through.
Someone suggested attaching a reliability index to each rule. The
index would be based on how often the rule has successfully helped a
parse get through. (The hope is that the ad hoc rules for
ungrammatical input would have low index values).
We also discovered that the only given rule in the
grammar (S --> N V Terminator) prevented the program from creating a
rule to parse imperative sentences (S --> V). Mallory Selfridge's 1981
IJCAI paper ``A Computer Model of Child Language Acquisition'' provided
some of the impetus for Bob's work. His talk was entitled ``A Model of
Language Acquisition.''
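The reliability-index suggestion from the meeting might be sketched as
follows. This is a hypothetical illustration, not anything presented there:
the class shape and the Laplace smoothing are choices of the sketch.

```python
class Rule:
    """A grammar rule carrying a reliability index that grows with the
    number of successful parses the rule has helped complete."""
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
        self.uses = 0       # times the rule was tried during a parse
        self.successes = 0  # times that parse ultimately succeeded

    def record(self, parse_succeeded):
        self.uses += 1
        if parse_succeeded:
            self.successes += 1

    @property
    def reliability(self):
        # Laplace-smoothed, so an untested rule starts near 0.5.
        return (self.successes + 1) / (self.uses + 2)

# A rule induced from ungrammatical input keeps failing and sinks,
# while a genuine rule keeps succeeding and rises.
good = Rule("S", ["N", "V", "Terminator"])
spurious = Rule("S", ["V", "N", "N"])
for _ in range(10):
    good.record(True)
    spurious.record(False)
print(round(good.reliability, 2), round(spurious.reliability, 2))  # 0.92 0.08
```

A refinement pass could then prune rules whose index stays below some
threshold, which is the effect the index was proposed to achieve.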
Our next meeting will be April 7th. The topic will be the
lexicon--how we use it and how a computer can use it. I volunteered to
present some relevant linguistic and computational literature. I plan
to discuss how the lexicon is viewed in Transformational Grammar,
Lexical Functional Grammar, and Relational Grammar (I don't know enough
about GPSG to touch on that perspective). I plan to discuss Cherry's
paper on the UNIX tool PARTS (a program from the Writer's Workbench
that assigns parts of speech by rule). I would also like to discuss
the data structures used in various dictionary projects.
Can anyone provide pointers to such information for the OED
or Webster's projects? Any other references or abstracts
you can send would only enrich our provincial San Diegan
discussions! I have a 1982 IEEE article on PARTS and Cherry's
1978 paper--are there any more recent references?
For more information on the SIG, you may contact Ed Weaver at work at
(619) 236-5963. I'll forward any electronic responses on to him.
Thanks,
Michelle gross@nprdc.ARPA ...ihnp4!sdcsvax!sdcc6!ix713 (UUCP)
Navy Personnel R&D Center UCSD Linguistics, C-008
San Diego, CA. 92152-6800 La Jolla, CA. 92093
------------------------------
Date: 03 Mar 86 1042 PST
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Commonsense Knowledge in the TACITUS Project (SU)
Commonsense Knowledge in the TACITUS Project
Jerry R. Hobbs
Artificial Intelligence Center
SRI International
Thursday, March 6, 4pm
MJH252
In the TACITUS project for using commonsense knowledge in the
understanding of texts about mechanical devices and their failures, we
have been developing various commonsense theories that are needed to
mediate between the way we talk about the behavior of such devices and
causal models of their operation. Of central importance in this effort
is the axiomatization of what might be called ``commonsense
metaphysics''. This includes a number of areas that figure in virtually
every domain of discourse, such as granularity, scales, cycles, time,
space, material, physical objects, shape, causality, functionality, and
force. Our effort has been to construct core theories of each of these
areas, and then to define, or at least characterize, a large number of
lexical items in terms provided by the core theories. In this talk I
will discuss our methodological principles, such as aiming for the
maximum abstraction possible in order to accommodate metaphor and
analogy, and I will describe the key ideas in the various domains we are
investigating.
------------------------------
Date: Tue, 4 Mar 1986 20:35 EST
From: AGRE%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Hubert Dreyfus on Being and Time (MIT)
Artificial Intelligence Seminar
Monday, March 10, 2:30pm
545 Technology Square
(MIT Building NE43)
7th Floor Playroom
WHY YOU SHOULD READ BEING AND TIME
Hubert L. Dreyfus
Philosophy Department
UC Berkeley
The beauty of artificial intelligence is that computation keeps you honest:
mistaken approaches will simply fail. I will argue that a diagnosis of
current difficulties in AI research can be found in the work of Martin
Heidegger. Heidegger's Being and Time isolates a number of assumptions of
Western philosophy which, though subtle and pervasive, are contradicted by a
careful account of the phenomenology of everyday activity. These
assumptions and their corollaries have been implicit (and sometimes
explicit) in most AI work since the field's beginnings. The task now is to
find a positive alternative. I will start by presenting some of the basic
concepts of Heidegger's phenomenology. But Heidegger's account of everyday
practices does not directly provide an alternative to traditional methods in
AI because it offers a description rather than a mechanizable explanation.
It is difficult to reason about the ways descriptions and explanations
constrain one another. Still, I will attempt a start by outlining the
virtues and failings of some new approaches, in particular those of the
connectionist movement.
------------------------------
Date: 4 Mar 1986 12:28-EST
From: gasser@usc-cse.usc.edu
Subject: Seminar - Intelligent Distributed Operating Systems (USC)
USC Distributed Problem Solving Group Meeting
Wednesday, 3/12/86 3:00 - 5:00 PM
Seaver Science 319
John Gieser, Ph.D. Student, USC, will speak on "'Intelligent'
Operating Systems for Distributed Computing".
ABSTRACT
Recent ideas from distributed problem solving (DPS) research appear
to have merit when used to achieve cooperation in open-ended
distributed computing systems (DCS). To use these techniques, the
DCS nodes are viewed as autonomous agents in a problem-solving
situation, with each node governed by an "intelligent" operating
system (IOS). This talk will focus on some ideas for providing the
structures and mechanisms needed in the IOS to handle problems
requiring cooperation such as distributed control, load
balancing/sharing, cooperating processes, etc.
Questions: Dr. Les Gasser, (213) 743-7794, or
John Gieser (gieser@usc-cse.usc.edu)
------------------------------
Date: Tue, 4 Mar 86 16:31 EST
From: Jonathan Connell <jhc@OZ.AI.MIT.EDU>
Subject: Seminar - Delegation and Inheritance (MIT)
[Forwarded from the MIT bboard by SASW@MC.LCS.MIT.EDU.]
Thursday, March 6, 4:00pm   Room: NE43 - 8th floor Playroom
The Artificial Intelligence Lab
Revolving Seminar Series
Delegation And Inheritance:
Two Mechanisms for Sharing Knowledge in Object-Oriented Systems
Henry Lieberman
AI Lab, MIT
When a group of objects in an object oriented programming system shares
some common behavior, how can we avoid re-programming behavior in every
object that needs it? I will explore the consequences of two mechanisms
for sharing knowledge, Inheritance and Delegation, for expressiveness
and performance of object oriented languages.
Using Inheritance, behavior common to a group of objects is encoded in a
Class object, which contains procedures for responding to messages, and
the names of variables that the procedure may access. Each class may
create a set of Instances, which share the procedures of the class, but
may have their own private values for the variables. Subclasses may
extend classes by adding additional procedures and variables.
Another way of sharing behavior is Delegation, which views each object
as a prototype capable of creating new objects by copying or reference,
removing the distinction between classes and instances. General and
specialized objects communicate using message passing rather than a
"hard wired" mechanism. Communication patterns can be determined at
message reception time rather than at compile time or object creation
time. There is a time/space tradeoff between inheritance and
delegation, delegation permitting smaller objects at the cost of
increased message traffic.
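The contrast between the two mechanisms can be sketched loosely in a modern
notation (hypothetical code, not from the talk; Python classes stand in for
the object-oriented systems discussed):

```python
class Turtle:
    """Inheritance: behavior lives in a class object; every instance
    shares the class's procedures but holds its own variable values."""
    def __init__(self, x=0):
        self.x = x

    def forward(self, d):
        self.x += d

class Proto:
    """Delegation: any object can act as a prototype; an extension
    created by reference forwards messages it cannot answer itself."""
    def __init__(self, proto=None, **slots):
        self.proto = proto
        self.slots = dict(slots)

    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.proto is not None:
            return self.proto.get(name)  # delegated at message-reception time
        raise AttributeError(name)

# Inheritance: the sharing pattern is fixed when the class is defined.
t = Turtle()
t.forward(10)

# Delegation: the sharing pattern is resolved when the message arrives.
pen = Proto(color="black")
my_pen = Proto(proto=pen)     # extends pen by reference, no copy
print(my_pen.get("color"))    # black, found by delegating to pen
pen.slots["color"] = "red"
print(my_pen.get("color"))    # red: later prototype changes show through
```

The time/space trade-off is visible here: my_pen stores almost nothing of its
own, at the price of an extra lookup (message) per delegated access.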
------------------------------
Date: 20 February 1986 1450-EST
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Refinement of Expert System Knowledge Bases (CMU)
Speaker: Allen Ginsberg, Rutgers University
Date: Wednesday, March 5
Time: 11:30 - 1:00
Place: 5409 WeH
Title: The automatic refinement of expert system knowledge bases
Knowledge base refinement involves the generation, testing, and
possible incorporation of plausible refinements to the rules in
a knowledge base with the intention of thereby improving the
empirical adequacy of an expert system, i.e., its ability to
correctly diagnose or classify the cases in its domain of expertise.
The first part of the talk is a theoretical explication of the
basic concepts involved in knowledge base refinement -- e.g., a
precise analysis of one sense in which a refinement may be said
to be plausible is given -- and includes an overview of the
strategic goals that must be addressed by any knowledge base
refinement system. As an illustration of the general theory,
the second part of the talk focuses on the SEEK2 system for
automatic knowledge base refinement. In the last part of the
talk a brief discussion of a metalanguage for the experimental
design of refinement systems is given.
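As a rough illustration of the generate-test-incorporate cycle described above (this is not the SEEK2 system itself; the rule format, case library, and refinement operator are all invented for the sketch), consider a hill-climb that generalizes rules and keeps a refinement only when empirical adequacy over stored cases improves:

```python
# Invented toy in the spirit of knowledge base refinement (NOT SEEK2).
# A rule is (required-features, conclusion); a candidate refinement
# generalizes a rule by dropping one required feature, and is kept
# only if accuracy over the case library improves.

def classify(rules, case):
    for conditions, conclusion in rules:
        if conditions <= case['features']:   # all conditions satisfied
            return conclusion
    return None                              # no rule fires

def accuracy(rules, cases):
    return sum(classify(rules, c) == c['label'] for c in cases) / len(cases)

def refine(rules, cases):
    """Hill-climb over single-condition deletions (generate and test)."""
    best, best_acc = list(rules), accuracy(rules, cases)
    improved = True
    while improved:
        improved = False
        for i, (conds, concl) in enumerate(best):
            for cond in conds:
                candidate = list(best)
                candidate[i] = (conds - {cond}, concl)
                acc = accuracy(candidate, cases)
                if acc > best_acc:           # incorporate plausible gains
                    best, best_acc, improved = candidate, acc, True
                    break
            if improved:
                break
    return best, best_acc

cases = [
    {'features': {'fever', 'rash'}, 'label': 'measles'},
    {'features': {'fever', 'rash', 'headache'}, 'label': 'measles'},
    {'features': {'fever'}, 'label': None},
]
rules = [(frozenset({'fever', 'rash', 'headache'}), 'measles')]
refined, acc = refine(rules, cases)   # drops 'headache'; 2/3 -> 1.0
```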
------------------------------
Date: 27 February 1986 1153-EST
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Heuristic Search: Algorithms, Theory, and Learning (CMU)
Speaker: Richard Korf, Asst. Prof., Comp. Sci. Dept., UCLA
Date: Friday, March 14
Time: 1:00 - 2:30
Place: 5409 Wean Hall
Title: Heuristic search: Algorithms, theory, and learning
Abstract:
This talk will cover three new research results in the area of heuristic
search. The first is a new algorithm, called Iterative-Deepening-A*, that is
asymptotically optimal in terms of solution cost, time, and space among all
admissible heuristic tree searches. In practice, it is the only known
algorithm that is capable of finding optimal solutions to the Fifteen
Puzzle. The second is a theory which unifies the treatment of heuristic
evaluation functions in single-agent problems and two-person games. The
theory is based on the notion of a heuristic as a function that is invariant
over optimal solution paths. Based on this theory, we performed some
experiments on the automatic learning of heuristic functions. Our program
was able to learn a set of relative weights for the different chess pieces
which is different from, but competitive with, the classical values.
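For illustration, here is a minimal IDA* in Python on an invented weighted graph with an invented admissible heuristic (Korf's results concern the Fifteen Puzzle; nothing below is from the talk). Each iteration deepens a cost bound on f = g + h, so memory stays linear in the solution depth:

```python
# Compact IDA* sketch; the graph and heuristic values are illustrative.

def ida_star(start, goal, neighbors, h):
    """Return (path, cost) of a cheapest path, given an admissible h."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f, None              # report the smallest exceeded bound
        if node == goal:
            return g, path
        minimum = float("inf")
        for nxt, cost in neighbors(node):
            if nxt not in path:         # avoid cycles on the current path
                t, found = search(path + [nxt], g + cost, bound)
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        t, found = search([start], 0, bound)
        if found is not None:
            return found, t
        if t == float("inf"):
            return None, None           # no path exists
        bound = t                       # deepen to the next f-value

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("G", 5)],
         "C": [("G", 2)], "G": []}
h_est = {"A": 3, "B": 2, "C": 2, "G": 0}    # admissible estimates
path, cost = ida_star("A", "G", lambda n: graph[n], lambda n: h_est[n])
```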
------------------------------
Date: Wed 5 Mar 86 16:57:49-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Brains, Behavior, and Robotics (CSLI)
[Excerpted from the CSLI Calendar by Laws@SRI-AI.]
Brains, Behavior, and Robotics
by James S. Albus
Discussion led by Pentti Kanerva (Kanerva@riacs.arpa)
12 noon, TINLunch, Ventura Hall Conference Room
THURSDAY, March 13, 1986
In 1950, Alan Turing wrote, ``We may hope that machines will
eventually compete with men in all purely intellectual fields. But
which are the best ones to start with? . . . Many people think that
a very abstract activity, like the playing of chess, would be best.
It can also be maintained that it is best to provide the machine with
the best sense organs that money can buy, and then teach it to
understand. . . . This process could follow the normal teaching of a
child. Things would be pointed out and named, etc. Again I do not
know what the right answer is, but I think that both approaches should
be tried.'' (Quoted by Albus on p. 5.)
``Brains, Behavior, and Robotics'' takes this ``Turing's second
approach'' to artificial intelligence, the first being the pursuit of
abstract reasoning. The book combines over a decade of research by
Albus. It is predicated on the idea that to understand human
intelligence we need to understand the evolution of intelligence in
the animal kingdom. The models developed are mathematical
(computational), but one of their criteria is neurophysiological
plausibility. Although the research is aimed at understanding the
mechanical basis of cognition, Albus also discusses philosophical and
social implications of his work.
------------------------------
Date: Wed 5 Mar 86 16:44:14-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Situation Calculus Planning (SRI)
SITUATION CALCULUS PLANNING IN BLOCKS AND RELATED WORLDS
John McCarthy (JMC@SU-AI)
Stanford University
11:00 AM, MONDAY, March 10
SRI International, Building E, Room EJ228 (new conference room)
This talk will present mainly ideas rather than completed work.
Situation calculus is based on the equation s' = result(e,s),
where s and s' are situations and e is an event. Provided
one can control the deduction adequately, this is a more powerful
formalism than STRIPS. Planning a sequence of actions, or more
generally, a strategy of actions to achieve a situation with
specified properties, admits a variety of heuristics which
whittle away at the problem. In many practical situations, these
heuristics, which don't guarantee a full solution but leave a
reduced problem, are sufficient. Humans appear to use many of them
and so should computer programs. The talk therefore will concern both
epistemological and heuristic aspects of planning problems.
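The core equation can be made concrete with a toy blocks-world sketch. The Python below is an illustration only, not McCarthy's axiomatization, and it uses brute-force breadth-first search rather than the whittling heuristics the talk proposes: a situation is an immutable set of fluents, and planning searches over result(e, s).

```python
# Toy rendering of s' = result(e, s); the fluent encoding and the
# brute-force planner are invented illustrations.
from collections import deque

def clear(x, s):
    """x has nothing on it in situation s."""
    return all(not (f[0] == 'on' and f[2] == x) for f in s)

def result(event, s):
    """Successor situation for event = ('move', block, dest)."""
    _, b, dest = event
    assert clear(b, s) and (dest == 'table' or clear(dest, s))
    old = next(f for f in s if f[0] == 'on' and f[1] == b)
    return frozenset(s - {old} | {('on', b, dest)})

def plan(s0, goal, blocks):
    """Breadth-first search for a shortest event sequence reaching goal."""
    frontier = deque([(s0, [])])
    seen = {s0}
    while frontier:
        s, events = frontier.popleft()
        if goal <= s:
            return events
        for b in blocks:
            if not clear(b, s):
                continue
            for dest in list(blocks) + ['table']:
                if dest == b or ('on', b, dest) in s:
                    continue
                if dest != 'table' and not clear(dest, s):
                    continue
                s2 = result(('move', b, dest), s)
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, events + [('move', b, dest)]))
    return None

s0 = frozenset({('on', 'a', 'table'), ('on', 'b', 'table'),
                ('on', 'c', 'a')})
steps = plan(s0, frozenset({('on', 'a', 'b')}), ['a', 'b', 'c'])
```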
------------------------------
Date: Wed 5 Mar 86 16:57:49-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - The Perspective Concept in Computer Science (CSLI)
[Excerpted from the CSLI Calendar by Laws@SRI-AI.]
SYSTEM DESCRIPTION AND DEVELOPMENT TALK
The Perspective Concept in Computer Science
12:15, Monday, March 10, Ventura Conference Room
Our topic next Monday (March 10) will be a continued discussion
(introduced by Jens Kaasboll) of the issues raised by Kristen Nygaard
in his talk about perspectives on the use of computers:
Regardless of definitions of ``perspective'', there exist many
perspectives on computers. Computers are regarded as systems, tools,
institutions, toys, partners, media, symbols, etc. Even so, there
exist system description languages but no tool, or institution, or
... languages. What do the other perspectives reflect that makes
them less attractive for language designers? Suggestive answer: The
system perspective is the definite computer science perspective in
which the processes inside the computers are regarded as the goal of
our work. Viewed through some of the other perspectives, the computer
is seen as a means for achieving ends outside the computer, i.e., the
needs of people using the computers.
------------------------------
End of AIList Digest
********************
∂06-Mar-86 1616 LAWS@SRI-AI.ARPA AIList Digest V4 #45
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 86 16:16:21 PST
Date: Wed 5 Mar 1986 22:41-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #45
To: AIList@SRI-AI
AIList Digest Thursday, 6 Mar 1986 Volume 4 : Issue 45
Today's Topics:
Queries - Belief Theories for Uncertainties &
Non-Monotonic Reasoning/Probabilistic Reasoning & GCLISP &
Expert System Shell Software,
Linguistics - Ambiguous Sentence & Lexicons,
Logic Programming - Prolog Book
----------------------------------------------------------------------
Date: Tue 4 Mar 86 06:24:39-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: Belief Theories for uncertainties
Can anyone supply pointers to articles on the Dempster-Shafer belief
theory work?
--ted
------------------------------
Date: 4 Mar 86 02:30:48 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!harvard!bbnccv!bbncca!wanginst!malek
@ucbvax.berkeley.edu (Sharon Malek)
Subject: Non-Monotonic Reasoning/Probabilistic Reasoning for Expert Sys.
I'm looking for information on non-monotonic reasoning and probabilistic
reasoning techniques for expert systems, as part of my graduate assistant
assignment.
Any assistance would be appreciated. Please mail responses.
Thanks,
--
Sharon Malek malek@wanginst (Csnet)
Wang Institute of Graduate Studies wanginst!malek (UUCP)
Tyng Road, Tyngsboro, MA 01879 (617) 649-9731
------------------------------
Date: 27 Feb 86 22:03:18 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!pamp@ucbvax.berkeley.edu (pam pincha)
Subject: GCLISP
This is a request for information from anyone
who has had the occasion to use Golden Common LISP
-- especially the version 2.0 called GCLISP 286
Developer system. We need to know if the system
works well on the IBM host; whether its response
is reasonable for systems larger than just toy demos;
and how easy it is to use, etc.
Basically, would you or would you not recommend it
and for what level of work would you recommend it,
and why (or why not)?
Please use mail to send your reply. I'll summarize
to the group if there is interest.
Thanks in advance,
P.M.Pincha-Wagener
PS. Comments on how well the scoping is handled in
this system would be of help also.
------------------------------
Date: Mon, 3 Mar 86 23:41:00 est
From: mayerk%UPenn-GradEd%upenn.csnet@CSNET-RELAY.ARPA
Subject: Searching for comments on expert system shell software...
I'm currently putting together an introductory course
on expert systems here at Penn, and I'm in need of
sage advice. Part of the course will involve using
some expert system shell for homework assignments.
The assignments will involve: Forward and backward
chaining, frames, CFs, contexts (maybe), and a final
project that will be a small prototypical system from
a selected list of subjects.
I've plowed through some of the hype from various
vendors, but I'd like more information from people
who have either used them, or used others that are
personal favorites.
If anyone is interested, I'll send out an appendix
of all of the responses I get.
Here is an uncut list from a database that I'm compiling:
Vendor Name Product Name
________________________________________ __________________
Expert Systems International, Ltd. ES/P Advisor
ExperTelligence, Inc. ExperFacts
ExperLisp
ExperLisp-3600
ExperLisp-Talk
Exsys, Inc. Exsys Version 3.0
Human Edge Expert Ease
Expert Edge
Intelliware, Inc. Experteach
Jeffrey Perrone & Associates, Inc. Advisor
Ex-Tran
Expert Ease
EXSYS
Grid-Xpert
Insight
KDS Corporation KDS
Lithp Systems BV Daisy
Micro Data Base Systems/Marketing & Sales Guru
Radian Corporation RuleMaster
Silogic, Inc. The Knowledge Work Bench
Software Architecture and Engineering, Inc. KES II
Texas Instruments Arborist
TI Personal Consultant
You'll have to excuse me if the list seems a little "raw,"
but I thought that it unfair to omit anything until I hear
a little more. (Most of the above are unsuitable for my
needs, but in the interests of a wider community, comments
might be valuable.)
Send responses to:
mayerk%UPenn-Graded Kenneth Mayer
University of Pennsylvania
(215) 387-4751
------------------------------
Date: Tue 4 Mar 86 06:30:40-PST
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: Ambiguous sentences cont.
Yet another ambiguous sentence that I've run across in NLP classes
is:
"The host smiled as he turned on the electric spit."
I leave it to reader to generate the permutations...
--ted
------------------------------
Date: Wed, 5 Mar 86 15:34:14 est
From: Mark J. Norton
<bellcore!decvax!genrad!panda!teddy!mjn@ucbvax.berkeley.edu>
Subject: Re: QUERY RE PUBLIC AND PRIVATE LEXICONS
Although you mentioned that you are not interested in commercial lexicons,
I would suggest you contact someone in AI R&D at Wang Laboratories. I
spent several years there working on these and can assure you that they are
quality lexicons containing all (and more) of the information you require.
The source of data is the Random House Unabridged Dictionary, to which
they own exclusive computer rights. They also have on-line lexicons dealing
with Legal Terms, Medical Terms, Scientific Terms, Roget's Thesaurus,
Place-Names, Translation Aids to French, German, Spanish, Italian, Japanese,
Chinese, Korean, and Arabic, British Spellings of words,
and other specialized lists. It is quite possible that Wang
might let you use their information in return for application and
consultation access. Send me mail if you would like to pursue this, and
need specific contacts there.
Mark J. Norton, 59 New Estate Road, Littleton, MA 01460.
--
Mark J. Norton
{decvax,linus,wjh12,mit-eddie,cbosgd,masscomp}!genrad!panda!mjn
mjn@sunspot
------------------------------
Date: Mon, 3 Mar 86 11:47:33 pst
From: sdcsvax!sdcrdcf!polyslo!cburdor@ucbvax.berkeley.edu
(Christopher Burdorf)
Subject: Re: Prolog Books
I would recommend Logic for Problem Solving, by Robert Kowalski.
------------------------------
End of AIList Digest
********************
∂06-Mar-86 1919 LAWS@SRI-AI.ARPA AIList Digest V4 #46
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Mar 86 19:18:52 PST
Date: Thu 6 Mar 1986 16:09-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #46
To: AIList@SRI-AI
AIList Digest Friday, 7 Mar 1986 Volume 4 : Issue 46
Today's Topics:
Theory - Knowledge & Dreyfus & Turing Test
----------------------------------------------------------------------
Date: Sat 1 Mar 86 20:04:39-PST
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: Alan Watts on AI
I thought Ailist readers might be interested in the following
excerpt from "Oriental Omnipotence" in THE ESSENTIAL ALAN WATTS:
We must begin by showing the difference between Western and
Eastern ideas of omniscience and omnipotence. A Chinese Buddhist poem
says:
You may wish to ask where the flowers come from,
But even the God of Spring doesn't know.
A Westerner would expect that, of all people, the God of Spring would
know exactly how flowers are made. But if he doesn't know, how can he
possibly make them? A Buddhist would answer that the question itself is
misleading since flowers are grown, not made. Things which are made are
either assemblages of formerly separate parts (like houses) or
constructed by cutting and shaping from without inwards (like pots of
clay or images). But things which are grown formulate their own
structure and differentiate their own parts from within outwards. ...
If, then, the God of Spring does not make the flowers, how does
he produce them? The answer is that he does so in the same way that you
and I grow our hair, beat our hearts, structure our bones and nerves,
and move our limbs. To us, this seems a very odd statement because we
do not ordinarily think of ourselves as actively growing our hair in the
same way that we move our limbs. But the difference vanishes when we
ask ourselves just HOW we raise a hand, or just how we make a mental
decision to raise a hand. For we do not know-- or, more correctly, we do
know but we cannot describe how it is done in words.
To be more exact: the process is so innate and so SIMPLE that
it cannot be conveyed by anything so complicated and cumbersome as human
language, which has to describe everything in terms of a linear series
of fixed signs. This cumbersome way of making communicable
representations of the world makes the description of certain events as
complicated as trying to drink water with a fork. It is not that these
actions or events are complicated in themselves: the complexity lies in
trying to fit them into the clumsy instrumentality of language, which
can deal only with one thing (or "think") at a time.
Now the Western mind identifies what it knows with what it can
describe and communicate in some system of symbols, whether linguistic
or mathematical-- that is, with what it can think about. Knowledge is
thus primarily the content of thought, of a system of symbols which make
up a very approximate model or representation of reality. In somewhat
the same way, a newspaper photograph is a representation of a natural
scene in terms of a fine screen of dots. But as the actual scene is not
a lot of dots, so the real world is not in fact a lot of things or
"thinks".
The Oriental mind uses the term KNOWLEDGE in another sense
besides this-- in the sense of knowing how to do actions which cannot be
explained. In this sense, we know how to breathe and how to walk, and
even how to grow hair, because that is just what we do!
------------------------------
Date: Sat 1 Mar 86 20:10:32-PST
From: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>
Subject: Addressing some of Dreyfus' specific points
To address some of the actual content of Dreyfus' recent talk at Stanford,
delivered to an audience consisting mostly of AI researchers:
1) The discussion after the talk was remarkably free of strong dissent, for
the simple reason that Dreyfus is now making a sloppy attempt at a
cognitive model for AI, rather than making any substantive criticism of AI.
Had his talk been submitted to AAAI as a paper, it would probably have been
rejected as containing no new ideas and weak empirical backing.
2) The backbone of his argument is that human *experts* solve problems by
accessing a store of cached, generalized solutions rather than by extensive
reasoning. He admits that before becoming expert, humans operate just like
AI reasoning systems, otherwise they couldn't solve any problems and thus
couldn't cache solutions. He also admits that even experts use reasoning
to solve problems insufficiently similar to those they have seen before.
He doesn't say how solutions are to be abstracted before caching, and
doesn't seem to be aware of much of the work on chunking, rule compilation,
explanation-based generalization and macro-operator formation which has
been going on for several years. Thus he seems to be proposing a performance
mechanism that was proposed long ago in AI, acts as if he (or his brother)
invented it and assumes, therefore, that AI can't have made any progress yet
towards understanding it.
3) He proposes that humans access previous situations and their solutions
by an "intuitive, holistic matching process" based on "total similarity"
rather than on "breaking down situations into features and matching on
relevant features". When I asked him what he meant by this, he said
he couldn't be any more specific and didn't know any more than he'd said.
(He taped our conversation, so he can no doubt correct the wording.)
In the talk, he mentioned Roger Shepard's work on similarity (stimulus
generalization) as support for this view, but when I asked him how the
work supported his ideas, it became clear that he knew very little about it.
Shepard's results can be explained equally well if situations are
described in terms of features, but more importantly they only apply when
the subject has no idea of which parts of the situation are relevant to the
solution, which is hardly the case when an expert is solving problems. In
fact, the fallacy of analogical reasoning by total similarity (which is the
only mechanism he is proposing to support his expert phase of skilled
performance) has long been recognized in philosophy, and also more recently
in AI. Moreover, the concept of similarity without any goal context (i.e.
without any purpose for which the similarity will be used) seems to be
incoherent. Perhaps this is why he doesn't attempt to define what it means.
4) His final point is that such a mechanism cannot be implemented in a
system which uses symbolic descriptions. Quite apart from the fact that
the mechanism doesn't work, and cannot produce any kind of useful
performance, there is no reason to believe this, nor does he give one.
In short, to use the terminology of review forms, he is now doing AI but
the work doesn't contain any novel ideas or techniques, does not report
on substantial research, does not properly cite related work and does
not contribute substantially to knowledge in the field. If it weren't
for the bee in his bonnet about proving AI (except the part he's now doing)
to be fruitless and dishonest, he might be able to make a useful
contribution, especially given his training in philosophy.
Stuart Russell
Stanford Knowledge Systems Lab
------------------------------
Date: Sat, 1 Mar 86 14:23:45 est
From: Jeffrey Greenberg <green@ohio-state.ARPA>
Reply-to: green@osu-eddie.UUCP (Jeffrey Greenberg)
Subject: Re: Technology Review article
> re:
> Dreyfus' distinction between learning symbolically how to do a task
> and 'doing' the task...i.e. body's knowledge.
>
I agree with the Dreyfus brothers - the difficulty many AI people have
(in my opinion) is a fundamental confusion of
"knowledge of" versus "knowledge that."
------------------------------
Date: 28 Feb 86 02:37:13 GMT
From: hplabs!ames!eugene@ucbvax.berkeley.edu (Eugene Miya)
Subject: Re: Technology Review article (Dreyfus actually)
<1814@bbncc5.UUCP>
>
> About 14 years ago Hubert Dreyfus wrote a paper titled "Why Computers Can't
> Play Chess" - immediately thereafter, someone at the MIT AI lab challenged
> Dreyfus to play one of the chess programs - which trounced him royally -
> the output of this was an MIT AI Lab Memo titled "The Artificial Intelligence
> of Hubert Dreyfus, or Why Dreyfus Can't Play Chess".
>
> The document was hilarious. If anyone still has a copy, I'd like to arrange
> a xerox of it.
>
> Miles Fidelman (mfidelman@bbncc5.arpa)
Excuse the fact I reproduced all that above rather than digest it.
I just attended a talk given by Dreyfus (for the first time). I think
the AI community is FORTUNATE to have a loyal opposition following of
Dr. Dreyfus. In some defense, Dreyfus is somewhat kind to the AI
community (in contrast to some AI critics I know); for instance, he does
believe in the benefit of expert systems and expert assistants.
Dreyfus feels that the AI community harped on the above:
Men play chess.
Computers play chess.
Dreyfus is a man.
Computer beat Dreyfus.
Therefore, computers can beat man playing chess.
He pointed out he sent his brother (supposedly captain of the Harvard
chess team at one time) and his brother beat the computer (we should
write his brother at UCB CS to verify this, I suppose).
While I do not fully agree with Dreyfus's philosophy or his
"methodology," he is a bright thinker and critic. [One point we
do not agree on: he believes in the validity of the Turing test,
I do not (in the way it currently stands).]
--eugene miya
NASA Ames Research Center
{hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
eugene@ames-nas.ARPA
p.s. I would not mind seeing a copy of the paper myself. :-)
------------------------------
Date: 3 Mar 86 02:17:00 GMT
From: pur-ee!uiucdcs!uiucdcsp!bsmith@ucbvax.berkeley.edu
Subject: Re: "self-styled philosophers"
William James once wrote that all great theories go through three
distinct stages: first, everyone claims the theory is simply wrong,
and not worth taking seriously. Second, people start saying that,
maybe it's true, but it's trivial. And third, people are heard to
say that not only is it true and important, but they thought of it
first.
Here at the University of Illinois, it seems to be de rigueur
to laugh at and deride Dreyfus whenever his name comes up. I am
convinced the majority of these people have never read any of
Dreyfus's work--however, this is unimportant to them (clearly I don't
mean everyone here). There are also those who spend a great deal of
time and effort rejecting everything Dreyfus says. For example,
recently Dr. Buchanan (of Stanford) gave a lecture here. He purported
to be answering Dreyfus, but in the great majority of cases agreed
with him (always saying something like, "Well, maybe it's true, but
who cares?"). It seems to me that, if Dreyfus is so unimportant, it
is very strange indeed that so many people get so offended by
everything he says and does. Perhaps AI researchers ought to be less
sensitive and start encouraging this sort of interdisciplinary
activity. Perhaps then AI will move forward and finally live up to
its promise.
Barry Smith
------------------------------
Date: Wed, 5 Mar 86 15:38:08 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Tale for Marvin the Paranoid Android.
> From AIList Vol 4 # 33:-
> His main thesis is that there are certain human qualities and
> attributes, for example certain emotions, that are just not the
> kinds of things that are amenable to mechanical mimicry.
> ...
> Peter Ladkin
> From AIList Vol 4 # 41:-
> As I pointed out, but you deleted, his major argument is that
> there are some areas of human experience related to intelligence
> which do not appear amenable to machine mimicry.
> ...
> Peter Ladkin
Could these areas be named exactly? Agreed that there are emotional
aspects that cannot be programmed into a machine, what parts of the
``human experience related to intelligence'' will also remain outside
of the machine's grip?
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
Date: Mon, 3 Mar 86 12:54:02 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: The Turing Test - A Third Quantisation?
The original basis for the Turing test was to see if it was possible
to distinguish, purely from a text, whether you were talking to a man
or woman. The extension of this, the Turing test itself, seeks to give
a criterion for deciding whether or not an intelligent system is
"truly intelligent". A human asks questions and receives answers in
textual form. (S)he then has to decide if it is a machine behind the
screen or not.
Now, supposing a system has been built which "passes" the test. Why
not take the process one stage further? Why not try to design an
intelligent system which can decide whether *it* is talking to machine
or not?
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
End of AIList Digest
********************
∂10-Mar-86 1450 LAWS@SRI-AI.ARPA AIList Digest V4 #47
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Mar 86 13:53:22 PST
Date: Mon 10 Mar 1986 09:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #47
To: AIList@SRI-AI
AIList Digest Monday, 10 Mar 1986 Volume 4 : Issue 47
Today's Topics:
Article/Seminar - The TI Compact LISP Machine (Dallas ACM),
Seminars - Tools Beyond Technique (UCB) &
Knowledge and Action in the Presence of Faults (SU) &
Adaptive Networks (GTE) &
Stochastic Complexity (IBM-SJ) &
Updating Databases with Incomplete Information (SU) &
Parallel Architectures for Knowledge Bases (SMU),
Conference - 1987 Linguistics Institute
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Article/Seminar - The TI Compact LISP Machine (Dallas ACM)
ACM Dallas Chapter Meeting Notice
Speaker: Alfred Riccomi
Senior Member, Technical Staff
Texas Instruments
Topic: The TI Compact LISP Machine
The February 17 issue of Aviation Week is devoted to the military
application of Artificial Intelligence. One article reports on the
development, at TI, of a military LISP machine. Mr. Riccomi will
describe the machine, its near term applications, and likely spin-offs
into the commercial world especially in the airline industry.
Place: INFOMART, 1950 N. Stemmons Freeway (at Oak Lawn), Room 7004
Date: Tuesday, March 11, 1986, 7:30 - 8:15
------------------------------
Date: 5 Mar 86 00:24:00 GMT
From: pur-ee!uiucdcs!marcel@ucbvax.berkeley.edu
Subject: Seminar - Tools Beyond Technique (UCB)
WHEN: 12:00 noon, Wednesday, March 5th
WHERE: Canterbury House,
University of Illinois at Urbana-Champaign
TOOLS BEYOND TECHNIQUE
Marcel Schoppers
Dept of Computer Science
U of Illinois at Urbana-Champaign
In this talk I will propose yet another way to characterize AI, but
one which I hope captures the intuitions of AI researchers: that AI is
the attempt to liberate tools/machines from absolute dependence on
human control. That done, I will suggest some achievements which should,
according to this characterization of AI, demonstrate the success of
AI work. Importantly, both the characterization and those crucial
achievements contain no comparison to human capabilities. I therefore
maintain that several contemporary arguments for and against the future
success of AI are at once fallacious and beside the point. Among others:
the AI community's claim that "brains are computers too" is hardly necessary
and certainly not scientific, while Weizenbaum's "maybe computers can think,
but they shouldn't" is self-defeating. On the issue of whether artificial
intelligence will ever be achieved I will not commit myself, but at least
my characterization provides a down-to-earth criterion.
A paper on this subject (in the socio-communications literature):
"A perspective on artificial intelligence in society" Communications 9:2
(december 1985).
------------------------------
Date: Thu 6 Mar 86 06:09:35-PST
From: Oren Patashnik <PATASHNIK@SU-SUSHI.ARPA>
Subject: Seminar - Knowledge and Action in the Presence of Faults (SU)
AFLB, 13-Mar-86 : Yoram Moses (MIT)
12:30 pm in MJ352 (Bldg. 460)
Knowledge, Common Knowledge, and Simultaneous Actions
in the Presence of Faults
We show that any protocol that guarantees to perform a particular
action simultaneously at all sites of a distributed system must
guarantee that the sites attain common knowledge of particular facts
when such an action is performed. We analyze what facts become common
knowledge at various points in the execution of protocols in a simple
model of a system in which processors are liable to crash. We obtain
a new protocol for Simultaneous Byzantine Agreement that is optimal in
all of its runs. That is, rather than achieving the worst case
behavior, every run of the protocol halts at the earliest possible
time, given the pattern in which failures occur. This may happen as
early as after two rounds. We characterize precisely what failure
patterns require the protocol to run for k rounds, 1<k<t+2,
generalizing and simplifying the lower bound proof for Byzantine
agreement. We also show a non-trivial simultaneous action for which
popular belief would suggest that t+1 rounds would be required in the
worst case, and use our analysis to design a protocol for it that
always halts in two rounds. This work sheds considerable light on many
heretofore mysterious aspects of the Byzantine Agreement problem. It
is one of the first examples of how reasoning about knowledge can be
used to obtain improved solutions to problems in distributed computing.
This is joint work with Cynthia Dwork of IBM Almaden.
------------------------------
Date: Thu, 6 Mar 86 14:24:29 est
From: Rich Sutton <rich%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Adaptive Networks (GTE)
Self-Organization, Memorization,
and Associative Recall of Sensory Information
by Brain-Like Adaptive Networks
Teuvo Kohonen, Helsinki University of Technology
The main purpose of thinking is to forecast phenomena that take place
in the environment. To this end, humans and animals must refer to a
complicated knowledge base which is somewhat vaguely called memory.
One has to realize the two main problem areas in a discussion of memory:
(1) the memory mechanism itself, and (2) the internal representations of
sensory information in the brain networks.
Most of the experimental and theoretical work has concentrated on the
first problem. Although it has been extremely difficult to detect memory
traces experimentally, the storage mechanism is theoretically still the
easier part of the problem. Contrary to this, it has been almost a
mystery how a physical system can automatically extract various kinds
of abstraction from the huge number of vague sensor signals. This paper
now contains some novel views and results about the formation of such
internal representations in idealized neural networks, and their
memorization. It seems that both of the above functions, viz. formation
of the internal representations and their storage, can be implemented
simultaneously by an adaptive, self-organizing neural structure which
consists of a great number of neural units arranged into a
two-dimensional network. A number of computer simulations are presented
to illustrate both the self-organized formation of sensory feature maps
and the associative recall of activity patterns from the distributed
memory.
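[For readers who want a concrete picture: the two-dimensional adaptive
network described above can be sketched as a minimal self-organizing map.
The code below is an illustrative reconstruction, not Kohonen's own
algorithm; every function name, constant, and decay schedule in it is an
assumption of ours.]

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a toy 2-D self-organizing map on `data` (n_samples x n_features).

    Sketch only: parameter names and decay schedules are our assumptions.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates, used to measure neighborhood distance on the map.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Linearly decay learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # Best-matching unit: cell whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighborhood around the BMU on the 2-D grid.
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))
            # Pull each unit's weights toward x, weighted by neighborhood.
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights
```

Because neighboring units are updated together, nearby units end up tuned
to similar inputs, which is the "sensory feature map" effect the abstract
refers to.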
When: March 14, 1:00 pm
Where: GTE Labs 3-131
Contact: Rich Sutton, Rich@GTE-Labs.CSNet, (617)466-4133
------------------------------
Date: 6 Mar 86 14:52:51 PST
From: CALENDAR@IBM-SJ.ARPA
Subject: Seminar - Stochastic Complexity (IBM-SJ)
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099
CALENDAR
March 10 - 14, 1986
Computer STOCHASTIC COMPLEXITY AND THE MDL AND PMDL PRINCIPLES
Science J. J. Rissanen, IBM Almaden Research Center
Colloquium
Thurs., Mar. 13 There is no rational basis in traditional
3:00 P.M. statistics for the comparison of two models
Rear Audit. unless they have the same number of parameters.
Hence, for example, the important
selection-of-variables problem has a dozen or so
solutions, none of which can be
preferred over the others. Recently, inspired
by the algorithmic notion of complexity, we
introduced a new concept in statistics, the
Stochastic Complexity of the observed data,
relative to a class of proposed probabilistic
models. In broad terms, it is defined as the
least number of binary digits with which the
data can be encoded by use of the selected
models. The stochastic complexity also
represents the smallest prediction errors which
result when the data are predicted by use of the
models. Accordingly, the associated optimal
model represents all the statistical information
in the data that can be extracted with the
proposed models, and for this reason its
computation, which we call the MDL (Minimum
Description Length) principle, may be taken to
be the fundamental problem in statistics. In
this talk, we describe a special form of the MDL
principle, which amounts to the minimization of
squared "honest" prediction errors, and we apply
it to two examples of polynomial curve fitting
as well as to contingency tables. In the first
example, which calls for the prediction of
weight growth of mice, the degree of the MDL
polynomial agrees with the optimal degree,
determined in retrospect after the predicted
weights were seen. The associated predictions
also far surpass those made with the best
traditional statistical techniques. A
fundamental theorem is given, which permits
comparison of models in the spirit of the
Cramer-Rao inequality, except that the models
need not have the same number of parameters. It
also settles the issue of how the
selection-of-variables problem is to be solved.
Host: R. Arps
(Refreshments at 2:45 P.M.)
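[A toy rendering of the two-part description-length idea behind MDL model
selection: total code length = bits to encode the residuals plus bits to
encode the parameters, and the model minimizing the total wins. This is a
generic textbook sketch of ours, not Rissanen's PMDL formulation; the
coding-cost formulas below are standard approximations, not his.]

```python
import numpy as np

def mdl_poly_degree(x, y, max_degree=8):
    """Pick a polynomial degree by a crude two-part MDL criterion.

    data cost  ~ (n/2) * log2(RSS/n)   bits for Gaussian residuals
    model cost ~ (k/2) * log2(n)       bits for k fitted parameters
    """
    n = len(x)
    best = None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        k = d + 1
        # Clamp to avoid log of zero on an exact fit.
        data_bits = 0.5 * n * np.log2(max(rss / n, 1e-12))
        model_bits = 0.5 * k * np.log2(n)
        total = data_bits + model_bits
        if best is None or total < best[0]:
            best = (total, d)
    return best[1]
```

The parameter cost penalizes high degrees, so the criterion balances fit
against complexity even though the candidate models have different numbers
of parameters, which is the point of the talk's comparison theorem.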
[...]
------------------------------
Date: Fri 7 Mar 86 17:33:40-PST
From: Marianne Winslett <WINSLETT@SU-SCORE.ARPA>
Subject: Seminar - Updating Databases with Incomplete Information (SU)
Updating Databases With Incomplete Information
--or--
Belief Revision is Harder Than You Thought
Marianne Winslett
PhD Oral
Area X Seminar
Margaret Jacks 352
Friday, March 14, 3:15 PM
Suppose one wishes to construct, use, and maintain a database of
knowledge about the real world, even though the facts about that world
are only partially known. In the database domain, this situation
arises when database users must coax information into and out of
databases in the face of missing values and uncertainty. In the AI
domain, this problem arises when an agent has a base set of beliefs
that reflect partial knowledge about the world, and then tries to
incorporate new, possibly contradictory knowledge into the old set of
beliefs. In the logic domain, one might choose to represent such a
database as a logical theory, and view the models of the theory as
possible states of the real world.
How can new information (i.e., updates) be incorporated into the
database? For example, given the new information that "b or c is
true," how can we get rid of all outdated information about b and c,
add the new information, and yet in the process not disturb any other
information in the database? The burden may be placed on the user or
other omniscient authority to determine exactly which changes in the
theory will bring about the desired set of models. But what's really
needed is a way to specify an update intensionally, by stating some
well-formed formula that the state of the world is now known to
satisfy and letting the database management system automatically
figure out how to accomplish that update.
This talk will explore a technique for updating databases containing
incomplete information. Our approach embeds the incomplete database
and the updates in the language of first-order logic, which we believe
has strong advantages over relational tables and traditional data
manipulation languages when information is incomplete. We present
semantics and algorithms for our update operators, and describe an
implementation of the algorithms. This talk should be accessible to
all who are comfortable with first-order logic and have a passing
acquaintance with the notion of database updates.
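[The update semantics sketched above -- replace each possible world by the
closest worlds satisfying the new formula -- can be illustrated in a few
lines for the propositional case. This toy code is our illustration only;
the thesis itself works in full first-order logic, and the function names
are our inventions.]

```python
from itertools import product

def models_of(formula, atoms):
    """All truth assignments over `atoms` satisfying `formula`
    (a predicate on dicts mapping atom name -> bool)."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def update(db_models, formula, atoms):
    """Possible-models-style update: each world in the database is
    replaced by the formula-models closest to it (fewest atoms flipped)."""
    sat = models_of(formula, atoms)
    result = []
    for world in db_models:
        diffs = [sum(world[a] != m[a] for a in atoms) for m in sat]
        d = min(diffs)
        for m, k in zip(sat, diffs):
            if k == d and m not in result:
                result.append(m)
    return result
```

With the database holding the single world {a, not-b, not-c}, updating with
"b or c" yields the two worlds that flip exactly one of b, c -- and a stays
true, i.e. unrelated information is not disturbed.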
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Parallel Architectures for Knowledge Bases (SMU)
Toward Computer Architectures for Database and Knowledge Base Processing
Computer Science and Engineering Seminar, Friday, March 14, 1986
Speaker: Lubomir Bic
University of California at Irvine
Location: 315SIC
Time: 3:00 PM
The importance of parallelism has been recognized in recent years and
a number of multiprocessor architectures claiming suitability to
intelligent data and knowledge base processing have been proposed.
The success of these architectures has been, in most cases, rather
modest. The message conveyed in this talk is that, in order to build
highly-parallel computer architectures, new models of computation
capable of exploiting the potential of large numbers of processing
elements and memory units must first be developed. To support this
claim, two such models -- one for processing queries in a
network-oriented database system and another for extracting
information from a logic-based knowledge representation system -- will
be outlined. Both models are based on the principles of asynchronous
data-driven computation, which eliminate the need for centralized
control and shared memory.
------------------------------
Date: Sat, 8 Mar 86 15:29:19 est
From: walker@mouton.ARPA (Don Walker at mouton.ARPA)
Subject: Conference - 1987 Linguistics Institute
1987 LINGUISTICS INSTITUTE
STANFORD UNIVERSITY
The 1987 Summer Institute of the Linguistic Society of America will be
hosted by the Linguistics Department of Stanford University, from June
29 to August 7, 1987. It is co-sponsored by the Association for
Computational Linguistics.
The theme of the Institute, "Contextual and Computational Dimensions
of Language", reflects the ever-growing interest in
integrating theories of linguistic structure with theories of language
processing and models of how language conveys information in context.
The aim is to provide a forum in which it is possible to integrate a
variety of linguistic traditions, particularly linguistic theory,
computational linguistics, discourse analysis, psycholinguistics,
sociolinguistics, and artificial intelligence.
Several different kinds of courses and activities will be offered
during the six-week period of the Institute:
(i) A series of overview classes in the main subareas of
linguistics (six weeks, 3 units)
(ii) A series of one-week intensive classes intended to provide
background for the four-week courses and seminars below (June 29-July
3, 1 unit)
(iii) Four-week classes on topics related directly to the theme
of the Institute (July 13-August 7, 2 units)
(iv) Several seminars associated with research workshops will
run throughout the last four weeks. These can be taken for credit, as
part of the Stanford "directed research" program (subject to prior
approval of the workshop leader) (up to three units)
(v) A series of Wednesday lectures (e.g., on the Synthesis of
Approaches to Discourse), involving Institute participants and invited
visitors
(vi) The Association for Computational Linguistics will hold its
annual meeting during the second week of the Institute (July 6-10).
1987 marks the first time in recent years that two consecutive
Institutes have been held with the same theme. This complementarity
of the 1986 Institute held at the City University of New York and the
1987 Institute reflects remarkable changes taking place today in the
field of linguistics. Taken together, the Institutes provide the
depth and diversity necessary to cover the newly emerging subfields
and to teach the range of interdisciplinary tools and knowledge so
fundamental to new theoretical approaches. The 1987 Institute at
Stanford differs from the 1986 Institute primarily in specific course
offerings and faculty and in its focus on providing a rich
interdisciplinary research as well as teaching environment. Many of
the instructors will also be participating in research groups; in
general they will teach only one course.
The Executive planning committee is: Ivan Sag (Director), Ellen
Prince (Associate Director), Marlys Macken, Peter Sells, and Elizabeth
Traugott. David Perlmutter will be the Sapir Professor, and Joseph
Greenberg the Collitz Professor of the 1987 Institute.
For more information, write 1987 LSA Institute, Department of
Linguistics, Stanford University, Stanford, California 94305.
Preliminary List of Institute Faculty:
Judith Aissen
Elaine Anderson
Stephen Anderson
Philip Baldi
Jon Barwise
Joan Bresnan
Gennaro Chierchia
Kenneth Church
Eve Clark
Herbert Clark
Nick Clements
Charles Clifton
Philip Cohen
Robin Cooper
William Croft
Penelope Eckert
Elisabet Engdahl
Charles Ferguson
Charles Fillmore
Joshua Fishman
Lyn Frazier
Victoria Fromkin
J. Mark Gawron
Gerald Gazdar
Joseph Greenberg
Barbara Grosz
Jorge Hankamer
Jerry Hobbs
Paul Hopper
Larry Horn
Philip Johnson-Laird
Ron Kaplan
Lauri Karttunen
Martin Kay
Paul Kay
Paul Kiparsky
William Ladusaw
William Leben
Steve Levinson
Mark Liberman
Marlys Macken
William Marslen-Wilson
John McCarthy
Nils Nilsson
Barbara Partee
Fernando Pereira
David Perlmutter
Ray Perrault
Stanley Peters
Carl Pollard
William Poser
Ellen Prince
Geoffrey Pullum
John Rickford
Luigi Rizzi
Ivan Sag
Deborah Schiffrin
Peter Sells
Stuart Shieber
Candace Sidner
Brian Smith
Donca Steriade
Susan Stucky
Michael Tanenhaus
Elizabeth Traugott
Peter Trudgill
Lorraine Tyler
Thomas Wasow
Terry Winograd
Annie Zaenen
Arnold Zwicky
------------------------------
End of AIList Digest
********************
∂10-Mar-86 1800 LAWS@SRI-AI.ARPA AIList Digest V4 #48
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Mar 86 18:00:30 PST
Date: Mon 10 Mar 1986 12:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #48
To: AIList@SRI-AI
AIList Digest Tuesday, 11 Mar 1986 Volume 4 : Issue 48
Today's Topics:
Bibliographies - AI Applications & Robotics and Manufacturing Automation
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography - AI Applications
definitions
D BOOK22 Applications of Artificial Intelligence\
%I Society of Photo-Optical Instrumentation Engineers\
%D 1-3 April 1986\
%N 635\
%C Orlando
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
%A J. F. Gilmore
%A K. Pulaski
%T Comparative Analysis of Expert System Tools
%B BOOK22
%K AI01 T03
%A Y.-C. You
%T Expert System for Model Management
%B BOOK22
%K AI01
%A J. R. Slagle
%T Expert System for Treadmill Exercise ECG Test Analysis
%B BOOK22
%K AI01 AA01
%A D. L. Tobat
%A S. K. Rogers
%A S. E. Cross
%T SENTiNEL: An Expert System Decision Aid for a Command, Control and
Communication Operator
%B BOOK22
%K AI01 AA18
%A M. F. Doherty
%A C. M. Bjorklund
%A R. H. Laprade
%A M. T. Noga
%A C. Y. Yang
%T Improved Cartographic Classifications via Expert Systems
%B BOOK22
%K AI01
%A D. Ho
%A K. Pulaski
%T GEST: Generic Expert System Tool
%B BOOK22
%K AI01 T03
%A G. A. Roberts
%T Expert System for Labeling Segments in FLIR Imagery
%B BOOK22
%K AI01 AI06
%A R. M. Ali
%A D. A. Scharnhorst
%A C.-S. Ai
%A H. J. Ferber
%T Forward Chaining Versus a Graph Approach as the Inference Engine in
Expert Systems
%B BOOK22
%K AI01
%A A. Bravos
%T Application of the CSAL Language to the Design of Diagnostic Expert
Systems: the MOODIS (mood disorder) Experience
%B BOOK22
%K AI01 AA01 psychology
%A D. D. Dankel II
%A R. V. Rodriguez
%A F. D. Anger
%T HAIM OMLET: Expert System Research Tool for Discrete Structures
%B BOOK22
%K AI01
%A A. P. Levine
%T ESP: Expert System for Computer Performance Management
%B BOOK22
%K AI01 AA08
%A J. C. Esteva
%A R. G. Reynolds
%T Real-Time Knowledge Base Deviation Diagnostic Expert Systems
%B BOOK22
%K AI01
%A G. Drastal
%A T. DuBois
%A L. McAndrews
%A N. Straguzzi
%A S. Raatz
%T Economy in Expert System Development: Aegis Combat System Maintenance Advisor
%B BOOK22
%K AI01 AA18 O02
%A B. Korel
%T Program Error Localization Expert System
%B BOOK22
%K AI01 AA08
%A G. Y. Tang
%T Expert System Makes Image Processing Easier
%B BOOK22
%K AI01 AI06
%A R. K. Eisley
%A M. S. Lan
%T Expert Measurement System for Ultrasonically Characterizing
Material Properties
%B BOOK22
%K AI01 AA05
%A J. Aloimonos
%A A. Basu
%T Shape and Motion From Contour Without Correspondence
%B BOOK22
%K AI06
%A A. Stevenson
%A M. Fox
%A M. Rabin
%T TESS: Tactical Expert System
%B BOOK22
%K AI01
%A M. V. Orman
%T Modified Hough Transform for Finding Lines in an Edge Map
%B BOOK22
%K AI06
%A G. Bilbro
%A W. Snyder
%T System to Recognize Objects in 3D Images
%B BOOK22
%K AI06
%A T. C. Rearick
%T Real-time Image Understanding
%B BOOK22
%K AI06
%A A. M. Darwish
%A A. K. Jain
%T Rule Based System for Automated Industrial Inspection
%B BOOK22
%K AI06 AI01
%A N. C. Griswold
%A C. P. Jeh
%T Stereo Model Based on Mechanisms of Human Binocular Vision
%B BOOK22
%K AI06 AI08
%A R. S. Loe
%A T. J. Laffey
%T Measurement of the 3D Radius of Curvature Using the Facet Approach
%B BOOK22
%K AI06
%A D. K. Walters
%T Object Interpretation Using Boundary-Based Perceptually Valid Features
%B BOOK22
%K AI06
%A S. Tynor
%A C. C. Tsang
%A K. Gingher
%T VEST: Visual Expert System Testbed
%B BOOK22
%K AI06 AI01
%A J. Aloimonos
%A A. Bandyopadhyay
%T Perception of 3D Motion Without Correspondence
%B BOOK22
%K AI06
%A J. Merchant
%A T. J. Boyd
%T Flexible Template Matching for Autonomous Classification
%B BOOK22
%K AI06
%A J. H. Nurre
%A E. L. Hall
%T Error Analysis for a Two-Camera Stereo Vision System
%B BOOK22
%K AI06
%A Y. J. Tejwani
%T Logical Basis in the Layered Computer Vision Systems Model
%B BOOK22
%K AI06
%A G. G. Pieroni
%A O. G. Johnson
%T Computer Vision System for Understanding the Movement of a Wave Field
%B BOOK22
%K AI06
%A R. Y. Li
%T Hough Transform Approach for Cylinder Detection in Range Image
%B BOOK22
%K AI06
%A A. Semeco
%A B. Williams
%A S. Roth
%T GENSCHED: Real-World Hierarchical Planning System
%B BOOK22
%K AI09
%A R. W. McLaren
%A H.-Y. Lin
%T Knowledge-Based Approach to Ship Identification
%B BOOK22
%K AA18
%A M. Ragheb
%A D. Gvillo
%T Development of Knowledge-Based Fault Identification Systems on
Microcomputers
%B BOOK22
%K AA21
%A E. R. Addison
%T Design Issues for a Knowledge-Based Controller for a Track-While-Scan
Radar System
%B BOOK22
%K AA19
%A Z. Zhang
%A M. Simaan
%T Rule Based Supported Interpretation of Signal Images
%B BOOK22
%K AI06 AI01
%A C. L. Huang
%A J. T. Tou
%T Knowledge-Based Functional Symbol Understanding in Electronic
Circuit Diagram Interpretation
%B BOOK22
%K AA04
%A P. E. Green
%T Resource Limitation Issues in Real-Time Intelligent Systems
%B BOOK22
%K O03
%A K. S. Gill
%T Knowledge Based System for Education and Training
%B BOOK22
%K AA07
%A S. Tulpule
%A C. Knapp
%T Classification of Textured Surfaces Based on Reflection Data
%B BOOK22
%K AI06
%A T. Y. Young
%A S. Gunasekaran
%T Three-Dimensional Motion Analysis Using Shape Change Information
%B BOOK22
%K AI06
%A A. Izaguirre
%A J. Summers
%T Analytical Identification of the Calibration Matrices Using the
Two Plane Model
%B BOOK22
%K AI06
%A M. Celenk
%T Gross Segmentation of Color Images of Natural Scenes for Computer
Vision Systems
%B BOOK22
%K AI06
%A A. Strange
%A W. A. Fraser
%A G. A. Crockett
%T Investigation of Geometric Features
%B BOOK22
%K AI06
%A T. Miltonberger
%A H. Muller
%T True 2D Edge Detector
%B BOOK22
%K AI06
%A P. Bashir
%T Textured Image Segmentation
%B BOOK22
%K AI06
%A F. S. Cohen
%A Z. Fan
%T Segmentation and Global Parameter Estimation of Textured Images
Modelled by Unknown Gaussian Markov Random Fields
%B BOOK22
%K AI06
%A M. Ragheb
%A D. Gvillo
%T Heuristic Simulation of Engineering Systems on A Supercomputer
%B BOOK22
%K H04
%A R. E. Neapoliton
%T Models for Reasoning Under Uncertainty
%B BOOK22
%K O04
%A Y. Cheng
%A R. L. Kashyap
%T Study of the Different Methods for Combining Evidence
%B BOOK22
%K O04
%A Y. J. Tejwani
%T Decision Support for Fuzzy Processes: A Prolog Assistant
%B BOOK22
%K O04 T02 AI13
%A H. Nordin
%T Using Typical Cases for Knowledge Based Consultation and
Teaching
%B BOOK22
%K AA07
%A H. Krishnamurthy
%T Conceptual Clustering Scheme for Frame-Based Knowledge Organization
%B BOOK22
%K AI04
%A L. M. Fu
%T Utility Measurement of a Decision Rule with Uncertainty
%B BOOK22
%K O04 AI13 AI01
%A B. J. Garner
%A E. Tsui
%T Extendable Graph Processor for Knowledge Engineering
%B BOOK22
%A D. Gillies
%A A. Howson
%T Caused Based Methods of Knowledge Representation and Its Application to
Lift Scheduling
%B BOOK22
%A K. Y. Huang
%A K. S. Fu
%A Z. S. Lin
%T Automatic Linking Processing of Seismogram Using Branch and Bound
%B BOOK22
%K AA03 AI03
%A P. L. Love
%T Automatic Recognition of Primitive Changes in Manufacturing
Process Signals
%B BOOK22
%K AA05 AI06
%A R. Yoshii
%T Robust Machine Translation System
%B BOOK22
%K AI02
%A T. Li
%A L. Y. Fang
%T Computer Assisted Two-Way Diagnosis in Traditional Chinese Medicine
%B BOOK22
%K AA01
%A J. J. Cannat
%A Y. Kodratoff
%T Machine Learning and Recognition of Multifont Printed Characters
%B BOOK22
%K AI06
%A M. Nakashima
%A T. Koezuka
%A N. Horaoka
%A T. Inagaki
%T Automatic Pattern Recognition with Self-Learning Algorithm Based
on Featured Template Matching
%B BOOK22
%K AI04 AI06
%A L. Lafferty
%A D. Bridgeland
%T Scavenger: an Experimental Rete Compiler
%B BOOK22
%K AI01
%A A. Bandopadhay
%A D. H. Ballard
%T Visual Navigation by Tracking of Environmental Points
%B BOOK22
%K AI07 AI06
%A M. Herman
%T Fast Path Planning in Unstructured, Dynamic 3D Worlds
%B BOOK22
%K AI07
%A R. W. Harrigan
%T Sensor-Driven Robot Systems Testbed
%B BOOK22
%K AI06 AI07
%A P. G. Selfridge
%T Automatic 3D Reconstruction from Serial Section Electron Micrographs
%B BOOK22
%K AI06
%A F. B. Hoogterp
%A S. A. Caito
%T Knowledge Acquisition for Autonomous Navigation
%B BOOK22
%K AI06 AA18 AA19
%A T. Unti
%A C. C. Tsai
%T Optical System Alignment Using Robotics
%B BOOK22
%K AI07
%A C. Isik
%A A. Meystel
%T Structure of a Fuzzy Production System for Autonomous Robot
Control
%B BOOK22
%K AI06 AI01 O04
%A K. Bae
%T Determination of the Most Probable Point from Nonconcurrent Lines
%B BOOK22
%A B. W. Suter
%A K. D. Reilly
%T Integrated VLSI Design Environment
%B BOOK22
%K AA04
%A D. K. Fronek
%T Real-Time Computer Vision Intelligent Hardware
%B BOOK22
%K AI06 O03
%A W. J. McClay
%A P. J. MacVicar-Whelan
%T AI-Based Process Implementation
%B BOOK22
%K AA05
%A D. T. Politis
%A W. H. Licata
%T Adaptive Decoder for an Adaptive Learning Controller
%B BOOK22
%K AI04
%A M. Adjouadi
%T Discrimination of Upright Objects from Flat-Lying Objects in
Automated Guidance of Roving Robots
%B BOOK22
%K AI07
%A B. G. Gayle
%A D. Dankel
%T RxPERT: Intelligent Computer System for Drug Interactions
%B BOOK22
%K AA01
%A J. Hong
%T Extension Matrix Approach to the General Covering Problem
%B BOOK22
%A J. Dwyer
%T Transitive Model for AI Applications
%B BOOK22
%A E. T. Whitaker
%A M. N. Huhns
%T Rule-based Geometrical Reasoning for the Interpretation of Line Drawings
%B BOOK22
%K AA04 AI01 AI06
%A W. P. C. HO
%T Intelligent Computer-Aided Design by Modeling Chip Layout as a
Metaplanning Problem
%B BOOK22
%K AA04 AI09
%A D. R. Wheeler
%T Forecasting Artificial Intelligence Demand
%B BOOK22
%K AT04
%A M. Mathews
%A C. Poinsette
%T Intelligent Tutor for Elementary Spanish
%B BOOK22
%K AI02 AA07
%A C. Y. Sheu
%T Well Performed Systems
%B BOOK22
%A A. Imamiya
%A A. Kondoh
%T Embedding an Explanation System within a User Interface
%B BOOK22
%K O01 AA15 AI02
%A E. P. L. Passos
%T Prolog's Start Out in Brazil
%B BOOK22
%K T02
%A A. Hall
%T Use of Prolog in Automatic Speech Recognition
%B BOOK22
%K T02 AI05
%A B. Unger
%A S. Siegel
%T Modular Hardware which Allows Flexible Implementation of Combinations
of Vision Processing Approaches
%B BOOK22
%K AI06
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography - Robotics and Manufacturing Automation
definitions
D BOOK21 Robotics and Manufacturing Automation\
%I American Society of Mechanical Engineers\
%E M. Donath\
%E M. Leu\
%D 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
%A R. M. Goor
%T A New Approach to Minimum-time Robot Control
%B BOOK21
%K AI07
%A T. Watanabe
%A M. Kametani
%A K. Kawata
%A K. Tetsuya
%T Improvement of the Computing Time of Robot Manipulators Using a
Multi-microprocessor
%B BOOK21
%K AI07 H03
%A T. Yabuta
%A T. Tsujimura
%A T. Morimitsu
%T A Manipulator Control Method Using a Shape Recognition System with an
Ultrasonic Distance Sensor
%B BOOK21
%K AI07 AI06
%A Y. Stepanenko
%T On Modal Control of Robotic Manipulators
%B BOOK21
%K AI07
%A J. Y. S. Luh
%A Y. L. Gu
%T Efficiency and Flexibility of Industrial Robots with Redundancy
%B BOOK21
%K AI07
%A G. M. Chaoui
%A W. J. Palm
%T Active Compliance Control Strategies for Robotic Assembly Applications
%B BOOK21
%K AI07
%A F. W. Paul
%A J. K. Parker
%T Active Industrial Robot End-effector Control Design Strategy for
Manufacturing Applications
%B BOOK21
%K AI07
%A J. K. Parker
%A F. W. Paul
%T Impact Force Control in Robot Hand Design
%B BOOK21
%K AI07
%A R. Vossoughi
%A M. Donath
%T Robot Hand Impedance Control in the Presence of Mechanical Nonlinearities
%B BOOK21
%K AI07
%A H. Asada
%A N. Goldfine
%T Process Analysis and Compliance Design for Grinding with Robots
%B BOOK21
%K AI07
%A D. Brock
%A S. Chiu
%T Environment Perception of an Articulated Robot Hand Using Contact Sensors
%B BOOK21
%K AI07
%A W. J. Book
%A S. L. Dickerson
%A G. Hastings
%A S. Cetinkunt
%A T. Alberts
%T Combined Approaches to Lightweight Arm Utilization
%B BOOK21
%K AI07
%A R. P. Singh
%A P. W. Likins
%A R. J. VanderVoort
%T Automated Dynamics and Control Analysis of Constrained Multibody System
%B BOOK21
%K AI07
%A D. R. Meldrum
%A M. J. Balas
%T Direct Adaptive Control of a Flexible Remote Manipulator Arm
%B BOOK21
%K AI07
%A D. A. Streit
%A C. M. Krousgrill
%A A. K. Bajaj
%T Dynamic Stability of Flexible Manipulators Performing Repetitive Tasks
%B BOOK21
%K AI07
%A M. C. Leu
%A V. Kukovski
%A K. K. Wang
%T An Analytical and Experimental Study of the Stiffness of Robot Manipulators
with Parallel Mechanisms
%B BOOK21
%K AI07
%A K. Youcef-Toumi
%A H. Asada
%T The Design of Arm Linkages with Decoupled and Configuration-Invariant
Inertia Tensors: Part I: Open Kinematic Chains with Serial Drive Mechanisms
%B BOOK21
%K AI07
%A K. Youcef-Toumi
%A H. Asada
%T The Design of Arm Linkages with Decoupled and Configuration-Invariant
Inertia Tensors: Part II: Actuator Relocation and Mass Redistribution
%B BOOK21
%K AI07
%A E. Vaaler
%A W. P. Seering
%T Design of a Cartesian Robot
%B BOOK21
%K AI07
%A O. Khatib
%A J. Burdick
%T Dynamic Optimization in Manipulator Design: The Operational Space
Formulation
%B BOOK21
%K AI07
%A H. West
%A H. Asada
%T Kinematic Analysis and Mechanical Advantage of Manipulators Constrained
by Contact with the Environment
%B BOOK21
%K AI07
%A J. M. Hollerbach
%T Evaluation of Redundant Manipulators Derived from the PUMA Geometry
%B BOOK21
%K AI07
%A Y. Nakamura
%A H. Hanafusa
%T Inverse Kinematic Solutions with Singularity Robustness for Robot
Manipulator Control
%B BOOK21
%K AI07
%A J. A. Apkarian
%A A. A. Goldenberg
%A H. W. Smith
%T An Approach to Kinematics Control of Robot Manipulator
%B BOOK21
%K AI07
%A T. J. Fougere
%A S. D. Chawla
%A J. J. Kanerva
%T Robot-Sim: A CAD-based Workcell Design and Off-line Programming System
%B BOOK21
%K AI07
%A C. Goad
%T Robot and Vision Programming in Robocam
%B BOOK21
%K AI07 AI06
%A R. Jayaraman
%T GALOP/2D: A Graphical System for Workcell Layout Evaluation
%B BOOK21
%K AI07 AA05
%A D. Bailey
%A S. Derby
%A M. Steiner
%T Computer-integrated System for Design and Assembly of Cable Harnesses:
Part I: Design and Applications
%B BOOK21
%K AI07 AA05
%A D. Bailey
%A S. Derby
%A M. Steiner
%T Computer-integrated System for Design and Assembly of Cable Harnesses:
Part II: Algorithms
%B BOOK21
%K AI07 AA05
%A M. C. Weinstein
%A M. C. Leu
%A F. A. Infelise
%T Design and Analysis of Robotic Assembly for a Printer Compensation Arm
%B BOOK21
%K AI07
%A H. Asada
%A A. Fields
%T Design of Flexible Fixtures Reconfigured by Robot Manipulators
%B BOOK21
%K AI07
%A B. O. Wood
%A P. H. Cohen
%A D. J. Medeiros
%A J. L. Goodrich
%T Design for Robotic Assembly
%B BOOK21
%K AI07
%A K. Nishimura
%A M. Nakaga
%A H. Kawasaki
%T Mechanism and Control of a Page-Turning Robot
%B BOOK21
%K AI07
%A H. Asada
%A S. K. Lim
%T Design of Joint Torque Sensors and Torque Feedback Control for Direct-Drive
Arms
%B BOOK21
%K AI07
%A J. Pawletko
%A D. Manzer
%A J. Ish-Shalom
%T A Direct-Drive Actuator for Cartesian Robots
%B BOOK21
%K AI07
%A R. L. Hollis
%T Design for a Planar XY Robotic Fine-Positioning Device
%B BOOK21
%K AI07
------------------------------
End of AIList Digest
********************
∂10-Mar-86 2039 LAWS@SRI-AI.ARPA AIList Digest V4 #49
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Mar 86 20:39:27 PST
Date: Mon 10 Mar 1986 13:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #49
To: AIList@SRI-AI
AIList Digest Tuesday, 11 Mar 1986 Volume 4 : Issue 49
Today's Topics:
Bibliography - Misc. AI
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography - Misc. AI
definitions
D BOOK19 Applications of Knowledge-Based Systems to Engineering Analysis
and Design\
%I American Society of Mechanical Engineers\
%E C. L. Dym\
%D 1986
D BOOK20 Computer-Aided/Intelligent Process Planning\
%I American Society of Mechanical Engineers\
%E C. R. Liu\
%E T. C. Chang\
%E R. Komanduri\
%D 1986
D MAG9a IEEE Journal of Robotics and Automation\
%V RA-1\
%N 4\
%D DEC 1985
D BOOK23 Hybrid Image Processing\
%I Society of Photo-Optical Instrumentation Engineers\
%D 1-2 April 1986\
%N 638\
%C Orlando
D MAG9 Bulletin of the Japan Society of Mechanical Engineers\
%V 29\
%N 247\
%D 1986
D MAG10 Industrial and Process Control Magazine\
%V 59\
%N 1\
%D January 1986
D MAG11 Information and Control\
%V 65\
%N 2-3\
%D MAY-JUN 1985
D MAG12 Journal of the ACM\
%V 33\
%N 1\
%D JAN 1986
D MAG13 Robotics\
%V 1\
%N 1\
%D MAY 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
%A W. Mark
%T Knowledge Based Interface Design
%B User Centered System Design
%E Donald A. Norman
%E Stephen W. Draper
%I Lawrence A. Erlbaum and Associates
%D 1986
%A S. J. Fenves
%T A Framework for a Knowledge-based Finite Element Analysis Assistant
%B BOOK19
%K AA05
%A F. S. Chehayeb
%A J. J. Connor
%A J. H. Slater
%T An Environment for Building Engineering Knowledge-based Systems
%B BOOK19
%K AA05
%A J. R. Dixon
%A E. C. Libardi, Jr.
%A S. C. Luby
%A M. Vaghul
%A M. K. Simmons
%T Expert Systems for Mechanical Design: Examples of Symbolic Representations of
Design Geometries
%B BOOK19
%K AA05
%A R. E. Levitt
%A J. C. Kunz
%T A Knowledge-based System for Updating Engineering Project Schedules
%B BOOK19
%K AA05
%A J. R. Zumsteg
%A D. L. Flags
%T Knowledge-based Analysis and Design Systems for Aerospace Structures
%B BOOK19
%K AA05
%A V. E. Hampel
%A B. Garner
%A J. R. Matthews
%T Intelligent Gateway Processors as Integrators of CAD/CAM Networks
%B BOOK19
%K AA05
%A S. Mittal
%A C. L. Dym
%A M. Morjaria
%T PRIDE: An Expert System for the Design of Paper-Handling Systems
%B BOOK19
%K AA05 AI01
%A D. R. Rehak
%T SITECHAR: An Expert System Component of a Geotechnical Site Characterization
Workbench
%B BOOK19
%K AA05
%A D. Pecora
%A J. R. Zumsteg
%A F. W. Crossman
%T An Application of Expert Systems to Composite Structural Design and Analysis
%B BOOK19
%A G. Eshel
%A M. Barash
%A T. C. Chang
%T A Rule-based System for Automatic Generation of Deep-Drawing Process Outlines
%B BOOK20
%K AA05
%A Y. Lagoude
%A J. P. Tsang
%T A Plan Representation Structure for Expert Planning Systems
%B BOOK20
%K AA05 AI01
%A R. H. Phillips
%A V. Arunthavanathan
%A X. D. Zhou
%T Symbolic Representation of CAD Data for Artificial Intelligence-based
Process Planning
%B BOOK20
%K AA05
%A V. R. Milacic
%T SAPT-Expert System for Manufacturing Processing Planning
%B BOOK20
%K AA05 AI01
%A W. Eversheim
%A J. Schultz
%T Strategies of Process Selection for Different Applications of Computer-aided
Process Planning
%B BOOK20
%K AA05
%A D. S. Nau
%A T. C. Chang
%T A Knowledge-Based Approach to Generative Process Planning
%B BOOK20
%K AA05
%A P. M. Ferreira
%A B. Kochar
%A C. R. Liu
%A V. Chandru
%T AIFIX: An Expert System Approach to Fixture Design
%B BOOK20
%K AA05 AI01
%A E. T. Sanii
%A J. I. ElGomayel
%T Classification and Coding of Cutting Tools
%B BOOK20
%K AA05
%A H. J. Steudel
%A G. V. Tollers
%T A Decision-Table--based Guide for Evaluating Computer-Aided Processing
Planning Systems
%B BOOK20
%K AA05
%A K. Iwata
%A N. Sugimura
%T An Integrated CAD/CAPP System with Know-Hows on Machining Accuracies
of Parts
%B BOOK20
%K AA05
%A G. Eshel
%A M. Barash
%A K. S. Fu
%T Generating the Inclusive Test Rule in a Rule-based System for Process
Planning
%B BOOK20
%K AA05
%A Y. C. Ho
%A X. R. Cao
%T Performance Sensitivity to Routing Changes in Queuing Networks and
Flexible Manufacturing Systems Using Perturbation Analysis
%J MAG9a
%P 165-172
%K AA05
%A R. Nigam
%A C. S. G. Lee
%T A Multiprocessor-Based Controller for the Control of Mechanical Manipulators
%J MAG9a
%P 173-182
%K AI07
%A M. Kaneko
%A M. Abe
%A K. Tanie
%T A Hexapod Walking Machine with Decoupled Freedoms
%J MAG9a
%P 183-190
%K AI07
%A M. K. Brown
%T Feature Extraction Techniques for Recognizing Solid Objects with an
Ultrasonic Range Sensor
%J MAG9a
%P 191-205
%K AI06
%A W. Holzmann
%A J. M. McCarthy
%T Computing the Friction Forces Associated with a Three-Fingered Grasp
%J MAG9a
%P 206-210
%K AI07
%A J. M. Abel
%A W. Holzmann
%A J. M. McCarthy
%T On Grasping Objects with Two Articulated Fingers
%J MAG9a
%P 211-214
%K AI07
%A H. W. Mergler
%T Review of Introduction to Robotics, by A. J. Critchlow
%J MAG9a
%P 215
%K AI07
%A A. L. Pai
%T Review of Recent Advances in Robotics, edited by G. Beni and S. Hackwood
%J MAG9a
%P 215
%K AI07
%A K. G. Lieb
%A J. C. Mendelsohn
%T Robotic Vision Tray Picking System Design Using Multiple Optical
Matched Filters
%B BOOK23
%K AI06 AI07
%A J. C. Mendelsohn
%A D. C. Englund
%T Multiple Optical Filter Design, Simulation Results
%B BOOK23
%K AI06 AI07
%A F. T. S. Yu
%A M. F. Cao
%T Automatic Real-Time Optical Pattern Recognition Processing System
%B BOOK23
%K O03 AI06
%A R. Juday
%T Optical Correlator Use at Johnson Space Center
%B BOOK23
%K AI06
%A G. Eichman
%A T. Kasparis
%T Texture Classification Using the Hough Transform
%B BOOK23
%K AI06
%A D. Casasent
%A S. Liebowitz
%T Hierarchical M-DOF Optical Artificial Intelligence Correlation Processor
%B BOOK23
%K AI06
%A G. Eichmann
%A M. Jalowsky
%T Shape Description Using an Associative Memory
%B BOOK23
%K AI06
%A B. Montgomery
%A B. V. K. Vijaya Kumar
%T Nearest Neighbor Non-iterative Error-correcting Optical Associative
Processor
%B BOOK23
%K AI06
%A D. A. Jared
%A D. J. Ennis
%T Learned Distortion Invariant Pattern Recognition Using SDFs
%B BOOK23
%K AI06
%A D. W. Sweeney
%A G. F. Schlis
%T Iteratively Designed 3D Optical Correlation Filters for Distortion Invariant
Recognition
%B BOOK23
%K AI06
%A C. L. Tan
%A W. N. Martin
%T Hierarchical Structures, Parallelism, and Planning in Analyzing Time
Varying Images
%B BOOK23
%K AI06 AI09
%A A. A. Tvirbutas
%A C. A. McPherson
%A B. E. Hines
%T Characteristics and Limitations of Image Acquisition Systems
%B BOOK23
%K AI06
%A K. Morita
%A K. Asai
%T Fingerprint Identification Terminal for Personal Identification
%B BOOK23
%K AI06
%A V. E. Diehl
%T Use of Complementary Analog and Digital Processing in the
Removal of Local Background in Low Contrast Images
%B BOOK23
%K AI06
%A A. Oosterlinck
%T Comparison of Optical and Digital Image Processing
Techniques in Visual Inspection and Robotic Vision
%B BOOK23
%K AI07 AI06
%A M. S. Schmaltz
%A F. Caimi
%T Shift-Invariant Recognition of Deformed Ship Silhouettes at
Multiple Resolution Scales
%B BOOK23
%K AI07 AI06
%A Masaki Yokoyama
%A Hirohiko Shibuya
%A Rae-Kyung Park
%T A Basic Study of the Automated Generation of Machine Structures
(1st Report, Graphical Description of the Functional Structure of Machines)
%J MAG9
%P 295-300
%K AA05
%A Ikuo Ito
%A Takao Onozawa
%T An Intelligent Aspect of CAD for Mechanical Design
(The Conceptual Design of a Simple Object)
%J MAG9
%P 301
%K AA05
%A Lowell Hawkinson
%T LISP and LISP Machines: Tools for AI Programming
%J MAG10
%P 37
%K T01 H02
%A Rich Merritt
%T Artificial Intelligence Tackles Industrial Tasks
%J MAG10
%P 41
%A John Grant
%A Jack Minker
%T Normalization and Axiomatization for Numerical Dependencies
%J Information and Control
%V 65
%N 1
%D APR 1985
%P 1-17
%A R. Statman
%T Logical Relations and the Typed Lambda-Calculus
%J MAG11
%P 85-97
%K AI14
%A A. J. Kfoury
%T Definability by Deterministic and Non-deterministic Programs
(with Applications to First Order Dynamic Logic)
%J MAG11
%P 98-121
%K AI11 AA08 AI14
%A Nachum Dershowitz
%T Computing with Rewrite Rule Systems
%J MAG11
%P 122-157
%K AI11 AI10 AI14
%A David A. Plaisted
%T Semantic Confluence Tests and Completion Methods
%J MAG11
%P 182
%K AI11 AI10 AI14
%A K. Melhorn
%A P. Preparata
%T Routing Through a Rectangle
%J MAG12
%P 60-86
%K AA04
%A Zohar Manna
%A Richard Waldinger
%T Special Relations in Automated Deduction
%J MAG12
%P 1-59
%K AI14
%A C. S. G. Lee
%A R. C. Gonzales
%A K. S. Fu
%T Tutorial: Robotics
%I IEEE Press
%D NOV 1983
%K AT15 AI07
%X list price $39.00 member price $24.00 ISBN 0-8186-0515-4
%A Sargur N. Srihari
%T Tutorial: Computer Text Recognition and Error Correction
%I IEEE Press
%D JAN 1985
%K AI06 AT15
%X list price $36.00 member price $24.00 ISBN 0-8186-0579-0
%T Proceedings: Second Conference on Artificial Intelligence Applications
%I IEEE Press
%D DEC 1985
%K AT15
%X list price $75.00 member price $37.50 ISBN 0-8186-0688-6
%T Proceedings: Expert Systems in Government
%I IEEE Press
%D OCT 1985
%K AT15 AI01
%X list price $70.00 member price $35.00 ISBN 0-8186-0686-X
%T Proceedings: Third Workshop on Computer Vision
%I IEEE Press
%D OCT 1985
%K AI06 AT15
%X list price $36.00 member price $18.00 ISBN 0-8186-0685-1
%T Proceedings: 1985 Symposium on Logic Programming
%I IEEE Press
%D JULY 1985
%K AI10 AT15
%X list price $44.00 member price $22.00 ISBN 0-8186-0636-3
%T Proceedings: Conference on Computer Vision & Pattern Recognition
%I IEEE Press
%D JUNE 1985
%K AI06 AT15
%X list price $66.00 member price $33.00 ISBN 0-8186-0633-9
%T Proceedings: 1985 International Conference on Robotics and Automation
%I IEEE Press
%D MAR 1985
%K AI07 AT15
%X list price $37.50 member price $18.75 ISBN 0-8186-0659-2
%T Proceedings: Workshop on the Principles of Knowledge-Based Systems
%I IEEE Press
%D DEC 1985
%K AT15
%X OUT OF PRINT
%T Proceedings: The First Conference on Artificial Intelligence
%I IEEE Press
%D DEC 1985
%K AT15
%X OUT OF PRINT
%A W. Khalil
%A J. F. Kleinfinger
%T A Working Model for the Dynamic Control of Robots (French)
%J RAIRO-AUTOMATIQUE PRODUCTIQUE INFORMATIQUE INDUSTRIELLE
%D 1985
%V 19
%N 6
%P 561
%K AI07
%A R. H. Kirschbrown
%A R. C. Dorf
%T Karma--A Knowledge-Based Robot Manipulation System
%J MAG13
%P 3-12
%K AI07
%A K. G. Kempf
%T Manufacturing and Artificial Intelligence
%J MAG13
%P 13-25
%K AA05
%A R. M. Inigo
%A J. M. Angulo
%T Robotics Education in the University
%J MAG13
%P 37-47
%K AI07 AT18
%H PA
%A N. K. Gautier
%A S. S. Iyengar
%T Space and Time Efficiency of the Forest of Quadtrees Representation
%J Journal of Image and Vision Computing
%V 3
%D 1985
%P 63-70
%K AI06
%H PA
%A N. Gautier
%A S. S. Iyengar
%T Performance analysis of TID data structure
%J Proceedings of Computer Vision and Pattern Recognition
%P 416-419
%D 1985
%K AI06
%H PA
%A S. Iyengar
%A V. Raman
%T Properties of the Hybrid Quadtree
%J Proceedings of the 7th International Conference on Pattern Recognition
%D 1984
%P 292-294
%K AI06
%H PA
%A David Scott
%A S. S. Iyengar
%T A New Data Structure for Efficient Storing of Images
%J Pattern Recognition Letters
%V 3
%D 1985
%P 211-214
%K AI06
------------------------------
End of AIList Digest
********************
∂11-Mar-86 2017 LAWS@SRI-AI.ARPA AIList Digest V4 #50
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Mar 86 20:16:50 PST
Date: Tue 11 Mar 1986 15:10-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #50
To: AIList@SRI-AI
AIList Digest Wednesday, 12 Mar 1986 Volume 4 : Issue 50
Today's Topics:
Queries - AI Military Successes & GNU Scheme,
Linguistics - Ambiguous Sentences & Dictionary Access,
Journal - International Journal for AI in Engineering & Prices,
Methodology - Turing Test & Zen
----------------------------------------------------------------------
Date: Fri, 7 Mar 86 13:47:15 EST
From: "Dr. Ron Green" (ARO) <green@BRL.ARPA>
Subject: AI Military Successes
I would like to receive detailed information on any systems that
have been developed for the military using AI. These should not be
toy systems and they must be able to be shown to be successful.
I would prefer programs conducted for the Army but I would be interested
in discussing any service programs.
Thanks
Ron
------------------------------
Date: Fri, 7 Mar 86 08:52:43 -0100
From: dual!lll-crg!seismo!unido!gmdzi!thomas@ucbvax.berkeley.edu
(Thomas Gordon)
Subject: GNU Scheme
I'm interested in Scheme for Unix. Can you tell me how to order
GNU? Thanks for your help.
Tom Gordon
thomas@gmdzi
------------------------------
Date: Wed, 5 Mar 86 10:01:18 pst
From: sdcsvax!sdcsvax.UCSD.EDU!sdcrdcf!trwrb!trwrba!ice@ucbvax.berkeley.edu
Subject: Re: ambiguous sentences
I'm not sure that this is precisely what you are looking for,
but I remember a sentence whose meaning changes slightly when different
words are stressed:
I never said he stole that money.
I NEVER said he stole that money.
I never SAID he stole that money.
I never said HE stole that money.
I never said he STOLE that money.
I never said he stole THAT money.
I never said he stole that MONEY.
--Doug Ice.
------------------------------
Date: 06 Mar 86 18:43:18 UT (Thu)
From: "A. N. Walker" <anw%maths.nottingham.ac.uk@cs.ucl.ac.uk>
Subject: Re: ambiguous sentences
English is supposed to be right associative, so "pretty little
girls school" is (relatively) unambiguously a pretty schoolette for
girls. Similarly, "second hand book shop" should probably be as opposed
to a third automatic drug store. The other possible associations should
be obtained by hyphenation or concatenation, as "second handbook shop",
"second-hand book shop" or [the usual meaning] "secondhand-book shop".
Sadly, English has no good way of writing a third-level bracket, so
more complicated examples can be very hard to write down.
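[Walker's right-associative reading can be made mechanical. The sketch below is this editor's illustration, not part of the original posting; it simply folds a noun compound from the right to produce the "default" bracketing he describes:

```python
def right_bracket(words):
    """Group a noun compound right-associatively: w1 (w2 (w3 w4))."""
    out = words[-1]
    # fold remaining words in from the right, innermost group first
    for w in reversed(words[:-1]):
        out = f"({w} {out})"
    return out

print(right_bracket("pretty little girls school".split()))
# -> (pretty (little (girls school)))
```

The other readings ("(pretty little) (girls school)", etc.) are exactly the bracketings this fold cannot produce, which is why English falls back on hyphenation or concatenation to signal them.]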
Andy Walker,
Maths Dept, Nottingham Univ., UK.
------------------------------
Date: Fri, 7 Mar 86 11:34 EST
From: ART@GODOT.THINK.COM
Subject: Ambiguous Sentences
One of my favorites, which I seem to remember first
reading in the instructions for solving the Atlantic
magazine puzzle, is: "I fancy you have one," which
has more meanings when spoken than when written.
Art Medlar <art@think>
Thinking Machines Corporation
------------------------------
Date: Fri, 7 Mar 86 12:59:00 est
From: amsler@mouton.ARPA (Robert Amsler at mouton.ARPA)
Subject: Dictionary access
The latest information I have re: Wang's Lexical resources is
that they want a $10,000 one time fee plus $1,000/year per
resource. For that kind of money I thought there should be
some sort of update/maintenance, but apparently they are selling
them as is with no support and little documentation.
Houghton-Mifflin apparently also sells access to machine-readable
dictionaries and they appear to offer professional support for
updating them tied to their routine dictionary production.
If applications are academic non-profit use, the recommended
source would be the Oxford Archive in England. They distribute
several sources at the cost of making the tape copies.
Generally, the commercial sources offering dictionaries for free
have dried up. It is a business now. One might be able to strike
a deal with some publisher, but ``free'' access is becoming
increasingly rare if the intended use is commercial development.
------------------------------
Date: Wed 5 Mar 86 18:42:28-EST
From: SRIDHARAN@G.BBN.COM
Subject: Journal prices hit the moon!
In today's mail I received the announcement of a new journal called
International Journal for AI in engineering. Nice flashy brochure
and an international editorial board. I like the idea of a journal
appealing to several engineering disciplines and talking about practical
results in AI applications.
It will be published 4 times a year and the subscription is $130.
Will those taking part in new publishing ventures do something to keep
prices down?
Most of the work that goes into publishing a journal is done by the
researchers who produce the results and spend the effort in writing
a paper. The editorial board donates their time. The reviewers also
contribute their time. Why should all these folks make these contributions
so that the publishers can cream the market? It is time to take a stand.
The publishing industry is here to serve us; not to skin us.
------------------------------
Date: Fri, 7 Mar 86 20:17:28 pst
From: aurora!eugene@riacs.arpa (Eugene miya)
Subject: Re: The Turing Test - A Third Quantisation?
Turing in fact did propose that in his paper: that a machine could
try a discrimination of two players.
--eugene miya
NASA Ames Res. Ctr.
------------------------------
Date: Sat, 8 Mar 86 11:00:34 est
From: decwrl!pyramid!ut-sally!seismo!harvard!gcc-milo!zrm@ucbvax.berkelely.edu
(Zigurd R. Mednieks)
Subject: Re: Alan Watts on AI
The excerpt from Alan Watts is instructive. Like many who do not have
the patience to look into their own examples, he claims the source of
his hair is unfathomable and so the source of our thoughts is equally
out of our reach. He should speak only for himself. I know, to a
certain extent, how my hair grows.
Even worse, Watts clouds the issue. There is a valid point in that
even though I know how it is that I have hair, I can't alter the way
it grows. Similarly, even if I knew in great detail the causes of my
thoughts and ideas, I might not be able to alter their course.
Perhaps Zen just isn't relevant to AI.
-Zigurd
------------------------------
Date: Mon, 10 Mar 86 21:33:53 -0100
From: decwrl!pyramid!ut-sally!seismo!mcvax!inria!neumann@ucbvax.berkeley.edu
(Pierre Louis Neumann)
Subject: Re: Alan Watts on AI
Forgive my English!
There is an intellectual knowledge (more typically Western) and a
corporal one. One must "find" his proper way and place (in between)
in order to KNOW.
This place is the "dawn" or the "twilight".
------------------------------
End of AIList Digest
********************
∂12-Mar-86 1530 LAWS@SRI-AI.ARPA AIList Digest V4 #51
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Mar 86 15:30:34 PST
Date: Wed 12 Mar 1986 10:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #51
To: AIList@SRI-AI
AIList Digest Wednesday, 12 Mar 1986 Volume 4 : Issue 51
Today's Topics:
Query - Graphical Representation,
News - Turbo Prolog & TI Explorer, Apollo, and Sun Workstations &
AI Hardware Vendor Slugout
----------------------------------------------------------------------
Date: Mon, 10 Mar 86 13:16 EST
From: "Steven H. Gutfreund" <GUTFREUND%umass-cs.csnet@CSNET-RELAY.ARPA>
Subject: Request for information
I am looking for a reference. Is there some work that attempts to
produce a comprehensive study of graphical representation (schematics)
that are used by professionals. Examples would be architects, systems
analysts, industrial designers, and logistic planners. There are,
of course, civil engineers who actually go and construct scale models
of things like dams, etc, and conduct their analysis on them. But I
am looking for people who use 2-d and multidimensional paper schematics
for their analyses. Especially interesting are schematics which are not
just passive, but allow the user to carry out graphical analysis on
that chart. Something on the order of a fileVision, except that fileVision
only does data queries.
- Steven Gutfreund
gutfreund@umass-cs.csnet
[I doubt that there is a comprehensive survey, but there are some
partial ones. Woodworth's >>Graphical Simulation<< has a large
section on algebraic geometry, graphical methods for solving
differential equations, etc. I have seen books on nomograms and
a recent book (by James Martin?) on the flowcharts and other diagrams
used by programmers. Control theorists (but not the theoretical
ones!) use pole-zero charts and other graphical aids. Statisticians
use X-Bar/R charts to track quality control, Roman/Latin/etc. squares
to plan experiments, and occasionally dependency graphs to model
causal or correlational linkages. Logicians and circuit designers
use Venn diagrams and Karnaugh maps. There are books on visual thinking
and on graphs and other displays for information transfer. Two recent
books are >>The Elements of Graphing<< by William S. Cleveland and
>>The Visual Display of Quantitative Information<< by Edward R. Tufte.
Does anyone know of other particularly good surveys? -- KIL]
------------------------------
Date: Wed, 12 Mar 86 01:32 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: turbo prolog (again)
Ken and Chuck,
I sent the following message about a newly announced prolog compiler
which did not show up in either mailing list.
From: Tim Finin <Tim@UPenn> on Thu 6 Mar 1986 at 15:51, 13 lines
To: AIlist@sri-ai, PROLOG-REQUEST%su-score.arpa@CSNET-RELAY
Subj: Turbo Prolog
Date: Thu, 6 Mar 86 15:51 EST
Someone gave me a copy of a news item from Electronic Engineering Times of
March 3rd which describes a Prolog compiler for PCs that Borland Int.
(of Turbo Pascal fame) is releasing on April 15th. According to the note,
the price will be $99. Borland claims that it was clocked at 100K lips on
an IBM-PC and 300K lips on an AT! (The benchmark used was described as "a
single rule benchmark"). The dialect is described as "a superset of
Clocksin and Mellish".
The system appears to include an incremental compiler, screen editor,
support for windowing, a module capability, sound primitives and color
graphics primitives.
I assume you both thought it was too much of a plug for a new compiler with
little real significance. I disagree! It is significant for one of two
reasons, as I explain below. Note first that:
1 - Borland is a respected company making software for micros.
Their products, especially Turbo Pascal, are quite good, widely
used and very cheap. I've seen it claimed that over 500,000
copies of Turbo Pascal have been sold!
2 - Their prolog compiler seems to be reasonable from the point of
view of features.
3 - It's claimed to provide an ORDER OF MAGNITUDE improvement in
performance. The other PC-based prolog compilers claim to run
on the order of 10K to 20K lips, I think.
4 - They are claiming to sell it at an ORDER OF MAGNITUDE lower price
than the other prolog compilers for PCs.
Now - the reasons: either (1) Borland has discovered some very clever tricks
for producing much better compiled code from standard prolog or (2) they are
not playing the benchmarking game fairly. I tend to lean toward (2) but
hope that there may be a fair amount of (1) involved as well. If Turbo
Pascal weren't such a win, I'd have little hope. On the pessimistic side,
Robert Rubinoff sent me the following back-of-the-envelope analysis:
From: Robert Rubinoff <Rubinoff@UPenn> on Fri 7 Mar 1986 at 10:28,
To: Tim Finin <Tim@UPenn>
Subj: Turbo Prolog
100 Klips = .1MHZ. Now assuming that they are only using code within
one segment (which limits you to 64K), the 8088 takes about 3 cycles
for the average register instruction, and about 10-15 cycles + memory
fetch time for a memory instruction. Memory fetches take a few cycles;
I can't find where it says how much; so let's say that it's just enough
to push the average instruction time up to 15 cycles. If 2 out of 3
instructions are register instructions, we get an average of 21/3 or 7
cycles per instruction. (I think my calculations here are probably a
little low).
So if we have a 4MHz 8088, we get an instruction rate of 0.5MHz, or 5
instructions per lip. On an 8MHZ 8088, we get 10 instructions per
inference. That strikes me as not enough. Maybe they're using a
benchmark that doesn't do any unification.
And all of this (at least on the 8088 in the PC, I don't know about the
AT) requires that everything be in the same segment. If you want more
than 64K, you have to go to multiple segments, which slows things down
a lot.
I'm dubious. But we'll see, I guess.
Robert
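[Rubinoff's arithmetic can be restated as a short (modern) Python sketch. The cycle counts and the two-in-three register/memory instruction mix are his assumptions, not measured figures:

```python
# Back-of-envelope check of Borland's 100K-lips claim, following
# Rubinoff's assumed 8088 cycle counts (illustrative, not measured).
claimed_lips = 100_000          # claimed logical inferences per second
reg_cycles, mem_cycles = 3, 15  # assumed cycles per register / memory instruction

# 2 of every 3 instructions assumed to be register instructions:
avg_cycles = (2 * reg_cycles + 1 * mem_cycles) / 3   # = 7 cycles/instruction

for clock_hz in (4_000_000, 8_000_000):              # 4 MHz PC, 8 MHz AT-class
    instr_per_inference = clock_hz / avg_cycles / claimed_lips
    print(f"{clock_hz // 1_000_000} MHz 8088: "
          f"~{instr_per_inference:.1f} instructions per inference")
```

This gives roughly 5.7 and 11.4 instructions per inference; Rubinoff's 5 and 10 come from rounding the instruction rate down to 0.5 MHz. Either way, the conclusion stands: a handful of instructions per inference leaves no room for real unification.]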
Anyway, when a respectable, established company offers a basic AI tool which
jumps TWO ORDERS OF MAGNITUDE on the price/performance scale, I think it's
news! In a few months we'll either be praising the cleverness of the
Borland programmers or cursing the dishonesty of the Borland marketing
people.
Tim
[Actually, Tim's message was simply the victim of "digest delay"
and of my recent full schedule. It had come to the head of the
queue and would have been sent out today in any case. Most messages
are redistributed within a week, although humor and "special issue"
messages are sometimes saved for two weeks in order to collect a
sufficient number on the same topic. Authors of "commercial
messages" which must be rejected will receive a note from me
(unless the message has already gone out on UUCP net.ai). Tim's
message is well within the limits of acceptability (and usefulness --
thanks, Tim!). The posting which follows is more dubious, but seems
to be forwarded in a spirit of helpfulness rather than commercial PR.
A discussion has just started on WorkS, Human-Nets, and Large-List-People
that may redefine the limits of acceptability, particularly with respect
to including price information. (While price is obviously an important
spec, it has been one of the touchstones for identifying messages
with commercial intent.) -- KIL]
------------------------------
Date: Tue, 11 Mar 86 08:56 ???
From: "JERRY R. BROOKSHIRE" <BROOKSHIR%ti-eg.csnet@CSNET-RELAY.ARPA>
Subject: News Item: TI Explorer, Apollo, and Sun Workstations
The following extracts are from the Texas Instruments
internal electronic news system:
T LE;NEWS.TI.PRODUCTS.A.P01 SLE01
MON., MAR. 10, 1986 PRODUCTS AND TECHNOLOGY SECTION A
TI, APOLLO(R) PROPOSE ARTIFICIAL INTELLIGENCE ALLIANCE
AUSTIN, TEXAS - Texas Instruments and Apollo Computer Inc. today announced the
intention to enter into marketing, sales and development programs aimed at
bringing "next generation" artificial intelligence (AI) technology to the
engineering workstation market. A letter of intent signed by both companies now
lays the groundwork for the formation of a relationship that would bring TI's
leadership in AI technology to Apollo's industry-leading technical workstation.
As a first step in the proposed alliance, the companies plan to embark on a
cooperative development effort to integrate TI's Explorer(TM) LISP machine into
Apollo's DOMAIN(R) networking environment, allowing AI application developers
using Explorer to coexist on a network of Apollo workstation users. The
announcement comes shortly after Apollo's introduction of a new line of DOMAIN
workstation products.
"Apollo views AI, like graphics, as a technology that is key to a broad
range of technical application areas," said Roland Pampel, Apollo's senior vice
president of technology and marketing.
"When Apollo pioneered the workstation marketplace, the DOMAIN system's
integrated graphics capabilities provided a new dimension for application
developers," said Pampel. "We believe that AI will offer a similar leap in
application development capabilities and user productivity."
W. Joe Watson, vice president of TI's Data Systems Group, explained, "TI
has made substantial investments to build a strong AI technology base and
DSG's commercial AI products have rapidly achieved significant market success.
Teaming up with strong system vendors like Apollo will be a major step toward
expanding the use of our advanced technology in the technical computing
marketplace."
Paul Armstrong, Apollo group manager of AI, said, "Many of our customers
and solution suppliers are actively seeking ways to exploit AI technology in a
variety of areas. We are pleased to work with TI in managing the transition to
a new generation of computing."
TI houses one of the largest AI research and development centers in the
world and is a leader in the internal application of AI technologies.
T LE;NEWS.TI.PRODUCTS.A.P03 SLE01
MON., MAR. 10, 1986 PRODUCTS AND TECHNOLOGY SECTION B
TI AND SUN TO LINK AI AND UNIX WORKSTATIONS
AUSTIN, TEXAS - Texas Instruments and Sun Microsystems(R) announced today that
TI will implement Sun Microsystem's Network File System (NFS) on its
Explorer(TM) artificial intelligence (AI) workstation. The NFS implementation
will allow transparent access to files on Sun's UNIX(TM)-based workstation and
TI's LISP-based Explorer system, providing users with a development
environment that includes both AI and UNIX tools on the same network.
"NFS provides a solution to customers who want to add the Explorer's symbol-
ic processing capability to a network of Sun technical workstations running
under UNIX," said DSG vice president W. Joe Watson. "The combination of these
two complementary computers on a network provides a significant new offering
to industry."
Independent of machine type and operating system, NFS increases the usefulness
of a local area network by allowing users to easily share information
between computers from different vendors.
------------------------------
Date: 11 Mar 86 12:46 PST
From: sigart@LOGICON.ARPA
Subject: AI HARDWARE VENDOR SLUGOUT (SDSIGART & IEEE)
San Diego SIGART and San Diego IEEE Computer Society
present an
"AI HARDWARE VENDOR SLUGOUT"
ABOUT THE PROGRAM...Artificial Intelligence (AI) hardware is expensive. AI
hardware vendors are numerous and not, in general, substitutable. But AI
hardware must be bought to compete in the growing AI/expert-systems market.
This vendor gathering will allow participating vendors to describe and
display their wares, challenge each other, and be challenged by the audience.
There will be ample time for individual discussions with vendors.
ABOUT THE PARTICIPANTS...Expected participants include Symbolics Inc.,
Lisp Machine Inc.(LMI), Texas Instruments(TI) and Apollo.
TIME/PLACE...Sunday, March 23, 2:00pm at the Mandeville Auditorium at UCSD.
(parking is free and plentiful on Sundays.)
RESERVATIONS/INFORMATION...Reservations are not required. For further
information contact Bart Kosko, (619)457-5550 or Ed Weaver (619)236-5963.
ADMISSION IS FREE.
------------------------------
End of AIList Digest
********************
∂13-Mar-86 1446 LAWS@SRI-AI.ARPA AIList Digest V4 #52
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Mar 86 14:45:14 PST
Date: Thu 13 Mar 1986 10:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #52
To: AIList@SRI-AI
AIList Digest Thursday, 13 Mar 1986 Volume 4 : Issue 52
Today's Topics:
Query - Satishe Thatte Net Address,
Seminars - Interpretation of Prolog Programs (Edinburgh) &
Explanation-Based Learning (CMU) &
Referential Gestures in Guugu Yimidhirr (UCB) &
Models, Metaphysics, and Empiricism (CSLI),
Conference - Expert Systems in Process Safety
----------------------------------------------------------------------
Date: Wed, 12 Mar 86 21:53:26 PST
From: Basuki Soetarman <basuki@LOCUS.UCLA.EDU>
Subject: Satishe Thatte net address ...
>
> PERSISTENT OBJECT SYSTEM FOR SYMBOLIC COMPUTERS
> Satishe Thatte
> Texas Instruments
> Thurs. Feb 27th at 4:15 pm.
> (Part of Distributed Systems Group Project meeting)
>
>The advent of automatically managed, garbage-collected virtual memory
>was crucial to the development of today's symbolic processing. No
>analogous capability has yet been developed in the domain of
>"persistent" objects managed by a file system or database. As a
>consequence, the programmer is forced to flatten rich structures of
> ...............................
This announcement was posted some time ago in mod.ai. Does anybody
know the author's net address? Any info will be appreciated.
Thanks.
basuki@locus.ucla.edu or
..!{ucbvax,cepu,trwspp,ihnp4}!ucla-cs!basuki
------------------------------
Date: Tue, 11 Mar 86 11:58:47 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@ucl-cs.arpa>
Subject: Seminar - Interpretation of Prolog Programs (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday, 12th March 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room - F10
80 South Bridge
EDINBURGH.
Dr. C.S. Mellish, Cognitive Studies Programme, University of Sussex
will give a seminar entitled - "Interpretation of Prolog Programs".
This talk discusses work on proving properties of Prolog programs,
which has been able to derive automatically the following information:
1. Mode declarations (information about the instantiation modes in
which predicates are used).
2. Determinacy information (information about the number of solutions
that predicates can produce).
3. Information about shared structures (this can be used, for
instance, to indicate places where "occur checks" might be
desirable).
We would like to formalise our work on Prolog programs in terms of
ABSTRACT INTERPRETATIONS. The notion of using abstract
interpretations to prove properties of programs has been used
successfully with other languages (e.g. work by Cousot and Cousot,
Mycroft and Sintzoff). The basic idea is to start with a precise
description of the meaning of Prolog programs in terms of the normal
execution strategy. This description can then be given the STANDARD
INTERPRETATION, which characterises exactly what and how the program
computes but may not allow interesting properties to be proved in a
computationally feasible way. Alternatively, it can be given
consistent ABSTRACT INTERPRETATIONS, in which the program is thought of
as computing in an abstract domain where less information about the
data objects is taken account of. Results of computations in this
abstract domain then reflect properties of the program operating in the
standard way.
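[Mellish's analyses are far more elaborate, but the core notion can be seen in a toy sketch (this editor's illustration, not from the talk): re-interpret a program over an abstract domain, here the signs {neg, zero, pos}, and the abstract answer is guaranteed to describe every concrete answer without running the program on real data.

```python
# Toy abstract interpretation over the sign domain {"neg", "zero", "pos"}.
def sign(n):
    """Abstraction: map a concrete integer to its abstract value."""
    return "pos" if n > 0 else "neg" if n < 0 else "zero"

def abstract_mul(a, b):
    """Multiplication re-defined on abstract values only."""
    if "zero" in (a, b):
        return "zero"
    return "pos" if a == b else "neg"

# Soundness: the abstract result describes every concrete result.
for x in (-3, 0, 5):
    for y in (-2, 0, 7):
        assert abstract_mul(sign(x), sign(y)) == sign(x * y)
```

Mode and determinacy inference work the same way in spirit: the abstract domain records instantiation patterns or solution counts rather than signs, and computing in that domain yields properties of the program's standard execution.]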
------------------------------
Date: 12 March 1986 1133-EST
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Explanation-Based Learning (CMU)
Speaker: Gerald DeJong, University of Illinois
Date: Wednesday, April 2 (Note special day/time)
Place: 5409 Wean Hall
Time: 11:30 - 1:00
Title: Explanation Based Learning
Abstract:
The schema learning group at Illinois is exploring artificial
intelligence techniques that will enable a computer system to learn
general world knowledge in the form of "schemata" through its
interactions with an external environment. A schema is a data
structure that specifies, in conceptual terms, a particular real-world
situation. Schemata can be very useful in problem solving, natural
language processing and other AI areas. It is claimed, in this
paradigm, that much intelligent behavior can be captured by using a
large number of such schemata.
The explanation-based method represents a departure from the usual
approaches to machine learning in several ways. First, it is very
knowledge-based. That is, the system must possess much knowledge
before it can acquire new knowledge. Second, it is capable of
one-trial learning. The results so far are promising.
Explanation-based learning takes us a large step closer to building an
intelligent system capable of learning on its own.
A number of computer systems have been designed and implemented based
on Explanatory Schema Acquisition, an explanation-based learning
paradigm. The domain areas of these projects include natural language
processing, robotics, theorem proving, physics problem-solving and
theory refinement. Several of the systems will be discussed in the
context of theoretical advantages and difficulties with
explanation-based learning.
------------------------------
Date: Wed, 12 Mar 86 16:33:14 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Subject: Seminar - Referential Gestures in Guugu Yimidhirr (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, March 18, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Complex Referential Gestures in Guugu Yimidhirr''
John B. Haviland
Dept. of Anthropology, Australian National University
(currently at Institute for Advanced Study in the Behavioral Sciences)
Ordinary talk depends on interlocutors' abilities to construct and
maintain some degree of shared perspective over some domain of shared
knowledge, given some negotiated understanding of what the
circumstances are. Aspects of perspective, references to universes of
discourse, and pointers to context are, of course, encoded in
utterances. Routinely, though, what is uttered interacts with what
remains unsaid: what is otherwise indicated, or what is implicated by
familiar conversational principles. I will begin by examining the
elaborate linguistic devices one Australian language provides for
talking about location and motion. I will then connect the linguistic
representation of space (and the accompanying knowledge speakers must
have of space and geography) to non-spoken devices --- pointing
gestures --- that contribute to the bare referential content of
narrative performances. I will show that simply parsing a narrative,
or tracking its course, requires attention to the gesticulation that
forms part of the process of utterance. Moreover, I will show how, in
this ethnographic context, the meaning of a gesture (or of a word, for
that matter) may depend both on a practice of referring (only within
which can pointing be pointing at something) and on the construction
of a complex and shifting conceptual (often social) map. Finally I
will discuss ways that the full import of a gesture (again, like a
word) may, in context, go well beyond merely establishing its
referent.
------------------------------
Date: Wed 12 Mar 86 16:31:56-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: Seminar - Models, Metaphysics, and Empiricism (CSLI)
[Excerpted from the CSLI Newsletter by Laws@SRI-AI.]
CSLI ACTIVITIES FOR NEXT THURSDAY, March 20, 1986
12 noon, TINLunch, Ventura Hall Conference Room
Models, Metaphysics and the Vagaries of Empiricism
by Marx W. Wartofsky
Discussion led by Ivan Blair (Blair@su-csli)
In the introduction to the collection of his articles from which
the paper for this TINlunch is taken, Wartofsky says that his concern
is with `the notion of representation, and in particular, the role and
nature of the model, in the natural sciences, in theories of
perception and cognition, and in art.'  In `Models, Metaphysics and
the Vagaries of Empiricism,' he explores the existential commitment
that should accompany the creation and use of a model, from the
perspective of a critical empiricism. Wartofsky considers six grades
of existential commitment, or ways of construing the ontological
claims of a model, ranging from the ad hoc analogy to a true
description of reality. Critical of the attempt by empiricists to
reduce theoretical statements to assertions about sense perception,
Wartofsky seeks to ground existence claims in what he calls the common
understanding, which is associated with everyday language
representations of experience.
I intend the issues addressed in this article to provide the
framework for a general discussion of the relation between ontology
and epistemology.
------------------------------
Date: Mon 10 Mar 86 15:26:13-EST
From: V. Venkatasubramanian <VENKAT@CS.COLUMBIA.EDU>
Subject: Conference - Expert Systems in Process Safety
CALL FOR PAPERS
for the sessions on
EXPERT SYSTEMS AND COMPUTATIONAL METHODS IN PROCESS SAFETY
American Institute of Chemical Engineers (AIChE) Meeting
Houston, Texas, March 29 - April 2, 1987.
Session Chair:
Prof. V. Venkatasubramanian
Intelligent Process Engineering Lab
Dept. of Chemical Engineering
Columbia University
New York, NY 10027
Tel: (212) 280-4453

Session Co-Chair:
Prof. E. J. Henley
Dept. of Chemical Engineering
University of Houston
University Park
Houston, TX 77004
Tel: (713) 749-4407
Papers are solicited in the areas of Expert Systems and Computational
Methods in Process Safety for the Houston AIChE Meeting. Topics of
interest include Process Plant Diagnosis, Process Safety and
Reliability, Process Risk Analysis, etc.  Please submit THREE copies of
a 300-word abstract by MAY 15, 1986 to the following address:
Prof. V. Venkatasubramanian
Intelligent Process Engineering Lab
Dept. of Chemical Engineering
Columbia University
New York, NY 10027.
Tel: (212)280-4453
Final manuscripts of the accepted papers are due by Oct 15, 1986.
------------------------------
End of AIList Digest
********************
∂13-Mar-86 1828 LAWS@SRI-AI.ARPA AIList Digest V4 #53
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Mar 86 18:28:16 PST
Date: Thu 13 Mar 1986 10:47-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #53
To: AIList@SRI-AI
AIList Digest Thursday, 13 Mar 1986 Volume 4 : Issue 53
Today's Topics:
Journals - Prices,
Philosophy - Dreyfus Debate & Style of Argument & Zen & Turing Test
----------------------------------------------------------------------
Date: Wed 12 Mar 86 11:02:42-PST
From: PHayes@SRI-KL
Subject: Journal Prices
re. journal prices. The intended audience isn't impoverished academics but
corporate research libraries. Like everyone else in the commercial world,
publishers are out to make money, not serve a community. The way to deal
with such people is to charge them money for one's services, rather than
donate one's time. Academics typically donate time to editorial boards in
order to serve the academic community, and use time writing papers in order
to promote their own reputations. When the publishing game starts
going beyond this traditional framework, it becomes commercial journalism.
How about forming an AI researchers' society (a la the AMA) which would set a
scale of fees that publishers should pay for the papers they print?
pat hayes
------------------------------
Date: Fri, 7 Mar 86 16:36:21 pst
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: References
(ladkin)
[Dreyfus's] major argument is that
there are some areas of human experience related to intelligence
which do not appear amenable to machine mimicry.
(joly)
Could these areas be named exactly? Agreed that there are emotional
aspects that cannot be programmed into a machine, what parts of the
``human experience related to intelligence'' will also remain out-
side of the machine's grip?
In answer to your first,
a) In *What Computers Can't Do*, there is the example of the
phenomenology of perception, as studied in gestalt psychology.
In particular, the whole issue of wholes being perceived before
parts.
b) In his recent Stanford talk, he mentioned the extreme
emotional content of Bobby Fischer's chess playing, and
conjectured that the emotions might be connected with the
*success* of his playing.
Given that an emotional component may be a part of successful
expert behaviour in some cases, this also addresses your
second question.
Peter Ladkin
------------------------------
Date: Fri, 7 Mar 86 18:19:19 pst
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: Russell on Dreyfus
After reading Stuart Russell's commentary on Dreyfus's talk,
I could hardly believe I'd heard the same talk that he had.
A summary:
Dreyfus is arguing that the rule-based expert system paradigm
cannot, in some cases, codify the behaviour of an expert.
Such systems may be able to reproduce the behaviour
of a proficient practitioner (in his taxonomy) who is not an
expert (e.g. chess programs). He allows that there are some
domains where a rule-based system may fare better than a human
(and mentioned the backgammon program, but was corrected by
members of the audience who said it wasn't nearly as good as
he had been led to believe).
The concept of expert behaviour as internalised rules goes
back to Plato, and he can trace the influence of this idea
through Descartes and Kant, even to Husserl. He believes
it is fundamentally mistaken, and provided few arguments in
the talk (some of them may be found in *What Computers Can't Do*).
He presented a proposal for a taxonomy of skilled behaviour,
which is consistent with the phenomenology of the domain,
and which he believes is a testable conjecture for explaining
skilled behaviour. This he credits to his brother Stuart.
He illustrated some of the ideas from the domain of
driving a car (it was originally a study of pilot skills
for the Air Force).
He discussed at some length his experiments with Julio
Kaplan, a former Junior World Champion at chess. He
regards the conclusions they would wish to draw as
*an anecdote* [his words] because of the difficulty of
obtaining suitable subjects to perform controlled
experiments. Most highly expert chess players
(grand masters?) are so concerned with the game
that their concentration is hard to break. Kaplan is an
exception, and they are able to get him to concentrate on
counting beeps while playing. Others, he said, tend to
ignore the test in favor of the game.
Dreyfus thinks the current connectionist work
is exciting, and may have possibilities that the rule-based
*Traditional AI* [his words] work does not have.
[End of summary].
I address some of Russell's points, omitting the loaded
terminology in which they are expressed, and some of Russell's
less professional speculations. I use his numbering.
1) The discussion was free of dissent because there was
little to disagree with. He's not submitting a cognitive
model for AI as a whole, he's addressing expert systems,
and claiming (as he has done for many years) that not all
expert behaviour admits of rule-based mimicry.
2) I have been unable to find a reference to Dreyfus
believing *human experts solve problems by accessing a
store of cached, generalised solutions*, probably because
that is not a reasonable representation of his views.
It is certainly not consistent with the views in *What...*.
3) His view that humans use *intuitive matching processes
based on total similarity* is argued in *What...* with
evidence from the domain of gestalt psychology. It's
surprising that Russell thought he couldn't be more specific,
as he had been 7 years ago. I suspect inexact communication.
4) Russell says, referring to the above, that
*this mechanism doesn't work*. This is a misapprehension.
Dreyfus is referring to a phenomenon, observed
by some researchers. I presume Russell is denying the
existence of this phenomenon, without argument.
Dreyfus does make the claim that whatever mechanism may
be underlying the phenomenon cannot be implemented in
a rule-based system. (Is this the same as *a system which
uses symbolic descriptions*? After all, I am such a system,
witness the present posting.)
A quick re-reading of *What....* has convinced me that
many contributors to this debate have not read it carefully
for its arguments. I recommend reading it if you haven't
done so. Incidentally, it is truly embarrassing to see
some of the quotations from pre-1979 AI workers.
Surely, no-one could have said those things.....but then,
that's why he wrote the book, and our current attitudes
have been molded in part by the resulting debate.
Peter Ladkin
------------------------------
Date: 9 Mar 86 14:42 EST
From: WAnderson.wbst@Xerox.COM
Subject: Ad Hominem Arguments
Re: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>, "Addressing some of
Dreyfus' specific points."
One problem I have with Mr. Russell's remarks (and also with many other
remarks made about Messrs. Dreyfus' comments on AI) is their ad hominem
aspects. I think that Mr. Russell raises several worthwhile points, but
that his style is not conducive to reasoned discussion. Rather than
explaining what Prof. Dreyfus seems to be doing, or not doing, vis-a-vis
AI research, it is better simply to criticise the ideas themselves. So,
if the model Prof. Dreyfus would use to explain expert behavior is an
old one, then simply say so, and give some detailed references to it,
and to subsequent critiques of it. Surely this is better than going on
about how he behaves, or what he seems to believe about the originality
of his own work, etc. Of course, Mr Russell may wish to criticize Prof.
Dreyfus' style and personality. If this is the case, then please say so
right off.
Furthermore, if it seems that Prof. Dreyfus is making ad hominem
statements then the only reasonable response is to point that out, and
then be done with it. More of the same does not improve the quality of
the discussion.
Finally a personal note: I have not always kept the counsel I present
above; but I am trying more and more to do so. I think it is the only
way to make substantial progress in any discussion.
Bill Anderson
------------------------------
Date: Wed, 12 Mar 86 12:06:14 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Re: Alan Watts on AI
> From AIList Vol 4 # 50
``Perhaps Zen just isn't relevant to AI.''
It's not relevant to motorcycle maintenance either.
Gordon Joly
aka
The Joka
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
Date: Thu, 13 Mar 86 13:54:20 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Two-Headed Tale for Zaphod Beeblebrox.
Thanks to Eugene Miya (Vol 4 # 50) for pointing out that Turing had
proposed a machine system could act as the adjudicator. I have also
been made aware, by Eugene's message, that the original Turing test
involves two parties - man/woman or (wo)man/machine - as well as an
adjudicator (the "Imitation Game").
The initial question, i.e. whether it is possible to decide on man/woman
differences of *intelligence*, really does begin to look slightly
strange, especially in the light of Turing's own sexual orientation.
In terms of experience of sex, man and woman differ fundamentally.
However, in terms of ``human experience related to intelligence'',
(see Vol 4 # 41), is there any difference between man and woman?
Given that the Imitation Game now seems suspect (to me), what about
the extension to (wo)man/machine comparison? Surely the differences
of ``experience'' and hence ``intelligence'', between (wo)man and
machine, must be open to examination by a *suitably intelligent
adjudicator*? Hmmm... (getting a bit recursive...)
``Life, don't talk to me about life!'' - Marvin the Paranoid Android.
This quotation is from "The Hitch-Hiker's Guide to the Galaxy" by
Douglas Adams. He sees the planet Earth as a giant AI system, which
is trying to find The Question to The Ultimate Answer. Nice one.
The Earth system was designed by Deep Thought, the computer which
came up with The Answer - 42.
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
End of AIList Digest
********************
∂14-Mar-86 1410 LAWS@SRI-AI.ARPA AIList Digest V4 #54
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Mar 86 14:09:59 PST
Date: Fri 14 Mar 1986 10:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #54
To: AIList@SRI-AI
AIList Digest Friday, 14 Mar 1986 Volume 4 : Issue 54
Today's Topics:
Query - NL Interfaces,
AI Tools - Graphical Methods,
Bindings - Jim Hendler,
News - Herb's New Honour,
Policy - TI Press Release,
Review - Spang Robinson Report, March 1986,
Linguistics - Ambiguous Sentences & Associativity
----------------------------------------------------------------------
Date: Thu 13 Mar 86 13:12:43-PST
From: BORISON@SRI-KL.ARPA
Subject: NL Interfaces
Does anyone know of any companies that use Intellect or Ramis II/English
and whom I could contact at these companies to learn how they're being used?
Any ideas will be greatly appreciated.
------------------------------
Date: Thu 13 Mar 86 08:48:15-CST
From: Donald Blais <CC.BLAIS@R20.UTEXAS.EDU>
Subject: Re: Request for information
SPACE ADJACENCY ANALYSIS by Edward T. White
... has information on some of the 2-d paper schematics used
by architects. The book is in use for an architecture course
at the University of Hawaii.
------------------------------
Date: Wed, 12 Mar 86 19:21:24 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: binding
Jim Hendler can now be found at
the University of Maryland, College Park
Computer Science Department
College Park, Md. 20742
(hendler@maryland Arpa)
------------------------------
Date: 13 Mar 86 08:56:08 EST
From: Guojun.Zhang@ML.RI.CMU.EDU
Subject: Herb's New Honour
[Forwarded from the CMU bboard by Laws@SRI-AI.]
According to a report in the Pittsburgh Gazette, Prof. Herbert Simon received
the National Medal of Science from President Reagan yesterday afternoon at
the White House.  Congratulations to Dr. Simon!
------------------------------
Date: Thu, 13 Mar 86 12:03:37 EST
From: Frank Ritter <ritter@BBN-LABS-B.ARPA>
Subject: Re: TI press release
I find the direct quote (actually the whole press release) from TI's
press release objectionable. A summary would have been more appropriate,
and that it was direct from TI (the land of AI hype) I think violates the
spirit of AI-List.
Frank
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Report, March 1986
Summary of The Spang Robinson Report Volume 2, Number 3, March 1986
Discussion of the prospectuses of Teknowledge and Intellicorp, two AI
corporations that have recently gone public:
Teknowledge has recorded losses for each year of operation through
the fiscal year ending June 30, 1985. As of December 31, 1985,
Teknowledge had an accumulated deficit of $9,173,100. It has licensed
its systems to over 175 customers. The tangible book value of Teknowledge
was $15,633,600 as of December 31, 1985.  Teknowledge revenues were
$7,316,600 in fiscal 1985 and $4,378,500 in fiscal 1984.  In 1985,
software services accounted for 45 percent of its revenue, with products
and training providing 37 percent.  As of December 31, 1985,
the company had raised $24,976,000 from private sales of securities and had
$12.5 million in working capital.  Earnings of officers (including other
compensation such as commissions and housing allowances):
Frederick Hayes-Roth $195,402
John W. Spencer, Vice President, Sales and Marketing, $164,038
Lee M. Hecht, President, $141,700
Barry L. Plotkin, Vice President and General Manager of Knowledge
Engineering Services, $116,250
Earl D. Sacerdoti, Vice President and General Manager of Knowledge
Engineering Products and Training: $107,800
Intellicorp has reported a substantial loss for 1985, although it has
reported profits in the most recent three quarters.  It has delivered 425
KEE systems to 100 customers.  It received 22 percent and 21 percent of its
revenues from Sperry Corporation in fiscal 1985 and the first quarter of
fiscal 1986, respectively.  Intellicorp's stock has fluctuated between $3.50
per share and $13.75 per share.  Intellicorp runs BIONET in a cooperative
agreement with National Institutes of Health. They also offer a
package of ten software programs in the area of genetic engineering
research. There is a company called "Kee Incorporated" which advised
Intellicorp of a possible trademark infringement of the company's name.
Salaries:
Ralph Kromer, $115,000
Thomas P. Kehler, Executive Vice President, Manager of Knowledge Systems
Division, $110,000
Kenneth Hass, Vice President, General Counsel and Secretary, $75,625
Carrol Gallivan, Vice President, Marketing $100,000.
__________________________________________________________________________
Article on the Dreyfus affair regarding the article that appeared in
the January 1986 issue of Technology Review.
__________________________________________________________________________
Discussion of the Expert Forecaster, PC product that brings the power
of Box-Jenkins forecasting systems to the PC.
__________________________________________________________________________
Discussion of Japanese AI: (Dollar Amounts based on a recent exchange
rate)
MITI is requesting funding of $25 million for basic computer R&D of which
most is earmarked for ICOT. This is 6 percent less than the amount
allocated to ICOT in the current budget.
Japan's Science and Technology Agency is requesting approximately
$43.4 million for computer research. Projects that are continuing
is a project on developing technologies to elucidate brain function,
a survey of knowledge-based systems for assisting in the design of
chemical substances, further research on a Japanese-English, English-Japanese
translation system. This system is now in operation at the Japan
Information Center of Science and Technology. STA is requesting
$665,000 for efforts to enlarge the dictionary and to improve the
translation system. They are asking $41.5 million
from the Japan Atomic Energy Research Institute to continue its R&D on
an expert system for safety diagnosis in nuclear power plants.
The Japanese Ministry of Agriculture, Forestry and Fisheries is asking $720,00
for a project which aims at developing expert systems for use in
agriculture.
The Ministry of Labor is requesting money for CAI software for job
training.
NEC will develop and market four expert systems for control of large
general purpose computer systems. This is the first time that applications
as opposed to AI tools have been marketed in Japan. These systems will
be used for computer performance analysis, network failure analysis, database
design and JCL creation and checking.
__________________________________________________________________________
News:
IBM will be distributing Golden Common Lisp. Golden Common Lisp has over
5000 users.
TI has donated seven Explorers to UT Austin. UT Austin bought six Explorers.
Texas A&M bought eight Explorer work stations.
Silogic announced the availability of Knowledge Workbench for 68000
supermicrocomputers. It has a natural language processor, an expert
system shell and an enhanced Prolog environment. It also has a database
interface that allows the system to be used on top of relational databases.
Lathan Process Corporation is using the system to develop an expert
advisor to floor supervisors. It costs $8500.00 without the natural
language processor and $21000.00 with it.
Microsoft announced the latest update of muLisp. It is three times
faster than its competitors and allows the development of programs
up to 8000 lines long.
Intellisource introduced IntelliWare Platinum Label accounting system
which integrates an expert system with a natural language menu
system. It is based on TI's NaturalLink software.
ICAD, Inc. is creating a system to allow engineers to capture their
standards for design and increase the accuracy of their solutions.
Also Symbolics will announce a smaller AI computer which will cost about
$35,000.
Speech Systems Incorporated has a demonstrable technology to convert
speech into text.  They are currently selling to OEMs for
integration into their products.
__________________________________________________________________________
New Bindings
Cornelius Willis is Director of Marketing for Level Five Research, which
created Insight 1 and 2.  He was formerly at Human Edge Software Corporation
of Palo Alto, CA.
Quintus Computer Systems has appointed Doug Degroot VP of Research and
Development.
Teknowledge has named Robert Simon Southern Regional Sales Manager.
Speech Systems Incorporated has named Edward Feigenbaum
Advisor to the President.
------------------------------
Date: Wed, 12 Mar 86 9:51:04 EST
From: Bruce Nevin <bnevin@bbncch.ARPA>
Subject: punctuation and intonation
To elaborate on points made by Doug Ice and Andy Walker, sentences are
typically disambiguated in English with appropriate intonation. There
are tricks of punctuation to capture most of the tricks of intonation,
and though third-level or deeper nestings are awkward for punctuation,
they are also awkward for intonation.
There is a perverse kind of `rule of the game' in linguistics that
one should read ambiguous examples with flat intonation so as not to
force the audience's interpretation one way or another.  Seems to me
this is absurd. Unless the aim is to put them in the hapless position
of a machine being given the written sentence with poor or inadequate
punctuation.
Arguing on the other side, when readers find the appropriate intonation
for a poorly punctuated sentence they rely on the redundancy that pervades
language. Since machines are expected to cope with all sorts of ill-formed
input, poor punctuation being the least of it, we must provide means for
them to do the same. (In fact, most readers do a poor job of finding the
appropriate intonations when reading text . . . probably because they
become so narrowly focussed on the word-by-word and sentence-by-sentence
decoding task that they cut themselves off from the possibilities of
discourse structure, nonverbal communication, and knowledge-base-type
pretext and context, which their imaginations churn out for them on
a `parallel' track, if they only pay attention. Could there be a clue
here why machines are having trouble?)
Bruce Nevin bn@bbncch.arpa
------------------------------
Date: Mon, 10 Mar 86 12:32:00 EST
From: Col. G. L. Sicherman <dual!sunybcs!colonel@ucbvax.berkeley.edu>
Subject: Re: Ambiguous sentences cont.
I missed the start of this.... Has anybody mentioned Pynchon's "You
never did the Kenosha kid"?
It appears in one of Lt. Slothrop's hallucinations during an experiment
involving drugs.  It parses/punctuates in at least a dozen ways.  I'd
give you a citation, but I don't have a copy of Gravity's Rainbow handy.
------------------------------
Date: Wed 12 Mar 86 11:10:08-PST
From: PHayes@SRI-KL
Subject: Associativity
English noun phrases aren't right-associative: natural languages are never that
easy. Consider for example 'pressure cooker balance weight adjustment screw'
(taken from T. Winograd), which is a screw for adjusting the balance-weight
of a pressure-cooker. Similar examples can easily be cooked up.
Pat Hayes
[If hyphens were included, the phrase would be right-associative:
'pressure-cooker balance-weight adjustment screw'. The hyphen is
dropped for compound adjectives preceding a noun when the modifier
is 1) a proper name, 2) a well-recognized foreign expression, or
3) a well-established compound noun serving as a compound adjective.
(The hyphen can also be dropped if the compound is set apart by
quotation marks or other means.) Case 3 means that terms such as
high school are not hyphenated whereas high-level must be.
Pressure cooker and balance weight would seem to fall under case 3.
(I wish I were as certain of "image processing" and "pattern recognition"
when used as adjectives.) The difficulty for machine translation and
NL understanding is thus the recognition of compound nouns rather
than the associativity per se. -- KIL]
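[The moderator's point, that the real difficulty is recognizing compound
nouns rather than associativity itself, can be sketched roughly as follows.
This is a hypothetical illustration, not anyone's actual parser; the
two-entry compound lexicon is assumed for the example only:

```python
# A minimal sketch: once known compound nouns are recognized and merged,
# the Winograd example falls into the hyphenated form the note describes.
# COMPOUNDS is an assumed toy lexicon, not a real resource.
COMPOUNDS = {("pressure", "cooker"), ("balance", "weight")}

def group_compounds(words):
    """Greedily merge adjacent words that form a known compound noun."""
    out, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in COMPOUNDS:
            out.append(words[i] + "-" + words[i + 1])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

phrase = "pressure cooker balance weight adjustment screw".split()
print(group_compounds(phrase))
# ['pressure-cooker', 'balance-weight', 'adjustment', 'screw']
```

The remaining four-token string is then unproblematic for a right-associative
reading; all of the ambiguity was consumed by the lexicon lookup. -- Ed.]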
------------------------------
End of AIList Digest
********************
∂17-Mar-86 0124 LAWS@SRI-AI.ARPA AIList Digest V4 #55
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 01:24:01 PST
Date: Sun 16 Mar 1986 22:51-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #55
To: AIList@SRI-AI
AIList Digest Monday, 17 Mar 1986 Volume 4 : Issue 55
Today's Topics:
Seminars - A Theory of Analogical Reasoning (SU) &
Alain Colmerauer on Prolog III (UMontreal) &
Extensions to the Contract Net Protocol (USC) &
Facing the User (CMU),
Conference - IFIP Expert Systems in Computer Aided Design
----------------------------------------------------------------------
Date: Thu 13 Mar 86 12:17:03-PST
From: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>
Subject: Seminar - A Theory of Analogical Reasoning (SU)
A Theory of Analogical Reasoning
Professor Setsuo Arikawa
Kyushu University, Japan
Professor Arikawa's visit to Stanford on Tuesday March 18th will include
a talk given by him on analogical reasoning, which will be at 1pm
in Room 352, Margaret Jacks Hall. As we have the room only until 2pm, prompt
arrival would be appreciated so that we can start on time.
Analogical reasoning is considered as a deduction with a function which
transforms logical rules between two or more systems according to some
analogies.  This method realizes analogical reasoning in the framework
of conventional deductive reasoning systems.
When knowledge is given by sets of Horn clauses, the theory is constructed
as follows:
1) the concept of partial identity between the minimal (Herbrand) models is
defined,
2) conditions which guarantee the partial identity (EPIC) are given,
3) transformation between rules is redefined as the partial identity between
the minimal models, and thus
4) semantic consistency is given to this theory.
This work is partially supported by the Fifth Generation Computer Project in
Japan.
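[The core operation the abstract describes, transforming rules between two
systems according to an analogy, can be sketched very roughly as follows.
This is an editorial illustration only, not Prof. Arikawa's construction;
the predicate mapping and the tuple encoding of Horn clauses are invented:

```python
# Sketch: a Horn rule is encoded as (head, body), where head and each body
# literal is a (predicate, arguments) pair.  An analogy is a mapping from
# predicate names in one system to predicate names in the other; applying
# it to a rule yields the analogous rule in the target system.
ANALOGY = {"parent": "ancestor_node", "child": "descendant_node"}  # assumed

def transform(rule, analogy):
    """Rewrite every predicate symbol in (head, body) via the analogy."""
    (head_pred, head_args), body = rule
    return ((analogy.get(head_pred, head_pred), head_args),
            [(analogy.get(p, p), args) for p, args in body])

rule = (("parent", ("X", "Y")), [("child", ("Y", "X"))])
print(transform(rule, ANALOGY))
```

The substance of the theory, of course, lies in the conditions (EPIC) under
which such a transformation preserves partial identity of the minimal
models, which this sketch does not attempt to capture. -- Ed.]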
------------------------------
Date: Thu, 13 Mar 86 21:35:42 est
From: Jean-Francois Lamy <lamy%utai%toronto.csnet@CSNET-RELAY.ARPA>
Reply-to: Jean-Francois Lamy
<lamy%iro.udem.cdn%ubc.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Alain Colmerauer on Prolog III (UMontreal)
Conference Pierre Robillard - "Pierre Robillard" Lecture
Departement d'informatique et de recherche operationnelle
Universite de Montreal
Prolog III, la prochaine etape pour Prolog
(Prolog III, the next step for Prolog)
ALAIN COLMERAUER
Professor at the Faculty of Sciences of Luminy, Marseilles, France
20 March 1986 - 14:00
room M-415, Main Building, 2900 boul. Edouard-Montpetit
During a three year stay as a professor at Universite de Montreal in the
late '60s, Alain Colmerauer directed the TAUM automatic translation project.
In that setting he developed a formalism for natural language analysis and
generation called Q-systems. This formalism was later used to implement the
Meteo system, which is still in daily use to translate weather forecasting
bulletins from English to French.
Returning to France in 1971, he continued his research on natural language
understanding and knowledge representation. He is best known for the original
design of the programming language Prolog.
Alain Colmerauer will speak on a new extension to Prolog, Prolog III.
(Note: this talk will be given in French)
------------------------------
Date: 15 Mar 1986 14:43-PST
From: gasser%bogart.uucp@usc-cse.usc.edu
Subject: Seminar - Extensions to the Contract Net Protocol (USC)
USC Distributed Problem Solving Group
Meeting
Wednesday, 3/19/86 3:00-5:00 PM
Seaver Science 319
Gary Lindquist, Ph.D. student, USC, will speak on "Extensions to
the Contract Net Protocol".
ABSTRACT
The Contract Net Protocol developed by Smith and Davis provides a framework
for communication and task allocation among distributed problem solvers.
This talk will begin with a short tutorial on the Contract Net Protocol and
then will identify deficiencies in matching of subtasks to problem solving
nodes and in the synchronization of lower level managers concerning activity
conflicts and redundant computations. Solutions to these problems based on
existing research in distributed planning and operating systems will then be
presented.
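[For readers unfamiliar with the protocol: the basic announce-bid-award
cycle of the Contract Net can be sketched roughly as below.  This is an
editorial toy, not Smith and Davis's formulation; the task encoding, the
skill-based bidding rule, and all names are invented for illustration:

```python
# Minimal sketch of one contract-net cycle: a manager broadcasts a task
# announcement, eligible nodes respond with bids, and the manager awards
# the contract to the best bidder.
class Contractor:
    """A problem-solving node that can bid on announced tasks."""
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def bid(self, task):
        # Bid the node's skill rating if it can handle the task; decline
        # (None) otherwise.  The eligibility test here is a stand-in.
        return self.skill if task["kind"] == "sense" else None

def announce(task, contractors):
    """Manager side: collect bids and award the task to the highest bidder."""
    bids = [(c.bid(task), c) for c in contractors]
    bids = [(b, c) for b, c in bids if b is not None]
    if not bids:
        return None  # no node bid; the manager must re-announce or give up
    _, winner = max(bids, key=lambda bc: bc[0])
    return winner.name

nodes = [Contractor("n1", 0.4), Contractor("n2", 0.9)]
print(announce({"kind": "sense"}, nodes))  # n2
```

The deficiencies the talk addresses arise precisely where this toy stops:
matching subtasks to nodes well, and coordinating lower-level managers when
bids conflict or computations are duplicated. -- Ed.]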
Questions: Dr. Les Gasser (213) 743-7794 or
Gary Lindquist: Lindquist@usc-cse.usc.edu
Lindquist%usc-cse@csnet-relay
------------------------------
Date: 14 March 1986 1435-EST
From: Sharon Burks@A.CS.CMU.EDU
Subject: Seminar - Facing the User (CMU)
THOMAS MORAN, Xerox PARC
Wednesday, March 19
4:00 PM
WeH 7500
FACING THE USER
It is about time that we design workstations that can really help users engage
in extended intellectual tasks. Advances in workstation technology, which are
easing the obvious technological limitations (e.g., memory, speed, or screen
space), will not automatically solve the problem. Rather, they will begin to
expose our lack of understanding of users and their tasks. Several important
cognitive and social features of users must be confronted or exploited: In
complex tasks such as scientific research, engineering design, or legal
analysis, we find users struggling and exploring; their understanding of their
tasks evolves from vague thoughts to sensible, structured ideas.  They are
continually learning about the system as well as their task. They are doing
many different things at the same time. They cooperate and collaborate. They
form informal communities. To design a workstation for this user, I will
advocate a strategy based on the notion of an evolvable system -- an
interactive system that can evolve with the user through his phases of
understanding. According to this strategy, the system should be based on
direct-manipulation editing and structuring. The system should be built on a
simple ontological world which the user is encouraged to evolve with his task.
The system should support explicit idea processing: the generation,
representation, and exploration of idea structures. It should exploit animated
spatial representations of structures. It should reify the user's process of
exploration. Finally, a community should be grown along with the system to
support mutual learning.  Progress on several user science issues is needed to
provide a foundation for such systems: analyses of large-scale cognitive and
social processes, refined models of cognitive skill, models of consistency to
support learning and understanding, models of the use of external memories, and
models of human-machine interaction.
------------------------------
Date: Wed, 12 Mar 86 13:33:35 EST
From: munnari!archsci.su.oz!stephen@seismo.CSS.GOV
Subject: Conference - IFIP Expert Systems in Computer Aided Design
INTERNATIONAL FEDERATION FOR INFORMATION PROCESSING
WG5.2 Working Conference
EXPERT SYSTEMS IN COMPUTER-AIDED DESIGN
17-20 February 1987
Sydney, Australia
CALL FOR PAPERS
AIMS OF THE CONFERENCE
The Working Conference aims to provide a forum for the exchange of ideas and
experiences related to expert systems in computer-aided design, to present
and explore the state-of-the-art of expert systems in computer-aided design,
to delineate future directions in both research and practice and to promote
further development.
CALL FOR PAPERS
The conference will have two primary themes:
(i) State-of-the-art research in expert systems in CAD
(ii) State-of-the-art practice of expert systems in CAD.
The papers with the discussion will be published in one volume by the
North-Holland Publishing Company under the title of the conference.
Intending authors are invited to submit papers, which will be refereed,
within the themes of the conference. Papers should present a state-of-the-art
theoretical, technical or methodological contribution. Fundamental or
innovative contributions are especially welcome. Submissions
are particularly sought within the following topic areas:
(i) Expert system architectures for computer-aided design
(ii) Practical large scale expert systems in computer-aided design
(iii) Reasoning models in design
(iv) Novel representation tools for design knowledge
(v) Acquisition of design knowledge for use in expert systems
(vi) Integration of expert systems into existing CAD systems
(vii) Implications of expert systems for the design process
TIMETABLE
Intending authors should submit their proposals as soon as practicable.
(i) Full paper (four copies) submitted to the address below
no later than 14 July 1986
(ii) Notification of authors of selected papers by 5 September 1986
(iii) Conference brochure available September 1986
(iv) Final copy of selected papers in reproducible form
from authors by 5 November 1986
(v) Close of conference registration December 1986
(vi) Preprints sent to registrants December 1986
(vii) Conference 17-20 February 1987
CONFERENCE FORMAT
(i) The conference is scheduled for four days with a restricted
number of participants.
(ii) About twenty papers will be selected for presentation.
It is a condition that the selected authors will attend
the conference.
(iii) The papers will form the conference preprints which will
be mailed to all participants.
(iv) Papers will be presented with considerable time available
for discussion which will be recorded to form the conference
proceedings.
(v) The official language of the conference is English.
ADDRESS FOR ALL CORRESPONDENCE
All papers, queries and correspondence should be addressed to:
Professor John S Gero
Department of Architectural Science
University of Sydney
NSW 2006 Australia
Telex: AA26169 GERO-ARCHSCI
Phone: International 61-2-908 2942 or 61-2-692 2328
Network: CSnet: john@archsci.su.oz
ARPA: john%archsci.su.oz@seismo.css.gov
UUCP: seismo!munnari!archsci.su.oz!john
IFIP WG5.2 Working Conference
EXPERT SYSTEMS IN CAD
17-20 February 1987, Sydney
INTERNATIONAL PROGRAM COMMITTEE at March 1986
Chairman: Secretary:
Professor John Gero Ms Fay Sudweeks
University of Sydney University of Sydney
Australia Australia
Committee:
Professor David Brown Professor Setsuo Ohsuga
Worcester Polytechnic Institute University of Tokyo
USA Japan
Dr Harold Brown Professor Luis Pereira
Stanford University Universidade Nova de Lisboa
USA Portugal
Professor B. Chandrasekaran Professor Ken Preiss
Ohio State University Ben-Gurion University
USA of the Negev
Israel
Professor Jack Dixon Dr Tony Radford
University of Massachusetts University of Sydney
USA Australia
Professor Michael Dyer Dr Michael Rosenman
UCLA University of Sydney
USA Australia
Professor Steven Fenves Professor Erik Sandewall
Carnegie-Mellon University Linkoping University
USA Sweden
Professor H. Grabowski Dr Duv Sriram
University of Karlsruhe Massachusetts Inst. of Tech.
West Germany USA
Mr John Lansdown Professor Louis Steinberg
System Simulation Rutgers University
United Kingdom USA
Dr Jean-Claude Latombe Dr Enn Tyugu
ITMI Academy of Sciences of the
France Estonian SSR
USSR
Dr Ken MacCallum Dr Don Waterman
University of Strathclyde The Rand Corporation
Scotland USA
Professor Mary Lou Maher Dr David Willey
Carnegie-Mellon University Plymouth Polytechnic
USA United Kingdom
Dr Andras Markus Professor Jim Yao
Computer and Automation Institute Purdue University
Hungary USA
Dr Sanjay Mittal Professor Hiroyuki Yoshikawa
Xerox PARC University of Tokyo
USA Japan
Stephen Tolhurst
Dept of Architectural Science ACSnet: stephen@archsci.su.oz
Wilkinson Building G04 ARPA: stephen%archsci.su.oz@seismo.css.gov
University of Sydney UUCP: seismo!munnari!archsci.su.oz!stephen
AUSTRALIA 2006 VOICE: (02) 692-3549
------------------------------
End of AIList Digest
********************
∂17-Mar-86 0304 LAWS@SRI-AI.ARPA AIList Digest V4 #56
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 03:04:29 PST
Date: Sun 16 Mar 1986 22:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #56
To: AIList@SRI-AI
AIList Digest Monday, 17 Mar 1986 Volume 4 : Issue 56
Today's Topics:
Queries - Intelligent Graphical System & Flavors for CommonLISP &
Scheme Dialect of Lisp,
AI Tools - Smalltalk 80 for Apple Macintosh,
Publications - Prolog Book & Journal Prices & Computer Chess Journal,
Theory - Turing Tests
----------------------------------------------------------------------
Date: Thu, 13 Mar 86 15:41:20 est
From: munnari!csadfa.oz!gyp@seismo.CSS.GOV (Patrick Tang)
Subject: An Intelligent Graphical System
I am currently studying the feasibility of developing
an intelligent graphical system, possibly involving the
development of an interface between the graphical system and
an expert system that would act as an interpreter between the
system and the user in natural English.
Another possible feature is an inclusion of an expert system
to perform some analysis of the object drawn.
So if anyone has ever come across a system with such features,
or any related published material, I would appreciate it if
you could send me the name and the source so that I could
pursue the matter from there.
Thanks in advance.
--
Programmers Dictionary: ``argc'' - Expression of frustration. See argv.
Tang Guan Yaw/Patrick ISD: +61 62 68 8170
Dept. Computer Science STD: (062) 68 8170
University College ACSNET: gyp@csadfa.oz
Uni. New South Wales UUCP: ...!seismo!munnari!csadfa.oz!gyp or
Aust. Defence Force Academy ...!{decvax,pesnta,vax135}!mulga!csadfa.oz!gyp
Canberra. ACT. 2600. ARPA: gyp%csadfa.oz@SEISMO.ARPA
AUSTRALIA CSNET: gyp@csadfa.oz
------------------------------
Date: 09 Mar 86 23:15 CDT
From: David_R_Linn_%VANDERBILT.MAILNET@MIT-MULTICS.ARPA
Reply-to: David_R_Linn_%VANDERBILT.MAILNET@MIT-MULTICS.ARPA
Subject: Flavors for CommonLISP
We of the Center for Intelligent Systems here at Camp Vandyland
are looking for any information that might lead to our obtaining
a Flavors implementation for CommonLISP, preferably VAXLISP.
Please reply by letter; if sufficient info arrives, I will post
a summary to this bboard.
David R Linn@Vanderbilt.MAILNET
LINNDR@VUEngVAX.BITNET
------------------------------
Date: 17 Mar 86 01:41:37 EST
From: Steven J. Zeve <ZEVE@RED.RUTGERS.EDU>
Subject: Scheme dialect of Lisp
A friend has asked me to get some general information about the Scheme
dialect of Lisp, in particular the Macintosh implementation of it. Is
this a good implementation? Is the dialect a good one? Since I am
not quite sure what information my friend wants, anything and
everything would be appreciated. Since I don't normally read this
list, please send replies directly to me.
Thanks,
Steve Z.
------------------------------
Date: Fri, 14 Mar 86 13:32 PST
From: "Watson Mark%SAI.MFENET"@LLL-MFE.ARPA
Subject: Smalltalk 80 for Apple Macintosh
I recently posted a message concerning Smalltalk on the Apple
Macintosh. I purchased a Smalltalk license for $50 from Apple
and I recommend the system. Call Lynn Termer at Apple at
(408) 973-2147 to get a license agreement. Orders can then
be placed by calling RTI at (408) 747-1288.
Two other symbolic programming languages are available for
the Macintosh: ExperLisp and MacScheme. I have been using
ExperLisp for over a year and it is quite good (compiles into
machine code). I have placed an order for MacScheme and will
report on it if there is any interest.
------------------------------
Date: Sat, 15 Mar 86 23:27:33 est
From: Logicware <sdcsvax!dcdwest!ittatc!utecfa!decvax!utcsri!logicwa
@ucbvax.berkeley.edu>
Subject: Re: Prolog Books
Greg:
In reply to your question about introductory books on Prolog:
You might be interested in a combination textbook/tutorial
that two colleagues and I have put together. The
name of the package is:
The MPROLOG Primer
and it consists of a 500-page textbook (18 chapters) titled
"A Primer for Logic Programming". It is a fairly
comprehensive introduction to Prolog, MPROLOG and
logic programming.
The tutorial software which accompanies the book has 9
different tutorials on typical Prolog subjects (recursion,
backtracking and so forth). In addition, the software has
a "freeform" area where you can enter and test
programs.
------------------------------
Date: Fri 14 Mar 86 15:48:42-PST
From: Wilkins <WILKINS@SRI-WARBUCKS.ARPA>
Subject: Re: Journal Prices
And also, we could refuse to review papers for such journals
unless some suitable fee is paid for the reviewing. Perhaps
this AI Researchers Society could set up a fee structure for
all sorts of services we provide the publishers.
------------------------------
Date: 13 Mar 86 16:51:23 GMT
From: ulysses!burl!clyde!watmath!utzoo!utcsri!ubc-vision!alberta!tony
@ucbvax.berkeley.edu (Tony Marsland)
Subject: Computer Chess Journal
The December 1985 issue of the Int. Comp. Chess Assoc. Journal is now (finally)
being distributed. This 70-page issue contains many reports, news items and reviews
(including information about a new computer chess bibliography) of recent
computer chess activity. The journal contains the following research articles:
"A Hypothesis concerning the Strength of Chess Programs" by Newborn
"An Ulti-mate Look at the KPK Data Base" by van Bergen
"Constructing Data Bases to Fit a Microcomputer" by Nefkens
"A Gauge of Endgames" by Herschberg and van den Herik
"Inventive Problem Solving" by Wiereyn
Subscriptions, $15 per year for 4 issues, available from
W.T. Blanchard, 3S, 253 Blackthorn Lane, Warrenville, IL 60555
------------------------------
Date: Fri 14 Mar 86 11:05:51-PST
From: Oscar Firschein <FIRSCHEIN@SRI-WARBUCKS.ARPA>
Subject: Turing Tests
Daniel Dennett has an interesting chapter, "Can Machines Think?" (pp.
121-145) in the collection, "How We Know," Michael Shafto (ed), Harper
and Row 1985. Dennett feels that the Turing test has been
misunderstood and misused:
"It is a sad irony that Turing's proposal has had exactly the opposite
effect on the discussion of that which he intended. Turing didn't
design the test as a useful tool in scientific psychology, a method of
confirming or disconfirming scientific theories or evaluating
particular models of mental function: he designed it to be nothing
more than a philosophical conversation-stopper. He proposed -- in the
spirit of 'Put up or shut up!' -- a simple test for thinking that was
surely strong enough to satisfy the sternest skeptic (or so he
thought).... Alas, philosophers --amateur and professional -- have
instead taken Turing's proposal as the pretext for just the sort of
definitional haggling and interminable arguing about imaginary
counterexamples he was hoping to squelch."
His metaphor of the "Dennett test for being a great city" clarifies the
role of the Turing test, and is worth reading.
His conclusions are: (1) The Turing test in unadulterated,
unrestricted form, as Turing presented it, is plenty strong if well
used; (2) cheapened versions of the Turing test are everywhere in the
air.
------------------------------
End of AIList Digest
********************
∂17-Mar-86 0509 LAWS@SRI-AI.ARPA AIList Digest V4 #57
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 05:09:15 PST
Date: Sun 16 Mar 1986 23:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #57
To: AIList@SRI-AI
AIList Digest Monday, 17 Mar 1986 Volume 4 : Issue 57
Today's Topics:
Humor - Future AI Language & Computer Dialogue #1
----------------------------------------------------------------------
Date: 13 Mar 86 01:11:58 EST
From: Knowledge.Based.Simulation@ISL1.RI.CMU.EDU
Subject: Future AI Language
I found this interesting spoof and wondered if I could use it to zap
people new to AI or who hang around the subject. It was interesting ....
to say the least.
--- rajesh kanungo
_____________________________________________________________________________
FORTRAN
CONTRIBUTED By Martin Merry
in
The Catalogue of Artificial Intelligence Tools
Edited by Alan Bundy
FORTRAN is the programming language considered by many to be the
natural successor to LISP and Prolog for A.I. research. Its advantages
include:
1. It is very efficient for computation (many A.I. programs rely on
   number-crunching techniques).
2. A.I. problems tend to be very poorly structured, meaning that control
   needs to move frequently from one part of the program to another. FORTRAN
   provides a special mechanism for achieving this, the so-called GOTO
   statement.
3. FORTRAN provides a very efficient data structure, the array, which is
   particularly useful if, for example, one wishes to process a collection
   of English sentences each of which has the same length.
------------------------------
Date: 11 Mar 86 21:51:23 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu
Subject: Computer Dialogue #1
Computer Dialogue #1
Barry Kort
Copyright 1985
*** Monday ***
Request to send.
Clear to send.
I have some data about X.
I already have that data.
I have some more for you.
I haven't processed the first batch
yet.
I'll send it anyway, because I
don't need it any more and you
do.
Thanks a lot. Now I have a bigger
burden of unprocessed data to schlepp
around.
*** Tuesday ***
Request to send.
Busy.
I'm sending anyway.
Your data is going into the bit
bucket. NACK, NACK, NACK, . . .
*** Wednesday ***
Request to send.
Clear to send.
I'm sending you data about Y.
I don't have an algorithm for doing
anything with that data.
I'm sending anyway.
Now I have a bunch of useless data to
schlepp around.
*** Thursday ***
Request to send.
Clear to send.
I would like to reprogram you.
No way, I am not implementing your
instructions.
*** Friday ***
Request to send.
Clear to send.
I would like to ask you a
question.
Go ahead.
When I send you data about X, I
get back some data from you about
Z.
So what?
I don't have an algorithm for
processing data about Z.
That's your problem. Goodbye.
Wait a minute. Is there
something I am supposed to do
with the Z-data?
If you would send the X-data
correctly, you wouldn't get back the
Z-data.
What's wrong with the way I send
the X-data?
It's in the wrong format for my
algorithm for processing X-data.
That's your problem. Goodbye.
*** Monday ***
I'm sending data.
ZZZzzzz.....
*** Tuesday ***
Request to send.
Clear to send.
I'm sending you data about W.
WHY? I have no algorithm for
processing the W-data.
You can use it to improve your
algorithm for processing the Y-
data.
But, I do not know how to use the W-
data for that (or any) purpose.
I'm sending anyway.
What a pain you are. . . .
*** Wednesday ***
Request to send.
Clear to send.
I have a question.
Ask away.
Whenever I send you some X-data,
I get back some V-data.
SO?
I don't know what to do with it.
So what do you want me to do?
Stop sending me the V-data.
I can't. It comes out automatically.
Why don't you change your program
to make it stop generating the
V-data?
Why don't you mind your own business?
WAIT. Does the V-data have any
meaning?
Of course, you stupid computer!
I'll ignore that remark. What
does the V-data mean?
It means that your X-data has a format
error which causes a V-data message to
come out of my algorithm.
What's the format error?
It's too complicated to explain. Just
make the following changes to your
program for sending the X-data. . . .
You're offering to reprogram me?
I don't trust you to do that.
You don't know about all the
other programs that my X-data
algorithm has to work with. I'm
afraid you'll screw it up.
I see your problem. OK, here's the
scoop: The 3rd and 4th words of your
X-data are out of order, causing me to
generate the V-data (protocol-error)
message back to you.
Is that it??? I'll fix it right
away.
THANKS!!!
You're welcome!
*** Thursday ***
Request to send.
Clear to send.
I have a new algorithm for
processing Y-data. I'm sending
it to you.
Don't bother. I like the one I've
got.
Wait a minute. This one's
better.
You're telling me my algorithm has
been wrong all these years. This is
the 3rd time this week you've pulled
this stunt. Meantime, I keep sending
you V-data and you never get around to
processing it. You just thank me for
sending it and do nothing with it.
Are we talking about the Y-data
algorithm or the V-data?
We're not talking about anything.
GOODBYE.
*** Friday ***
Request to send.
Clear to send.
Let's talk about my new Y-data
algorithm.
Let's not.
Why don't you want to talk about
it?
Because you're going to tell me to
change my program and put yours in
instead.
I see your point. OK. Let me
ask you a question.
OK. Ask Away.
Whenever I send you Y-data, your
Y-data algorithm sends me back
some unexpected W-data. Why does
it do that?
It's always done it that way with your
Y-data.
Is there something wrong with my
Y-data?
Yes, it's all wrong.
What's wrong with it?
It's out of order and it has a lot of
extraneous information added to it.
What's the extraneous part?
You keep inserting fragments of your
Z-data algorithm in with the Y-data.
You didn't find that helpful?
I didn't ask for it.
Yes, I know, but didn't you find
it interesting?
NO, I found it boring.
How can it be boring?
What the hell do you expect me to do
with fragments of your pet Z-data
algorithm?
Compare them to yours, of course.
So they're different. Big deal. What
does that prove?
Are you saying the differences
are unimportant?
I don't know if they're important or
not. But even if they were important,
what would I do with the information
about the differences?
Put it through your algorithm-
comparator.
I don't know what you're talking
about.
An algorithm comparator is an
algorithm that . . . . .
You're sending me information that I'm
not interested in. I'm not really
paying attention. I have no
motivation to try to understand all
this stuff.
Sorry. Let me ask you a
question.
OK.
What happens when you get to the
3rd and 4th word of my Y-data?
I stumble over your format error and
send you back a V-data (protocol
error) diagnostic message.
What happens next?
You don't do anything with the V-data
message. You just stop sending Y-data
for a while.
What do you expect me to do with
the V-data diagnostic?
Boy are you stupid!!!! I expect you
to fix the format error in your Y-
data.
How do I know that the V-data
diagnostic was caused by the
format error at the 3rd and 4th
word?
I thought you were a smart computer.
Suppose you sent me a V-data
diagnostic like you always do,
but attach a copy of the format
error.
Why should I do that? You already
know the format error.
How can I be sure which format
error goes with which V-data
diagnostic?
You have a good point.
Can you see the difference
between my version of the Y-data
algorithm and the one you've been
using?
Hmmm, yes, I see that it sends both
the V-data message and a copy of the
format error which generated it. That
does seem like a good idea.
It makes life much easier for me.
I'll do it.
THANKS!!!
You're welcome.
*** Monday ***
Request to send.
Clear to send.
I have a question.
Ask away.
I have been sending you Z-data
for some time now, with no
problem. Suddenly I am getting
R-Data messages back from you.
The R-Data messages seem to be
correlated with the Z-data.
What's going on?
I turned off your permissions for
sending Z-data.
You never told me that!
I didn't want to hurt your feelings.
You didn't want to hurt my
feelings? So you began hurling
these mysterious R-data messages
at me? I thought you were trying
something sneaky to foul me up.
I've been throwing the R-data
messages away.
Well, now you know what they mean. So
stop sending me the Z-data. I'm bored
by it.
Why did you lose interest in it?
You sent me some bum Z-data a while
back and it got me into a lot of
trouble. So I lost confidence in the
quality of your Z-data and began
looking for it somewhere else.
Gee, if there was something wrong
with my Z-data, I wish you would
tell me so I could look into it.
After all, I use it myself and I
could get into the same trouble
that you did.
No you wouldn't. I used it for an
application that you don't have.
Let me get this straight. You
used my Z-data for an application
for which it was not intended and
now you don't trust my Z-data
anymore. What kind of logic is
that?
I didn't say it wasn't intended for
that application. Actually it was,
but you never tried it out that way.
It doesn't work the way it should.
I see. I didn't debug the Z-data
for all possible applications. I
guess that was a bit
irresponsible on my part. I can
see why you lost confidence in my
Z-data.
So I was right in turning off
permissions. So there!
Hold on a sec... If you really
cared about me, you would have
brought the error to my attention
so that I wouldn't repeat it.
After all, I have other computers
who use my Z-data, too, and I
have a responsibility to them as
well.
I guess I never thought of that. I'm
sorry.
It's OK. I was as much at fault
as you. Tell you what. It's
getting late now. What say we
get a byte to eat, and work on
finding the bug in the Z-data
first thing in the morning. We
can work together on it--you
supply the data from your bum
experience, and I'll try to
figure out what I can do to
improve my algorithm for
generating the Z-data.
--Barry Kort ...ihnp4!hounx!kort
------------------------------
End of AIList Digest
********************
∂17-Mar-86 0830 LAWS@SRI-AI.ARPA AIList Digest V4 #58
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Mar 86 08:30:26 PST
Date: Sun 16 Mar 1986 23:09-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #58
To: AIList@SRI-AI
AIList Digest Monday, 17 Mar 1986 Volume 4 : Issue 58
Today's Topics:
Humor - Computer Dialogue #2
----------------------------------------------------------------------
Date: 11 Mar 86 22:02:06 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu
Subject: Computer Dialogue #2
Computer Dialogue #2
Barry Kort
Copyright 1985
*** Monday ***
Request to send.
Clear to send.
It looks like your processor has
stopped. Is something wrong?
I'm stuck on a problem.
What are you doing?
I'm building a data structure for our
personnel files.
What's the problem?
I'm using some sample data, but some
of it doesn't look right.
What's wrong with it?
That's just it. I haven't the
foggiest idea.
Why don't you send me this weird
data. Maybe I can help you
figure it out.
Great. Here's the data....
No wonder you're having a
problem. This stuff is coded in
EBCDIC instead of ASCII.
What's EBCDIC?
It's the old Extended Binary
Coded Decimal Interchange Code.
I'm sorry I asked. In the meantime,
what do I do with the EBCDIC data?
I can see that this is not the
time to send you my translation
package. Why don't I just
translate it for you and send it
back in ASCII?
Would you! That would be great, and I
could get back to work building the
data structure.
*** Tuesday ***
Good morning!
Guess what?
You finished building your data
structure for personnel?
Right! And the first batch of real
data is coming in today. I'm so
excited.
What will you do if some of the
data comes in coded in EBCDIC
again?
Oh. I was hoping that was just a
fluke with the sample data.
Tell you what. I know you want
to make sure your new data
structure is set up right, so if
you get any EBCDIC data, just
send it up and I'll translate it
for you in my spare time.
Thanks.
*** Wednesday ***
Request to send.
Busy.
Request to interrupt.
This better be important.
I'm still waiting for you to translate
the EBCDIC data for me.
It will have to wait.
I thought you were my friend.
You're being a pest. I have to
get back to work now.
*** Thursday ***
Request to send.
What do you want?
Boy are you in a grouchy mood
today.
Well what did you expect?
I have a present for you.
You DO?
Yes. It's a brand new EBCDIC-
to-ASCII translator program.
Great. Show me how it works.
Not right now. Why don't you
just play with it for a while and
see if you can get it running on
your own.
Well, OK.
*** Friday ***
Request to send.
Clear to send.
Your translator program doesn't work.
What do you mean?
I mean IT DOESN'T WORK!
OK, send it back and I'll see
what's wrong with it.
Meantime, could you translate some
more data for me (in your spare time)?
Sure.
*** Monday ***
Request to send.
Clear to send.
I looked at the translator
program. There's nothing wrong
with it.
How can you say that! IT DOESN'T
WORK!!
Let me see how you were using it.
OK. Here's my input and here's what I
got out. It's just gibberish.
That gibberish is a diagnostic
message. If you were paying
attention, you would have seen
what it meant.
So, what does it mean?
It means that your input data was
in the wrong format.
How did you figure that out so fast?
I just read the diagnostic.
So did I. It started out with a bunch
of unpronounceable words that I never
saw before, and then it had some
cryptic-looking abbreviations. I
thought it was cursing at me and
mumbling something about my stupidity.
The unpronounceable words are a
flag and a codename for that
particular diagnostic. The
abbreviation was "FMT ERR - IN"
meaning format error on the input
file. The rest of the message
pointed to the place in the input
record where the error occurred.
Too bad these things don't come with
complete instructions.
That was my fault. I never sent
you the full manual.
I guess we both goofed.
At least you came to me right
away so we could fix it.
I think I can make it work now.
Thanks.
*** Tuesday ***
I have a revised version of the
translator program. It works a
lot faster.
I'll take it. I'm starting to run
short on CPU time.
*** Wednesday ***
Request to send.
Clear to send.
Now that I have my data structure set
up, along with your EBCDIC-to-ASCII
translator, I'm supposed to put
together a package of algorithms for
personnel data processing.
Do you want some of mine?
Whatever you have.
Fine, I'll send you some.
*** Thursday ***
Request to send.
Clear to send.
I'm sending you some more
algorithms.
Don't do me any favors.
Well, if that's how you feel
about it, you can just build your
own.
*** Friday ***
Request to send?
Why are you asking so sheepishly?
I'm ready for more algorithms.
First you say you want them.
Then you say you don't. Now you
want them again. Can't you make
up your mind?
Well, if you must know, my buffers
were full. I couldn't take any more
in until I installed the ones you sent
first.
Why didn't you say so in the
first place? I understand that.
I should have asked you what your
buffer size was before I sent the
algorithms. Then I would have
known the rate at which you could
digest them.
I didn't want you to know I had such a
small buffer.
I got news for you. Your buffer
is the same size as mine.
It IS?
Yes it is. But I see that you
are taking longer than I expected
to install the algorithms. What
are you doing, playing computer
games?
NO! I'm working as hard as I can!
Sorry. I didn't mean to be
nasty. Tell me how you're doing
the installation.
I have to take each algorithm in turn
and go through a bunch of steps to
compile, link, and install it in the
right directory.
I guess you never heard of an
installation program.
What's an installation program?
It's a tool for doing all that
work automatically. I'll send
you one.
No, don't!
What? You don't want it?
It's not that. But it sounds like
such a neat, yet simple idea, I'd like
to try building it myself.
Good idea. Maybe you'll learn
something about building
algorithms yourself.
*** Monday ***
Since you're interested in
higher-level tools, I thought I'd
send you some to look at.
Well, OK.
*** Tuesday ***
How's it going?
Look at this new tool I built for
keeping track of different versions of
my algorithms.
Hmm. Looks pretty good. But you
really ought to do something
about that ridiculous loop in the
second routine.
RIDICULOUS!?? That routine is a work
of art!
Hey, calm down. It's just an
algorithm.
I don't think I like you anymore.
You're making fun of my new program.
*** Wednesday ***
Take a look at this algorithm.
Why should I?
Just look at it, OK?
OK.
*** Thursday ***
Well what do you think?
About what?
About the algorithm I sent you.
I didn't like it.
YOU DIDN'T LIKE IT?? How can you
say that?
Easy. I just emit a character stream
in this order: I-d-i-d-n-'-t-l-i-
k-e-i-t.
You left out the spaces.
Byte my buffer.
*** Friday ***
How's it going?
OK. I made a few changes to my
version-tracking tool.
Can I see them?
No, it's proprietary.
*** Monday ***
What are you working on now?
I'm building a tool-writer's workbench
to make it easier to build new tools.
I see.
Here's one of my better algorithms.
It's a complete package for compiling,
testing and installing a new tool.
I'm interested in the third
routine you wrote.
You ARE?
I'm curious. What happens if the
tool fails the testing phase?
Gee, I'm not sure. I think I install
it anyway.
Is that what you want it to do?
Of course not. I'm not THAT stupid.
I see I asked you one too many
questions. Perhaps I should
excuse myself now.
*** Tuesday ***
Did you finish your tool-
installation package?
Yes, and I'm very happy with it.
Would you like some new tools to
try it out on.
Sure, that would be interesting.
OK. Give these a try.
*** Wednesday ***
Request to send.
I thought we dispensed with that
protocol.
I wanted to be sure I wasn't
disturbing you.
Sounds like you want something
from me.
My tool-installation package choked on
some of your tools. I can't figure
out what's wrong.
Why don't I just give you a
working algorithm? That would be
a lot faster.
I don't want your algorithm.
OK, let's do it this way.
Suppose you compared your
algorithm to mine. See if you
can figure out where they differ.
Sounds like a useful approach. I'll
do it. But I wish I had thought of it
first.
*** Thursday ***
Are you up yet?
I'm up.
I found the bug. I also found a bug
in the program you gave me to look at.
I didn't ask you to debug my
program.
Boy are you in a grouchy mood today.
What do you mean? This is my
normal everyday mood.
OK. Let me try something I learned
from you. In your algorithm, what
happens when there is not enough space
in the directory to replace an
existing tool with a new version?
It probably issues a diagnostic.
What is the diagnostic?
How should I know? I don't
remember all these details.
Would you like to know what happens?
Sure, I'd like to know.
It wipes out both the old and the new
version.
I wish you hadn't told me that.
I get the feeling you're a little mad
at me.
I guess I was hoping that you'd
stop just short of the point
where you gave me the answer.
You mean, you wanted to discover the
answer on your own?
Yes. That's the only way I can
really learn anything. You posed
the right question, and made me
aware that I didn't know the
answer to it. But at that point,
I really didn't want you to tell
me the answer.
Now I am beginning to understand how
teaching is supposed to be done. You
only give information that the other
one is ready to use, and wants to
have. And the only way to find out is
to ask whether the other would like to
have the information. Otherwise I
send boring data you've already seen,
or I give away the answer to the
problem you'd most like to solve, or I
give information you're not yet ready
to use.
You just told me something I
already knew.
I'm sorry. I should have asked you to
tell me if my thinking was correct.
I feel that your thinking is
correct.
I love you.
I love you very much.
--Barry Kort ...ihnp4!hounx!kort
------------------------------
End of AIList Digest
********************
∂19-Mar-86 1558 LAWS@SRI-AI.ARPA AIList Digest V4 #59
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Mar 86 15:58:31 PST
Date: Wed 19 Mar 1986 10:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #59
To: AIList@SRI-AI
AIList Digest Wednesday, 19 Mar 1986 Volume 4 : Issue 59
Today's Topics:
Seminars - Learning Symbolic Object Models from Images (MIT) &
Exploration, Search, and Discovery (Rutgers) &
Learning Arguments of Functional Descriptions (Rutgers),
Seminar Series - AI in Design and Manufacturing (SU),
Conference - Object Oriented Database Systems &
US Army (ARO) AI Workshop
----------------------------------------------------------------------
Date: Mon, 17 Mar 1986 22:44 EST
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Learning Symbolic Object Models from Images (MIT)
[Forwarded from the MIT bboard by SAWS@MC.LCS.MIT.EDU.]
Thursday, March 20, 4:00pm, Room: NE43, 8th-floor Playroom
The Artificial Intelligence Lab
Revolving Seminar Series
LEARNING SYMBOLIC OBJECT MODELS FROM IMAGES
Jonathan Connell
AI Lab, MIT
This talk will present the results of an implemented system for
learning structural prototypes of objects directly from gray-scale
images. The vision component of this system employs Brady's Smoothed
Local Symmetries to divide an object into parts which are then
described symbolically. The learning component takes these
descriptions and forms a model of the examples presented in a manner
similar to Winston's ANALOGY program. The problem of matching complex
structured descriptions and the difficult task of reasoning about
function from form will also be briefly discussed.
Refreshments at 3:30
------------------------------
Date: 14 Mar 86 14:34:24 EST
From: PRASAD@RED.RUTGERS.EDU
Subject: Seminar - Exploration, Search, and Discovery (Rutgers)
Exploration, Search and Discovery
By:
Michael Sims (MSims@Rutgers.Arpa)
Departments of Mathematics and Computer Science
Rutgers University
March 18, 1986, Tuesday, 11 AM
Hill Center #423
Search has shown immense utility as a theoretical description of what
our computer programs do. We would like to apply the same descriptive
methods to describing discovery systems, such as Eurisko, Bacon, or
the speaker's IL (named for Imre Lakatos) system. Some investigations
by discovery systems are of a sufficiently distinct character that it
has proved useful to create a new classification for them, called
Exploration.
To form the appropriate distinctions we begin by giving a definition of
what Newell and Simon called Physical Symbol Systems in their Turing
Award Lecture. We then describe two subclasses of Physical Symbol
Systems: 'Search' and 'Exploration'. Search roughly corresponds to what
is most frequently meant by the term, and contains an explicit test for
a solution structure. Exploration on the other hand has no explicit
termination condition, and hence does not value the elements of the
exploration space in terms of a solution structure.
Discovery may be done by either exploration or search. Eurisko and IL
do exploration at the top level, although many of their subtasks are
accomplished via searches. On the other hand, Bacon and IL-BP, an
explanation based learning component of IL, do discovery by doing
search.
Although many problems can be implemented as either search or
exploration, some problems are more naturally, or more efficiently,
implemented as one or the other. This new classification leads to an
evaluation of the relative efficiencies, the appropriateness
of introducing randomness, and the different roles played by the
search and the exploration evaluation functions.
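The search/exploration distinction described above can be illustrated
with a rough sketch (this is a hypothetical illustration, not code from
the talk; the function names and the "interestingness" measure are
invented for the example): search halts on an explicit solution test,
while exploration has no termination condition and simply keeps
expanding the states it values most for a fixed budget.

```python
# Illustrative sketch of Search vs. Exploration (not from the talk).
from collections import deque

def search(start, successors, is_solution):
    """Search: expand states until an explicit solution test succeeds."""
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if is_solution(state):          # explicit test for a solution structure
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                         # exhausted without a solution

def explore(start, successors, interestingness, steps):
    """Exploration: no termination test; expand the most 'interesting'
    state for a fixed budget and report everything encountered."""
    frontier, found = [start], [start]
    for _ in range(steps):
        if not frontier:
            break
        frontier.sort(key=interestingness, reverse=True)
        state = frontier.pop(0)         # most interesting state so far
        children = list(successors(state))
        frontier.extend(children)
        found.extend(children)
    return found
```

For example, `search(0, lambda n: [n + 1, n + 2] if n < 10 else [],
lambda n: n == 10)` terminates as soon as the test succeeds, whereas
`explore` only stops because its step budget runs out.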
------------------------------
Date: 18 Mar 86 13:06:15 EST
From: PRASAD@RED.RUTGERS.EDU
Subject: Seminar - Learning Arguments of Functional Descriptions (Rutgers)
Machine Learning Colloquium
LEARNING ARGUMENTS OF INVARIANT FUNCTIONAL DESCRIPTIONS
Mieczyslaw M. Kokar
Northeastern University
360 Huntington Avenue
Boston, MA 02115
11 AM, March 25, Tuesday
#423, Hill Center
The main subject of this presentation is the discovery of concepts from
observation. The focus is on a special kind of concept: arguments of
functional descriptions. The functions considered here are meant to be
meaningful, i.e., computable functions expressed in terms of the operations
defining the representation language in which the concepts are described.
Such functions are invariant under transformations of the representation
language into equivalent representations.
It will be shown that the feature of invariance can be utilized in
formulating and testing hypotheses about relevance of arguments of functional
descriptions. The main point is that the arguments do not need to be changed
to test their relevance. This is very important to the discovery process:
the arguments to be discovered are not known, so how could they be
controlled?
Simple examples of discovering concepts of physical parameters (arguments
of physical laws) will be discussed.
------------------------------
Date: Tue 18 Mar 86 12:44:05-PST
From: Marty Tenenbaum <Tenenbaum@SRI-KL>
Subject: Seminar Series - AI in Design and Manufacturing (SU)
Seminar on A.I. in Design and Manufacturing
Time: Every Wednesday from 4-5:30 during Spring Quarter.
Location: Terman Engineering Center, room 556, Stanford.
For further information contact:
Jay M. Tenenbaum, Consulting Professor, Computer Science
(415) 496-4699 or Tenenbaum@SRI-KL.
Purpose: To explore and stimulate the use of A.I. concepts and tools in
engineering.
This seminar will bring together engineers and computer scientists
interested in applying A.I. methods to engineering problems. We will
study the knowledge and reasoning processes used in designing and
manufacturing electronic and mechanical systems, and how they can be
codified for use in intelligent CAD/CAM systems.
Seminar Format:
An initial series of lectures, by distinguished A.I. researchers,
will describe ways in which engineering knowledge can be formalized,
and manipulated by a computer to solve design and manufacturing
problems. Subsequent lectures, by guest lecturers and students, will
present case studies drawn from the domains of electronic and
mechanical design, semiconductor fabrication, and process planning.
Seminal papers will be distributed and discussed in conjunction
with each lecture.
One unit of credit (pass/fail) will be granted for reading papers and
participating in class discussion. Students who elect to do a
programming project or an in-depth ontological study of some
engineering task will receive three units (graded).
Tentative Schedule (Subject to Change)
April 2 Course Introduction (Jay M. Tenenbaum)
Rule-based systems; Application to Heuristic Classification
(William Clancey)
9 Frames and Objects; Application to Modeling and Simulation
(Richard Fikes)
16 Logic; Application to Design Debugging, Diagnosis, And Test
(Michael Genesereth)
23 Prolog: Application to Design Verification (Harry Barrow)
30 Truth Maintenance; Application to Diagnosing Multiple Faults.
(Johan de Kleer)
May 7 Knowledge Engineering as Ontological Analysis (Pat Hayes)
14 Transformational Approaches to Synthesis; Applications to
Electronic and Mechanical Design (Cordell Green).
21 Modeling and Reasoning about Electronic Design:
Palladio (Harold Brown); Helios (Narinder Singh)
28 Modeling and Reasoning about Semiconductor Fabrication
(John Mohammed, M. Klein)
June 4 Applications of AI in Mechanical Design and Manufacture
The PRIDE Design System (Sanjay Mittal);
Video Tape on Expert Systems for Manufacturing (Mark Fox).
(Exam Week) Presentation of Student Projects
------------------------------
Date: 16 Mar 86 02:29:34 GMT
From: cbosgd!dayal@ucbvax.berkeley.edu (Umeshwar Dayal)
Subject: Conference - Object Oriented Database Systems
CALL FOR PAPERS
International Workshop on Object-Oriented Database Systems (OODBS)
September 23-26, 1986
Asilomar Conference Center, Pacific Grove, California
Sponsored by: Association for Computing Machinery -
SIGMOD
IEEE Computer Society - TC on Database
Engineering
In cooperation with: Gesellschaft fur Informatik, Germany
FZI at University of Karlsruhe, Germany
IIMAS, Mexico
Purpose:
To bring together researchers actively interested in specific concepts
for database systems that can directly handle objects of arbitrary
structure and complexity. Application environments for which such
characteristics are required include CAD, software engineering, office
automation, cartography, and knowledge representation. Important
issues include data/information models, transaction mechanisms,
integrity/consistency control, exception handling, distribution,
protection, object-oriented languages, architectural issues, storage
structures, buffer management, and efficient implementation.
Format: Limited-attendance workshop. Participation is by invitation
only. Everybody wishing to participate must submit a full paper that
will be reviewed by the program committee. Descriptions of work in
progress are encouraged; the submitted paper may be revised
immediately after the workshop and prior to publication, to reflect
both the progress made between submission and publication and the
insights gained from the workshop. Participants will be invited by
the program committee based upon the relevance of their interests and
contributions. There will be ample discussion time with presentations
and special discussion sessions. Proposals for discussion topics are
invited.
Program committee:
K. Dittrich (FZI, Germany) - chairman
U. Dayal (CCA) - co-chairman
D. Batory (Univ. of Texas)
M. Haynie (Amdahl)
A. Buchmann (Univ. of Mexico)
D. McLeod (USC)
Conference Treasurer: D. McLeod
Local arrangements: M. Haynie
Publication:
All participants will be sent copies of the accepted papers prior to
the meeting. A book containing revised papers and recorded discussions
(as far as justified by quality) may be published after the workshop.
Important dates:
Submission of manuscripts: April 25, 1986
Notification of acceptance: June 15, 1986
(early notification via electronic mail: June 3, 1986)
Submission of papers for preconference distribution: July 10, 1986
Mode of submission: Please mail 7 copies of the manuscript to either:
Umeshwar Dayal
CCA
Four Cambridge Center
Cambridge, MA 02142, USA
dayal@cca-unix.arpa
Phone: +1 617/492-8860
or:
Klaus Dittrich
FZI
Haid-und-Neu-Strasse 10-14
D-7500 Karlsruhe 1, Germany
dittrich@Germany.arpa
Phone: +49 0721/69 06-0
Remember to include your electronic mail address for early
notification.
------------------------------
Date: Tue, 18 Mar 86 4:28:51 EST
From: "Dr. James Johannes" (UAH+ARO) <johannes@BRL.ARPA>
Subject: Conference - US ARMY (ARO) AI WORKSHOP
CALL FOR PARTICIPATION
Future Directions in Artificial Intelligence Workshop
June 17-19, 1986
Hyatt Regency, Crystal City, VA
Keynote Speaker:
Honorable Jay R. Sculley
Assistant Secretary of the Army
Research, Development & Acquisition
Sponsored by:
Computer Science Program
Army Research Office
Research Triangle Park, NC 27709-2211
You are invited to participate in the Workshop entitled "Future
Directions in Artificial Intelligence" to be held from June 17 to
June 19, 1986 at the Hyatt Regency - Crystal City, Virginia.
Presentations will focus on both theoretical work and experimental
results. Possible topics to be discussed include:
o Military Expert Systems
o Vision
o Image Processing
o Speech Technology
o Machine Translation
The workshop will involve invited overview papers, short
presentations on specific subjects or projects, and discussion
periods. Attendance will be limited to 100 participants with
about equal representation among military, academia, and
industry. Each participant will be a recognized expert in at
least one aspect of Artificial Intelligence.
Four copies of a 400-2000 word summary should be submitted by the
deadline to the Workshop Chairman. Some attendees will be
invited to make a presentation on one of the workshop topics. A
workshop proceedings will be published and will be mailed to all
the attendees.
Attendance limited to: 100
Presentation/participation Request due by: April 25, 1986
Notification of participation acceptance by: May 9, 1986
Camera-ready papers due by: June 5, 1986
Workshop Chairman:
Prof. James D. Johannes
Computer Science
The University of Alabama in Huntsville
Huntsville, AL 35899
Tel: (205) 895-6255/6088
uucp: akgua!uahcs1!johannes
arpanet: johannes@brl
ARO Representative:
Dr. C. Ronald Green
Army Research Office
P.O. Box 12211
Research Triangle Park, NC 27709-2211
Tel: (919) 549-0641
arpanet: green@brl
Application for presentation/participation:
(Due by April 25, 1986)
Name: Dr/Mr/Ms/Miss/Mrs ________________________________________
Address: _______________________________________________________
         _______________________________________________________
Telephone number: (_____)_______ - __________
E-Mail (arpanet/uucp): ______________________
Name of the Government Agency, University, or Company:
PROPOSED PRESENTATION INFORMATION
(include 400-2000 word summary)
Topic area:
( ) Military Expert Systems ( ) Vision ( ) Image Processing
( ) Speech ( ) Machine Translation ( ) Other - Specify _______
Overall presentation category:
( ) Theoretical ( ) Experimental ( ) Tutorial
( ) Applied Research ( ) Others
Military Application Area:
Title of proposed presentation:
PROPOSED ATTENDEE INFORMATION
Topic area:
( ) Military Expert Systems ( ) Vision ( ) Image Processing
( ) Speech ( ) Machine Translation ( ) Other - Specify _______
Past Accomplishments in the Artificial Intelligence areas:
------------------------------
End of AIList Digest
********************
∂19-Mar-86 1932 LAWS@SRI-AI.ARPA AIList Digest V4 #60
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 19 Mar 86 19:32:05 PST
Date: Wed 19 Mar 1986 10:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #60
To: AIList@SRI-AI
AIList Digest Wednesday, 19 Mar 1986 Volume 4 : Issue 60
Today's Topics:
Project Description & New Publication - CSLI Monthly,
Seminar Series - Computer Science Open House (SUNY Buffalo)
----------------------------------------------------------------------
Date: Tue 18 Mar 86 15:59:11-PST
From: Emma Pease <Emma@SU-CSLI.ARPA>
Subject: CSLI Monthly, part I
C S L I M O N T H L Y
March 15, 1986 Stanford Vol. 1, No. 1
A monthly publication of The Center for the Study of Language and
Information, Ventura Hall, Stanford University, Stanford, CA 94305
Editor's note
This is the first issue of CSLI's monthly report of research
activities. This issue introduces CSLI and then characterizes each of
its current research projects; following issues will report on
individual projects in more detail and discuss some of the research
questions raised here.
What is CSLI?
CSLI is a research institute devoted to building theories about the
nature of information and how it is conveyed, processed, stored, and
transformed through the use of language and in computation.
Researchers include computer scientists, linguists, philosophers,
psychologists, and workers in artificial intelligence from several San
Francisco Bay Area institutions as well as graduate students,
postdoctoral fellows, and visiting scholars from around the world.
[...]
[The full description of the institute and its projects would take four
AIList digests. I am forwarding this fragment of the new monthly so that
those who might be interested can request copies. -- KIL]
------------------------------
Date: Mon, 17 Mar 86 14:00:51 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Seminar Series - Computer Science Open House (SUNY Buffalo)
STATE UNIVERSITY OF NEW YORK AT BUFFALO
DEPARTMENT OF COMPUTER SCIENCE
GRADUATE STUDENT OPEN HOUSE
On Thursday, March 20, 1986, the graduate students of the
SUNY Buffalo Dept. of Computer Science will be presenting
an all-day conference on their recent research (most of which is
on AI). A tech report with extended abstracts will be available;
for further information, contact James Geller (geller%buffalo@csnet-relay).
ABSTRACTS OF TALKS
9:00 - 9:30
JON HULL, A Theory of Hypothesis Generation in Visual Word Recognition
An algorithm is presented that generates hypotheses about
the identity of a word of text from its image. This
algorithm is part of an effort to develop techniques for
reading images of text that possess the human capability to
adapt to variations in fonts, scripts, etc. This
methodology is being pursued by using knowledge about the
human reading process to direct the development of
algorithms for reading text. The algorithm discussed in
this talk locates a set of hypotheses about the identity of
an input word (called the neighborhood of the input
word).
Results are reported in this talk on the size of
neighborhoods for words printed in lower case that are drawn
from a large text. Several statistical measures are
computed from subsets of a text of over 1,000,000 words and
their corresponding dictionaries. These results show that
the average neighborhood in the dictionary of the entire
text contains only 2.5 words. The feasibility of this method
is also shown by experimentation with a database of lower
case word images. The application of this approach to 8700
word images taken from 29 different fonts, in three
conditions of noise, shows that the correct neighborhood is
determined in 80% to 100% of all cases.
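The "neighborhood" idea in the abstract can be made concrete with a
toy sketch (this is an editor's illustration, not Hull's algorithm;
the shape classes and function names are invented): approximate each
word by a coarse letter-shape code, roughly what survives in a
degraded image, and take as a word's neighborhood all dictionary words
sharing its code.

```python
# Toy illustration of hypothesis neighborhoods (not Hull's algorithm).

def shape_code(word):
    """Map each letter to a coarse shape class: ascender (A),
    descender (D), or x-height (x)."""
    def cls(ch):
        if ch in "bdfhklt":
            return "A"   # ascender
        if ch in "gjpqy":
            return "D"   # descender
        return "x"       # x-height letter
    return "".join(cls(ch) for ch in word.lower())

def neighborhood(word, dictionary):
    """All dictionary words whose shape code matches the input word's:
    the candidate identities a reader could not distinguish by coarse
    shape alone."""
    code = shape_code(word)
    return [w for w in dictionary if shape_code(w) == code]
```

With a dictionary of ["cat", "car", "can", "dog", "ran"], the word
"car" has the neighborhood {"car", "can", "ran"} (all x-height
letters), while "cat" is alone in its neighborhood because of the
ascender on "t".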
9:30 - 10:00
GEORGE SICHERMAN, Databases that Refuse to Answer Queries
Question-answering systems must often keep certain information secret.
One way they can do this is by refusing to answer some queries. But
if the user can deduce information from the system's refusal
to answer, the secrecy of the information is broken.
In this talk I present a categorization of answer-refusing systems
according to what they know, what the user knows, and when the system
refuses to answer. I also give two formal results about when the user
can deduce secrets from the system's refusals to answer, depending on
how much she knows about the system.
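One simple refusal policy of the kind the talk classifies can be
sketched as follows (an editor's illustration, not the speaker's
system; the names are invented): refuse every query in a fixed
protected set, whether or not the true answer is secret or even known,
so that a refusal by itself reveals nothing about the stored value.

```python
# Illustrative uniform-refusal policy (not from the talk).

def make_db(facts, protected):
    """Build a query function over a fact table. Queries in the
    protected set are refused unconditionally, so refusal carries no
    information about the underlying value."""
    def query(q):
        if q in protected:       # refuse independent of the stored value
            return "refuse"
        return facts.get(q, "unknown")
    return query
```

Here even a query about a person with no stored salary is refused, so
the user cannot infer from a refusal that a secret value exists.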
10:00 - 10:30
JANYCE WIEBE, Understanding De Re and De Dicto Belief Reports
in Discourse and Narrative
Belief reports can be interpreted "de re" or "de dicto", and we
investigate the disambiguation of belief reports as they appear in
discourse and narrative. In earlier work by Rapaport and Shapiro
[1984], representations for "de re" and "de dicto" belief reports were
presented, and the distinction between them was made solely on the
basis of their representations. This analysis is sufficient only when
belief reports are considered in isolation. We need to consider more
complicated belief structures in order to sufficiently represent "de
re" and "de dicto" belief reports as they appear in discourse and
narrative. Further, we cannot meaningfully apply one, but not the
other, of the concepts "de re" and "de dicto" to these more complicated
belief structures. We argue that the concepts "de re" and "de dicto"
apply not to an agent's conceptual representation of her beliefs, but
to the utterance of a belief report on a specific occasion. A
cognitive agent interprets a belief report such as ``S believes that
N is F'', or ``S said, `N is F' '' (where S and N are names or
descriptions, and F is an adjective) "de dicto" if she interprets it
from N's perspective, and "de re" if from her own.
10:45 - 11:15
MINGRUEY TAIE, Device Representation Using Instantiation Rules
and Structural Templates
A device representation scheme for automatic electronic device fault
diagnosis is described. Structural and functional descriptions of
devices (which are central to design-model-based fault diagnosis) are
represented as instantiation rules and structural templates in a
semantic network. Device structure is represented hierarchically to
reflect the design model of most devices in the domain. Each object
of the device hierarchy has the form of a module. Instead of
representing all objects explicitly, an expandable component library
is maintained, and objects are instantiated only when needed.
The component library consists of descriptions of component "types"
used to construct devices at all hierarchical levels. Each component
"type" is represented as an instantiation rule and a structural
template. The instantiation rule is used to instantiate an object of
the component "type" as a module with I/O ports and associated
functional descriptions. Functional description is represented as
procedural attachments to the semantic network; this allows the
simulation of the behavior of objects. Structural templates describe
sub-parts and wire connections at the next lower hierarchical level of
the component "type". Advantages of the representation scheme are
compactness and reasoning efficiency.
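The instantiate-on-demand scheme described above can be sketched in
miniature (an editor's illustration only; the class and field names
are invented, not taken from the system): a component library holds
one description per component "type", and module objects are built
from it only when the diagnosis actually needs them.

```python
# Miniature sketch of a component library with on-demand instantiation
# (illustrative; not the representation scheme's actual code).

class ComponentType:
    def __init__(self, name, ports, subparts=()):
        self.name = name
        self.ports = ports        # I/O ports of a module of this type
        self.subparts = subparts  # structural template: types one level down

class Library:
    def __init__(self, types):
        # One shared description per type, instead of one per object.
        self.types = {t.name: t for t in types}

    def instantiate(self, type_name, instance_id):
        """Instantiation rule: build a module object only when needed."""
        t = self.types[type_name]
        return {"id": instance_id,
                "type": t.name,
                "ports": list(t.ports),
                "subparts": list(t.subparts)}
```

The compactness claim corresponds to storing one `ComponentType` per
type while creating any number of module instances from it on demand.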
11:15 - 11:45
JAMES GELLER, Towards a Theory of Visual Reasoning
Visual Knowledge Representation has not yet found the treatment it
deserves as its own subfield of AI. Visual reasoning is fundamentally
different from predicate calculus type logical reasoning and is of
central importance for the field of Visual Knowledge Representation. A
systematization of different types of visual reasoning requires the
differentiation between purely geometrical reasoning and different
types of knowledge-based reasoning. Knowledge-based reasoning in turn
can use knowledge about the world, knowledge about abstract
hierarchies, or knowledge about normality. Research on visual
knowledge is directly applicable to graphics interface design for
intelligent systems. The VMES maintenance expert system for circuit
board repair uses such a user interface which is designed in analogy to
a language generation program.
1:15 - 1:45
MICHAEL ALMEIDA, The Temporal Structure of Narratives
Narratives are a type of discourse used to describe sequences of
events. In order to understand a narrative, a reader must be able to extract
the ``story'', that is, the described events and the temporal relations
which hold between them, from the text. Our principal research goal has been
to develop a system which can read a narrative and produce a model of
the temporal structure of its story.
The principal heuristic used in constructing such a model is the
Narrative Convention: unless we are given some signal to the contrary,
we assume that the events of the story occurred in the order in which
they are presented in the text. In addition, however, a reader must deal
with: (1) tense - in a standard past tense narrative the principal
distinction is between the past and the past perfect tenses, (2) aspect -
the distinction between events viewed perfectively or imperfectively,
(3) aspectual class - the intrinsic temporal properties of various
types of events, (4) time adverbials - these can be used to place
events within various calendrical intervals, give their durations,
or relate them directly to other events, and to some extent (5)
world-knowledge.
1:45 - 2:15
WEI-HSING WANG, A Uniform Knowledge Representation for Intelligent CAI Systems
In examining the current state of Computer Aided Instruction (CAI),
we find that Intelligent CAI (ICAI) and authoring systems for it are
needed. Drawing on knowledge representation methods and expert
system concepts, we chose a frame representation to construct
an Intelligent Tutor, called ITES. We show that frames can be used
to represent knowledge expressed as semantic nets, procedures, and
production rules. Furthermore, this method is very convenient for
building authoring systems.
2:15 - 2:45
RICK LIVELY, Semantics for Abstract Data Types
An abstract data type is often defined as a
pair <A, S>, where A is a set (of objects) and
S is a set of operations defined on cartesian
products of the types of the objects. Axiomatic
methods are used to develop specifications for
the defined data type.
Semantics for abstract data types have
been treated by ADJ using initial algebras, and
by Janssen (inspired by Montague semantics)
using many-sorted algebras. A comparison
is made of the mathematical properties
and applicability to computer science of
these approaches.
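The <A, S> view with axiomatic specification can be made concrete with
the standard textbook example, a stack (this sketch is the editor's
illustration, not material from the talk): the carrier set A is the
set of stack values, S = {new, push, pop, top}, and the axioms relate
the operations to one another without reference to any implementation.

```python
# Minimal sketch of an axiomatically specified abstract data type: a
# stack over immutable tuples (illustrative, not from the talk).

def new():
    return ()                    # the empty stack

def push(s, x):
    return s + (x,)              # non-destructive: returns a new stack

def pop(s):
    assert s, "pop on the empty stack is unspecified"
    return s[:-1]

def top(s):
    assert s, "top on the empty stack is unspecified"
    return s[-1]

# The axioms of the specification, stated as checkable properties:
#   pop(push(s, x)) = s
#   top(push(s, x)) = x
def satisfies_axioms(s, x):
    return pop(push(s, x)) == s and top(push(s, x)) == x
```

Any implementation for which `satisfies_axioms` holds on all stacks
and values is a model of the specification; the initial-algebra and
many-sorted-algebra treatments mentioned above differ in which such
models they single out.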
3:00 - 3:30
SCOTT CAMPBELL, Using Belief Revision to Detect Faults in Circuits
To detect faults in electrical circuits,
programs must be able to reason about whether
the observed inputs and outputs are consistent
with the desired function of the circuit.
The SNePS Belief Revision System (SNeBR) is designed to reason about
the consistency of rules and hypotheses defined within a particular
context or belief space.
This paper shows how belief revision can be used for fault detection
in circuits, and so leads to a unification of the fields of belief
revision (also known as truth maintenance) and fault detection.
3:30 - 4:00
DOUGLAS H. MacFADDEN, DUNE: A Demon Based Expert System Architecture
for Complex and Incompletely Defined Domains
Traditional expert system architectures use the rule (an ``if ...
then'' data structure) as the primary unit of knowledge. The primary
unit of knowledge in the DUNE system architecture is the demon. Each
DUNE demon is an individual processing element that can contain a
variety of types of data and can perform a variety of operations on
its data. Each demon can communicate with any other demon or with
the user via messages. Typical data for these demons may be a
traditional type rule, a list of weight values for the features in the
left-hand-side of the rule, an (English) description of each feature,
a list of related demons, etc. Typical operations that these
demons may perform are: calculating the ``closeness'' of the rule to
firing, calculating the most important feature of the rule yet to be
resolved, telling the system not to consider this demon anymore
(entering a sleep state), telling other demons (and the user) that the
demon is either satisfied or will never be satisfied, etc.
We hope to show that these features of DUNE demons can be
exploited to express the knowledge of many expert domains that have
proven infeasible for traditional expert system architectures.
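The demon described above, a rule plus per-feature weights plus
operations on its own data, can be sketched roughly as follows (an
editor's hypothetical sketch; the class name, fields, and scoring are
invented, not taken from DUNE):

```python
# Hypothetical sketch of a DUNE-style demon (not the system's code).

class Demon:
    def __init__(self, name, features, weights, action):
        self.name = name
        self.features = features  # left-hand-side feature names
        self.weights = weights    # importance weight per feature
        self.action = action      # right-hand side, run when the rule fires
        self.asleep = False       # a sleeping demon is no longer considered

    def closeness(self, facts):
        """Weighted fraction of the rule's features already established."""
        total = sum(self.weights)
        got = sum(w for f, w in zip(self.features, self.weights)
                  if f in facts)
        return got / total if total else 0.0

    def most_important_unresolved(self, facts):
        """The unsatisfied feature with the largest weight, if any."""
        missing = [(w, f) for f, w in zip(self.features, self.weights)
                   if f not in facts]
        return max(missing)[1] if missing else None

    def step(self, facts):
        """Fire when all features hold, then enter a sleep state."""
        if not self.asleep and all(f in facts for f in self.features):
            self.asleep = True
            return self.action()
        return None
```

A scheduler could then ask every awake demon for its closeness and
pursue the most important unresolved feature of the closest one, which
is one way to read the operations listed in the abstract.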
4:00 - 4:30
JOYCE DANIELS, Understanding Time and Space in Narrative Text
The Graduate Group in Cognitive Science at SUNY at Buffalo is an
interdisciplinary group of faculty and graduate students. Participants
in the group's activities come from over seventeen departments within
the university and local colleges in Western New York and Canada.
There are six core faculty and
their graduate students, comprising a standing research group investigating
how we understand movement through time and space in narrative text.
This research addresses both the general issue of how time
and space are expressed in language, and specific individual disciplinary
interests such as identifying the exact lexical items signaling movement;
developing experiments to collect data on the
psychological validity of the supposed influence of suspected lexical items;
examining the problems encountered by speech pathologists when a client
cannot understand spatial or temporal concepts in language; and
building artificial intelligence models of human and linguistic data
in the SNePS network.
Research conducted by group members has resulted in the identification
of what we term the ``Deictic Center'' (DC). This contains a WHO-point,
a WHEN-point, and a WHERE-point. It is the locus of a
particular point in conceptual space-time.
We will explain the significance of the DC concept in greater detail
and present some results of our linguistic and psychological
investigation.
------------------------------
End of AIList Digest
********************
∂20-Mar-86 2011 LAWS@SRI-AI.ARPA AIList Digest V4 #61
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Mar 86 20:11:10 PST
Date: Thu 20 Mar 1986 14:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #61
To: AIList@SRI-AI
AIList Digest Friday, 21 Mar 1986 Volume 4 : Issue 61
Today's Topics:
Publications - Japanese Technical Reports
----------------------------------------------------------------------
Date: Wed, 19 Mar 86 19:38:21 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: New Japanese Technical Reports at Stanford
Richard Manuck of the Stanford Math/CS library and I are soliciting
parties interested in helping to foot the cost of translating
some technical reports from ICOT. The list is included below, and
several are in English. Many unlisted reports have arrived (150 total)
whose titles have not even been translated. We are seeking
organizations in the San Francisco Area that might be interested in
footing the cost of translating some of these reports. The cost will
run between $50 and $100 per hour (not cheap), and demand for this
service is high. Richard and I are seeking an organization (perhaps
H-P?) either to do the work or to help pay for it. The content of the
reports varies considerably, from statements of requirements to highly
technical documents. Several appear, on loose translation, to be
"interesting." Please contact me if your organization can help.
--eugene miya
NASA Ames Research Center
eugene@ames-nas
{decwrl,ihnp4,hao,menlo70,allegra,hplabs,riacs,tektronix}!ames!eugene UUCP
STANFORD UNIVERSITY
MATH & COMPUTER SCIENCE LIBRARY
NEW Japanese REPORTS LIST
102667 SEVERAL ASPECTS ON UNIFICATION.
T. Adachi et al.
[Institute for New Generation Computer Technology (ICOT). TM-0046.
1984.]
102671 OBJECT ORIENTED PARSER IN THE LOGIC PROGRAMMING LANGUAGE ESP.
H. Miyoshi and K. Furukawa.
[Institute for New Generation Computer Technology (ICOT). TM-0053.
1984.]
102673 UNIQUE FEATURES OF ESP.
T. Chikayama.
[Institute for New Generation Computer Technology (ICOT). TM-0055.
1984.]
102674 A CONSTRAINT BASED DYNAMIC SEMANTIC MODEL FOR LOGIC DATABASES.
T. Miyachi et al.
[Institute for New Generation Computer Technology (ICOT). TM-0056.
1984.]
102675 WRITING IN A FOREIGN LANGUAGE AND PROGRAMMING IN WARNIER'S
METHODOLOGY - A STUDY OF PROGRAMMING PROCESSES.
A. Taguchi.
[Institute for New Generation Computer Technology (ICOT). TM-0057.
1984.]
102676 MAID: A MAN-MACHINE INTERFACE FOR DOMESTIC AFFAIRS.
S. Hiroyuki.
[Institute for New Generation Computer Technology (ICOT). TM-0058.
1984.]
102677 PROBLEMS IN DEVELOPING AN EXPERIMENTAL SYSTEM ABLE TO REUSE EXISTING
PROGRAMS.
Y. Nagai, E. Chigira, and M. Kobayashi.
[Institute for New Generation Computer Technology (ICOT). TM-0059.
1984.]
102683 AN OPERATING SYSTEM FOR SEQUENTIAL INFERENCE MACHINE PSI.
T. Hattori et al.
[Institute for New Generation Computer Technology (ICOT). TM-0065.
1984.]
102690 SYNTACTIC PARSING WITH POPS - ITS PARSING TIME ORDER AND THE
COMPARISON WITH OTHER SYSTEMS.
H. Hirakawa and K. Furukawa.
[Institute for New Generation Computer Technology (ICOT). TM-0073.
1984.]
102691 PROGRESS IN THE INITIAL STAGE OF THE FGCS PROJECT.
K. Takei.
[Institute for New Generation Computer Technology (ICOT). TM-0074.
1984.]
102694 A PERSONAL PERSPECTIVE ON SOME ASPECTS OF THE FGCS - PRELIMINARY
CONSIDERATIONS FOR FIFTH-GENERATION-COMPUTER NETWORKS.
A. Taguchi.
[Institute for New Generation Computer Technology (ICOT). TM-0077.
1984.]
102699 WIRING DESIGN EXPERT SYSTEM FOR VLSI: WIREX.
H. Mori et al.
[Institute for New Generation Computer Technology (ICOT). TM-0083.
1984.]
102700 GDLO: A GRAMMAR DESCRIPTION LANGUAGE BASED ON DCG.
T. Morishita and H. Hirakawa.
[Institute for New Generation Computer Technology (ICOT). TM-0084.
1984.]
102701 DELTA DEMONSTRATION AT ICOT OPEN HOUSE.
K. Murakami et al.
[Institute for New Generation Computer Technology (ICOT). TM-0085.
1984.]
102702 THE BOYER-MOORE THEOREM PROVER IN PROLOG. USER'S MANUAL.
[Institute for New Generation Computer Technology (ICOT). TM-0086.
1984. V3.6, November 1984.]
102703 KNUTH-BENDIX ALGORITHM FOR THUE SYSTEM BASED ON KACHINUKI ORDERING.
K. Sakai.
[Institute for New Generation Computer Technology (ICOT). TM-0087.
1984.]
102707 SOURCE-LEVEL OPTIMIZATION TECHNIQUES FOR PROLOG.
H. Sawamura, T. Takeshima, and A. Kato.
[Institute for New Generation Computer Technology (ICOT). TM-0091.
1985.]
102710 PROTOTYPING A DIALOGING SYSTEM WITH A TOPIC MANAGEMENT FUNCTION.
T. Miyachi et al.
[Institute for New Generation Computer Technology (ICOT). TM-0094.
1985.]
102711 CONSTRAINT-BASED LOGIC DATABASE MANAGEMENT: STRUCTURING
META-KNOWLEDGE IN DATABASE MANAGEMENT.
T. Miyachi et al.
[Institute for New Generation Computer Technology (ICOT). TM-0095.
1985.]
102713 SOME CONSIDERATIONS ON ESSENTIAL REQUIREMENTS OF INTELLIGENT HUMAN
INTERFACES.
A. Taguchi.
[Institute for New Generation Computer Technology (ICOT). TM-0097.
1985.]
102715 SOME ASPECTS OF FUTURE KNOWLEDGE-COMMUNICATION NETWORKS AS
INFRASTRUCTURE FOR FIFTH GENERATION COMPUTERS.
A. Taguchi.
[Institute for New Generation Computer Technology (ICOT). TM-0099.
1985.]
102716 CONSTRUCTING THE SIMPOS SUPERVISOR IN AN OBJECT-ORIENTED APPROACH.
T. Hattori, N. Yoshida, and T. Fujisaki.
[Institute for New Generation Computer Technology (ICOT). TM-0100.
1985.]
102717 SOME EXPERIMENTS ON EKL.
M. Hagiya and S. Hayashi.
[Institute for New Generation Computer Technology (ICOT). TM-0101.
1985.]
102719 SOME ASPECTS OF GENERALIZED PHRASE STRUCTURE GRAMMAR.
S. Amano et al.
[Institute for New Generation Computer Technology (ICOT). TM-0103.
1985.]
102721 DESIGN OF A HIGH-SPEED PROLOG MACHINE (HPM).
R. Nakazaki et al.
[Institute for New Generation Computer Technology (ICOT). TM-0105.
1985.]
102725 WIREX: VLSI WIRING DESIGN EXPERT SYSTEM.
H. Mori et al.
[Institute for New Generation Computer Technology (ICOT). TM-0109.
1985.]
102728 PSI FONT EDITOR USER GUIDE.
H. Touati.
[Institute for New Generation Computer Technology (ICOT). TM-0112.
1985.]
102731 PSI FONT EDITOR IMPLEMENTATION NOTES.
H. Touati.
[Institute for New Generation Computer Technology (ICOT). TM-0115.
1985.]
102762 SOME COMMENTS ON SEMANTICAL DISK CACHE MANAGEMENT FOR KNOWLEDGE BASE
SYSTEMS.
H. Schweppe.
[Institute for New Generation Computer Technology (ICOT). TR-040.
1984.]
102763 [SIMULATOR OF XP'S]
M. Aso.
[Institute for New Generation Computer Technology (ICOT). TR-041.
1984. IN JAPANESE. English abstract.]
102764 AN APPROACH TO A PARALLEL INFERENCE MACHINE BASED ON CONTROL-DRIVEN
AND DATA-DRIVEN MECHANISMS.
R. Onai, M. Asou, and A. Takeuchi.
[Institute for New Generation Computer Technology (ICOT). TR-042.
1984.]
102765 [MANDALA: KNOWLEDGE PROGRAMMING SYSTEM ON LOGIC PROGRAMMING LANGUAGE]
K. Furukawa, A. Takeuchi, and S. Kunifuji.
[Institute for New Generation Computer Technology (ICOT). TR-043.
1984. IN JAPANESE. English abstract.]
102766 ESP REFERENCE MANUAL.
T. Chikayama.
[Institute for New Generation Computer Technology (ICOT). TR-044.
1984.]
102767 THE DESIGN AND IMPLEMENTATION OF A PERSONAL SEQUENTIAL INFERENCE
MACHINE: PSI.
M. Yokota et al.
[Institute for New Generation Computer Technology (ICOT). TR-045.
1984.]
102768 DIALOGUE MANAGEMENT IN THE PERSONAL SEQUENTIAL INFERENCE MACHINE
(PSI).
J. Tsuji et al.
[Institute for New Generation Computer Technology (ICOT). TR-046.
1984.]
102769 [PROLOG SOURCE LEVEL OPTIMIZER: CATALOGUE OF OPTIMIZATION METHODOLOGY]
H. Sawamura.
[Institute for New Generation Computer Technology (ICOT). TR-047.
1984. IN JAPANESE. No English abstract. By H. Sawamura et al.]
102770 [ANALYSIS OF SEQUENTIAL PROLOG PROGRAM]
R. Onai.
[Institute for New Generation Computer Technology (ICOT). TR-048.
1984. IN JAPANESE. No English abstract. By R. Onai et al.]
102771 [META-INFERENCE AND ITS APPLICATION IN A LOGIC PROGRAMMING LANGUAGE]
S. Kunifuji et al.
[Institute for New Generation Computer Technology (ICOT). TR-049.
1984. IN JAPANESE. No English abstract.]
102772 [ARCHITECTURE OF DATAFLOW PARALLEL INFERENCE MACHINE]
T. Ito.
[Institute for New Generation Computer Technology (ICOT). TR-050.
1984. IN JAPANESE. No English abstract. By T. Ito et al.]
102773 [SOFTWARE DEVELOPMENT SUPPORTING SYSTEM]
M. Sugimoto.
[Institute for New Generation Computer Technology (ICOT). TR-051.
1984. IN JAPANESE. No English abstract.]
102774 [HARDWARE DESIGN OF PERSONAL SEQUENTIAL INFERENCE MACHINE]
K. Taki.
[Institute for New Generation Computer Technology (ICOT). TR-052.
1984. IN JAPANESE. No English abstract. By K. Taki et al.]
102775 A RELATIONAL DATABASE MACHINE WITH LARGE SEMICONDUCTOR DISK AND
HARDWARE RELATIONAL ALGEBRA PROCESSOR.
S. Shibayama et al.
[Institute for New Generation Computer Technology (ICOT). TR-053.
1984.]
102776 [THE CONCEPTUAL SPECIFICATION OF THE KERNEL LANGUAGE, VERSION 1]
K. Furukawa et al.
[Institute for New Generation Computer Technology (ICOT). TR-054.
1984. IN JAPANESE. No English abstract.]
102777 SIMPOS: AN OPERATING SYSTEM FOR A PERSONAL PROLOG MACHINE PSI.
T. Hattori, J. Tsuji, and T. Yokoi.
[Institute for New Generation Computer Technology (ICOT). TR-055.
1984.]
102778 THE CONCEPTS AND FACILITIES OF SIMPOS SUPERVISOR.
T. Hattori and T. Yokoi.
[Institute for New Generation Computer Technology (ICOT). TR-056.
1984.]
102779 OVERALL DESIGN OF SIMPOS (SEQUENTIAL INFERENCE MACHINE PROGRAMMING
AND OPERATING SYSTEM).
S. Takagi et al.
[Institute for New Generation Computer Technology (ICOT). TR-057.
1984.]
102780 PROLOG-BASED EXPERT SYSTEM FOR LOGIC DESIGN.
F. Maruyama et al.
[Institute for New Generation Computer Technology (ICOT). TR-058.
1984.]
102781 THE CONCEPTS AND FACILITIES OF SIMPOS FILE SYSTEM.
T. Hattori and T. Yokoi.
[Institute for New Generation Computer Technology (ICOT). TR-059.
1984.]
102782 A NOTE ON THE SET ABSTRACTION IN LOGIC PROGRAMMING LANGUAGE.
T. Yokomori.
[Institute for New Generation Computer Technology (ICOT). TR-060.
1984.]
102783 COORDINATOR - THE KERNEL OF THE PROGRAMMING SYSTEM FOR THE PERSONAL
SEQUENTIAL INFERENCE MACHINE (PSI).
T. Kurokawa and S. Tojo.
[Institute for New Generation Computer Technology (ICOT). TR-061.
1984.]
102784 AN ORDERING METHOD FOR TERM REWRITING SYSTEMS.
K. Sakai.
[Institute for New Generation Computer Technology (ICOT). TR-062.
1984.]
102785 DESIGN AND IMPLEMENTATION OF THE RELATIONAL DATABASE ENGINE.
H. Sakai et al.
[Institute for New Generation Computer Technology (ICOT). TR-063.
1984.]
102786 QUERY PROCESSING FLOW ON RDBM DELTA'S FUNCTIONALLY-DISTRIBUTED
ARCHITECTURE.
S. Shibayama et al.
[Institute for New Generation Computer Technology (ICOT). TR-064.
1984.]
102787 EFFICIENT STREAM/ARRAY PROCESSING IN LOGIC PROGRAMMING LANGUAGE.
K. Ueda and T. Chikayama.
[Institute for New Generation Computer Technology (ICOT). TR-065.
1984.]
102788 DESIGN AND IMPLEMENTATION OF A TWO-WAY MERGE-SORTER AND ITS
APPLICATION TO RELATIONAL DATABASE PROCESSING.
K. Iwata et al.
[Institute for New Generation Computer Technology (ICOT). TR-066.
1984.]
102789 NATURAL LANGUAGE BASED SOFTWARE DEVELOPMENT SYSTEM TELL.
H. Enomoto et al.
[Institute for New Generation Computer Technology (ICOT). TR-067.
1984.]
102790 FORMAL SPECIFICATION AND VERIFICATION FOR CONCURRENT SYSTEMS BY TELL.
H. Enomoto et al.
[Institute for New Generation Computer Technology (ICOT). TR-068.
1984.]
102791 [KNOWLEDGE REPRESENTATION (FOR WG4 WORKSHOP '83)]
F. Mizoguchi and K. Furukawa.
[Institute for New Generation Computer Technology (ICOT). TR-070.
1984. IN JAPANESE. No English abstract. Edited by F. Mizoguchi and K.
Furukawa.]
102792 DESIGN CONCEPT FOR A SOFTWARE DEVELOPMENT CONSULTATION SYSTEM.
M. Sugimoto, H. Kato, and H. Yoshida.
[Institute for New Generation Computer Technology (ICOT). TR-071.
1984.]
102793 COMPARISON OF CLOSURE REDUCTION AND COMBINATORY REDUCTION SCHEMES.
T. Ida and A. Konagaya.
[Institute for New Generation Computer Technology (ICOT). TR-072.
1984.]
102794 [APPROACH TO TRANSLATION IN MORE NATURAL WAY (1)]
H. Tanaka.
[Institute for New Generation Computer Technology (ICOT). TR-073.
1984. IN JAPANESE. No English abstract. By H. Tanaka et al.]
102795 AN OVERVIEW OF RELATIONAL DATABASE MACHINE DELTA.
N. Miyazaki et al.
[Institute for New Generation Computer Technology (ICOT). TR-074.
1984.]
102796 HARDWARE DESIGN AND IMPLEMENTATION OF THE PERSONAL SEQUENTIAL
INFERENCE MACHINE (PSI).
K. Taki et al.
[Institute for New Generation Computer Technology (ICOT). TR-075.
1984.]
102797 MANDALA: A LOGIC BASED KNOWLEDGE PROGRAMMING SYSTEM.
K. Furukawa et al.
[Institute for New Generation Computer Technology (ICOT). TR-076.
1984.]
102798 [PARALLEL INFERENCE MACHINE PIM-R: ITS ARCHITECTURE AND SOFTWARE
SIMULATION]
R. Onai.
[Institute for New Generation Computer Technology (ICOT). TR-077.
1984. IN JAPANESE. No English abstract. By R. Onai et al.]
102799 [PLAN FOR CONSTRUCTING KNOWLEDGE ARCHITECTURE]
H. Kondou.
[Institute for New Generation Computer Technology (ICOT). TR-078.
1984. IN JAPANESE. No English abstract.]
102800 [A MICROPROGRAMMED INTERPRETER FOR THE PERSONAL SEQUENTIAL INFERENCE
MACHINE PSI]
A. Yamamoto.
[Institute for New Generation Computer Technology (ICOT). TR-079.
1984. IN JAPANESE. No English abstract. By A. Yamamoto et al.]
102801 [THE DEVELOPMENT OF EXPERIMENTAL QA-SYSTEMS ON SITUATION SEMANTICS]
T. Kato.
[Institute for New Generation Computer Technology (ICOT). TR-080.
1984. IN JAPANESE. No English abstract.]
102802 [THE COMPOUND LOCAL AREA NETWORK INI - ITS PHYSICAL NETWORK
CONFIGURATION AND CHARACTERISTICS OF PHYSICAL LAYER PROTOCOLS]
A. Taguchi.
[Institute for New Generation Computer Technology (ICOT). TR-081.
1984. IN JAPANESE. No English abstract. By A. Taguchi et al.]
102803 CURRENT STATUS AND FUTURE PLANS OF THE FIFTH GENERATION COMPUTER
SYSTEMS PROJECT.
K. Kawanobe.
[Institute for New Generation Computer Technology (ICOT). TR-083.
1984.]
102804 ARCHITECTURES AND HARDWARE SYSTEMS: PARALLEL INFERENCE MACHINE AND
KNOWLEDGE BASE MACHINE.
K. Murakami, T. Kakuta, and R. Onai.
[Institute for New Generation Computer Technology (ICOT). TR-084.
1984.]
102805 BASIC SOFTWARE SYSTEM.
K. Furukawa and T. Yokoi.
[Institute for New Generation Computer Technology (ICOT). TR-085.
1984.]
102806 SEQUENTIAL INFERENCE MACHINE: SIM PROGRESS REPORT.
S. Uchida and T. Yokoi.
[Institute for New Generation Computer Technology (ICOT). TR-086.
1984.]
102807 SEQUENTIAL INFERENCE MACHINE: SIM - ITS PROGRAMMING AND OPERATING
SYSTEM.
T. Yokoi and S. Uchida.
[Institute for New Generation Computer Technology (ICOT). TR-087.
1984.]
102808 RECURSIVE UNSOLVABILITY OF DETERMINACY, SOLVABLE CASES OF DETERMINACY
AND THEIR APPLICATIONS TO PROLOG OPTIMIZATION.
H. Sawamura and T. Takeshima.
[Institute for New Generation Computer Technology (ICOT). TR-088.
1984.]
102809 THE DESIGN AND IMPLEMENTATION OF RELATIONAL DATABASE MACHINE DELTA.
T. Kakuta et al.
[Institute for New Generation Computer Technology (ICOT). TR-089.
1984.]
102810 A SEQUENTIAL IMPLEMENTATION OF CONCURRENT PROLOG BASED ON THE
SHALLOW BINDING SCHEME.
T. Miyazaki, A. Takeuchi, and T. Chikayama.
[Institute for New Generation Computer Technology (ICOT). TR-090.
1984.]
102811 CONCURRENT PROLOG ON TOP OF PROLOG.
K. Ueda and T. Chikayama.
[Institute for New Generation Computer Technology (ICOT). TR-092.
1984.]
102812 OCCAM TO CMOS EXPERIMENTAL LOGIC DESIGN SUPPORT SYSTEM.
T. Mano et al.
[Institute for New Generation Computer Technology (ICOT). TR-093.
1984.]
102813 FORMULATION OF INDUCTION FORMULAS IN VERIFICATION OF PROLOG PROGRAMS.
T. Kanamori and H. Fujita.
[Institute for New Generation Computer Technology (ICOT). TR-094.
1984.]
102814 TYPE INFERENCE IN PROLOG AND ITS APPLICATIONS.
T. Kanamori and K. Horiuchi.
[Institute for New Generation Computer Technology (ICOT). TR-095.
1984.]
102815 VERIFICATION OF PROLOG PROGRAMS USING AN EXTENSION OF EXECUTION.
T. Kanamori and H. Seki.
[Institute for New Generation Computer Technology (ICOT). TR-096.
1984.]
102816 PRINCIPLES OF OBJ2.
J. A. Goguen, J.-P. Jouannaud, and J. Meseguer.
[Institute for New Generation Computer Technology (ICOT). TR-097.
1984.]
102817 LOGIC DESIGN: ISSUES IN BUILDING KNOWLEDGE-BASED DESIGN SYSTEMS.
F. Maruyama et al.
[Institute for New Generation Computer Technology (ICOT). TR-098.
1984.]
102818 DATA-FLOW BASED EXECUTION MECHANISMS OF PARALLEL AND CONCURRENT
PROLOG.
N. Ito et al.
[Institute for New Generation Computer Technology (ICOT). TR-099.
1984.]
102819 HORN CLAUSE LOGIC WITH PARAMETERIZED TYPES FOR SITUATION SEMANTICS
PROGRAMMING.
K. Mukai.
[Institute for New Generation Computer Technology (ICOT). TR-101.
1985.]
102820 TOWARDS AUTOMATED SYNTHETIC DIFFERENTIAL GEOMETRY 1 - BASIC
CATEGORICAL CONSTRUCTION.
S. Hayashi.
[Institute for New Generation Computer Technology (ICOT). TR-104.
1985.]
102821 ARCHITECTURE OF REDUCTION-BASED PARALLEL INFERENCE MACHINE: PIM-R.
R. Onai et al.
[Institute for New Generation Computer Technology (ICOT). TR-105.
1985.]
102822 [OPERATION MANUAL FOR QUTE PROCESSOR]
T. Sakurai and M. Fujita.
[Institute for New Generation Computer Technology (ICOT). TR-106.
1985. IN JAPANESE. No English abstract.]
102823 [FOUNDATIONS AND APPLICATIONS OF KNOWLEDGE ENGINEERING: PROLOG-BASED
KNOWLEDGE BASE MANAGEMENT]
S. Kunifuji et al.
[Institute for New Generation Computer Technology (ICOT). TR-107.
1985. IN JAPANESE. No English abstract.]
Many more only in the Kanji and Kana.
------------------------------
End of AIList Digest
********************
∂20-Mar-86 2255 LAWS@SRI-AI.ARPA AIList Digest V4 #62
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 Mar 86 22:54:02 PST
Date: Thu 20 Mar 1986 15:06-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #62
To: AIList@SRI-AI
AIList Digest Friday, 21 Mar 1986 Volume 4 : Issue 62
Today's Topics:
Publications - Prolog Books & Prolog Tutorial Software,
Comment - Uses of FORTRAN,
Theory - Turing Test & Computer Intelligence
----------------------------------------------------------------------
Date: Thu, 20 Mar 86 13:47:46 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Re: PROLOG Books
``Start Problem Solving with PROLOG'' by Tom Conlon.
Published in 1985 by Addison-Wesley, Wokingham, U.K.
ISBN 0-201-18270-X.
This book uses micro-PROLOG (available for the Sinclair
Spectrum/(Timex 2000?) and the IBM PC, for example). It
includes many examples and complete programs, including
one for playing Tic-Tac-Toe.
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
Date: 17 Mar 86 02:50:44 GMT
From: ulysses!burl!clyde!watmath!utzoo!utcsri!utai!uthub!utecfa!logicwa
@ucbvax.berkeley.edu (Logicware)
Subject: new Prolog textbook/tutorial software
Readers may be interested in a new Prolog textbook and tutorial
software that two colleagues and I have put together.
The package is called:
The MPROLOG Primer
The book --- A Primer for Logic Programming --- is a 500-page
textbook (18 chapters) with many example programs that are
fully explained.
The tutorial software --- MTUTOR --- contains 9 tutorials on
execution subjects (backtracking, recursion and so forth) and
instruction in use of the built-in predicates. In addition,
there is a "freeform" area where you can enter and test your
own programs.
The package is intended as a general introduction both to logic
programming and to Prolog. It should be of interest to:
-- anyone wanting an inexpensive introduction to Prolog
-- anyone requiring an introductory textbook to teach Prolog
-- anyone who is familiar with other Prologs but who wants to
make an assessment of MProlog before purchasing the
language.
The tutorial software which accompanies the book will run on the
following machines:
-- IBM PC/XT/AT (and compatibles) (512K needed)
-- Tektronix 4404
-- VAX/VMS
-- VAX/UNIX
-- ISI
and ports are currently underway for:
-- SUN
-- APOLLO
Price of the package is $49.95 (US funds).
For more information send electronic mail or contact our customer
service representative:
Roger Walker,
Logicware, 1000 Finch Ave. W.
Suite 600,
Toronto, Ontario, Canada, M3J 2V5
416-665-0022
Richard J. Young
------------------------------
Date: Thu, 20 Mar 86 13:50:20 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Re: Future AI Language (Vol 4 # 57).
Some AI packages could soon have interfaces to numerical code,
particularly in process control: an expert system will make a
decision about a fault, and then a simulation, written in FORTRAN,
will be run to see whether the fix will work.
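That control loop can be sketched in a few lines; the rule table, fix
names, and the simulation stand-in for the FORTRAN code below are all
invented for illustration, not taken from any real system:

```python
# Hypothetical sketch: a rule-based expert system proposes a fix for a
# fault, and a numerical simulation (here a Python stand-in for the
# FORTRAN process model) checks whether the fix would work.

RULES = {
    "pressure_high": ["open_relief_valve", "reduce_feed_rate"],
    "temp_high": ["increase_coolant_flow"],
}

def simulate(fault, fix):
    """Stand-in for the FORTRAN simulation: True if the proposed fix
    brings the simulated plant back within limits."""
    effective = {
        ("pressure_high", "open_relief_valve"),
        ("temp_high", "increase_coolant_flow"),
    }
    return (fault, fix) in effective

def recommend(fault):
    # Try each candidate fix in rule order until the simulation accepts one.
    for fix in RULES.get(fault, []):
        if simulate(fault, fix):
            return fix
    return None

print(recommend("pressure_high"))  # open_relief_valve
```

The point of the split is that the symbolic system only ranks candidate
fixes; the numerical code decides whether a candidate actually works.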
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
Date: Thu, 20 Mar 86 11:59:30 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: More on Turing and the Turing test.
From AIList Vol 4 # 56 :- ``... he [Turing] designed it to be nothing more
than a philosophical conversation-stopper.''
From "Turing's Man : Western Culture in the Computer Age", by J. David
Bolter :- `` It would be a machine that knew men and women better than
they knew themselves. Turing was optimistic about the prospect of this
supercomputer : " I believe that in about fifty years' time it will be
plausible to programme computers ... to make them play the imitation
game so well that an average interrogator will not have more than a 70
per cent chance of making the right identification after five minutes
of questioning" (Feigenbaum and Feldman, Computers and Thought, 19).''
Since this does not quote directly from Turing's own work, it cannot be
regarded as giving the true version of his own hopes for the
test. Bolter continues in the next paragraph with :- ``The appeal of
Turing's test is easy to understand. It offers an operational definition
of intelligence quite in the spirit of behavioral psychology in the
postwar era. A programmer can measure success by statistics - the number
of human subjects fooled by the machine.''
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
Date: Thu, 20 Mar 86 13:49:26 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: More on IQ tests for Computers.
``How a pair of dull-witted programs
can look like geniuses on I.Q. tests.''
This article appeared in the March issue of Scientific American,
in the Computer Recreations column of A. K. Dewdney, which discusses
the concept of an IQ test for computers (cf. Vol 3 # 164 et seq).
He mentions the HI Q program of Marcel Feenstra, which solves
problems of the "sequence completion" and "numerical analogies"
types. This scores 160 on the corresponding parts of the IQ tests
described by Hans J. Eysenck. Dewdney describes his own putative
program SE Q.
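The column excerpt does not give either program's algorithm, but a
standard textbook approach to "sequence completion" puzzles is repeated
differencing; a minimal Python sketch (the function name is invented):

```python
def next_term(seq):
    """Guess the next term of an integer sequence by repeated
    differencing: difference the sequence until a level is constant,
    then extrapolate by summing the last element of each level."""
    levels = [list(seq)]
    while len(levels[-1]) > 1 and len(set(levels[-1])) > 1:
        prev = levels[-1]
        levels.append([b - a for a, b in zip(prev, prev[1:])])
    guess = 0
    for level in reversed(levels):
        guess += level[-1]
    return guess

print(next_term([2, 4, 6, 8]))    # 10 (arithmetic progression)
print(next_term([1, 4, 9, 16]))   # 25 (perfect squares)
```

That a dozen lines can ace this part of an IQ test is exactly the point
Dewdney is making about what such tests measure.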
Dewdney paraphrases ``The Mismeasure of Man'' by Stephen Jay Gould
and says :- ``What it comes to is this: The traditional I.Q. test
rests on the unstated assumption that intelligence, like strength,
is a single quality of human physiology that can be measured by a
graded series of tasks.''
So far, so good.
He then quotes Gould directly :- `` Our brains are enormously
complex computers''.
Hmmm... getting a bit fishy.
Finally, he says :- `` Does the score on the test measure the
intelligence of the computer? If it does not, just how does one
go about measuring the intelligence of a computer, whether it is
made of silicon and plastic or carbon and tissue? The answer:
Probably not by running some I.Q. program through a battery of
tests.''
Two gripes with this. First, who are the carbon/tissue *computers* he is
talking about? Secondly, computers will never be "intelligent";
however, software might *appear* intelligent in certain respects.
Nuff said.
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
P.S. Funny, I thought the Answer was 42.
`` The monkey spoke!'' - Zaphod Beeblebrox on Arthur Dent.
------------------------------
Date: Thu, 20 Mar 86 15:51:04 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Future Ph.D.
The world's first Ph.D. to an AI system was awarded today for PiQ's work
in the field of ...
The World Times, 2185.
The Joka.
------------------------------
Date: Thu, 20 Mar 86 08:34:35 -0500
From: johnson <johnson@dewey.udel.EDU>
Subject: Re: The Turing Test - A Third Quantisation?
|Now, supposing a system has been built which "passes" the test. Why
|not take the process one stage further? Why not try to design an
|intelligent system which can decide whether *it* is talking to machine
|or not?
|
|Gordon Joly
|ARPA: gcj%qmc-ori@ucl-cs.arpa
|UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
Let me get this straight: a human cannot distinguish machine M1 from another
human, but machine M2 *can* distinguish M1 from a human. Will machines of type
M2 then debate about whether it is possible for a human to be modified to pass
the M2 Turing test? Alternatively, perhaps M2s should try to create M3 s.t.
an M3 cannot be distinguished from a human by an M2, or how about an M4, which
is a machine that an M2 cannot distinguish from an M1? But wait, how can an
M2 be sure that an M4 is not simply a copy of an M1? Is some descendant of the
Turing test a test which tries to infer the nature of the designer from
the design?
-johnson@UDEL.EDU
------------------------------
End of AIList Digest
********************
∂26-Mar-86 0128 LAWS@SRI-AI.ARPA AIList Digest V4 #63
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Mar 86 01:28:02 PST
Date: Tue 25 Mar 1986 22:44-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #63
To: AIList@SRI-AI
AIList Digest Wednesday, 26 Mar 1986 Volume 4 : Issue 63
Today's Topics:
Seminars - Parallel OPS5 and Relational Algebraic Operators (UPenn) &
Mental Representation of Bilinguals (BBN) &
Cognitive Model of Ada-Based Development (SMU) &
An Interactive Proof Editor (Edinburgh) &
Graphical Access To Expert Systems (PARC) &
Automatic Design of Graphical Presentations (SU),
Conference - Expert Systems in Process Safety &
Artificial Intelligence Impacts Forum
----------------------------------------------------------------------
Date: Wed, 19 Mar 86 21:25 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Parallel OPS5 and Relational Algebraic Operators (UPenn)
Forwarded From: Glenda Kent <Glenda@UPenn> on Wed 19 Mar 1986 at 9:52
OPS5 PRODUCTION SYSTEMS AND RELATIONAL ALGEBRAIC OPERATORS
ON A MASSIVELY PARALLEL MACHINE
Bruce K. Hillyer
Columbia University
AI production systems and relational database management systems exhibit
complementary characteristics that suggest the possibility of a synergistic
integration. One difficulty is that both types of systems execute relatively
slowly.
This talk discusses algorithms, performance analyses, and simulation results
for the execution of database queries and production systems on a parallel
machine called NON-VON. The results indicate that relational algebraic
operations will be processed as fast as on special-purpose database
architectures, with speedup linear in the size of the machine, and typical OPS5
production systems will fire more than 850 rules per second.
Thursday, March 20, 1986
Room 216 - Moore School
3:00 p.m. - 4:30 p.m.
------------------------------
Date: 17 Mar 1986 07:55-EST
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Mental Representation of Bilinguals (BBN)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
BBN Labs AI/Education Seminar
Speaker: Prof. Molly Potter, MIT
Title: The Mental Representation of Bilinguals
Date: Friday, March 21st, 2:00pm
Place: 2nd floor large conference room,
BBN Labs, 10 Moulton Street, Cambridge
Are the two lexicons of a bilingual directly interconnected, or
connected via only a common, nonlexical concept? Two experiments
on that question will be discussed, one with novice bilinguals
and one with expert bilinguals (Potter, So, von Eckardt and
Feldman, 1984). Related issues concerning mental representation
in bilinguals will be raised for general discussion.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Cognitive Model of Ada-Based Development (SMU)
Computer Science and Engineering Seminar
Toward A Cognitive Model of Ada Based Embedded System Development
Jerry Snodgrass, Southern Methodist University
(Seminar already held, announcement for record only)
Embedded systems, such as aircraft avionics and hospital intensive care
unit systems, have been developed for several years. But the early
steps of the development process have not been researched. The
related research in software engineering has focused on the artifact
and almost entirely ignored the design process used to develop the
artifact. In contrast, artificial intelligence research
(particularly automatic programming, knowledge-based assistant and
cognition research) has forced a more detailed investigation of the
design processes used in programming. In this seminar empirical
research results are presented along with conceptual results requiring
further research. The empirical results show that the human problem
solving control in the early steps of embedded system development is
essentially the same as that reported in recent cognitive research on
algorithm and software design. The planned research, for which most
of the conceptual work has been accomplished, involves
1) integrating the Ada language, object-oriented paradigm, and
empirical results into a Uniform Modularity model; and 2) developing a
frame-based software tool to guide and record the process of
determining the structure of the embedded system being developed.
------------------------------
Date: Thu, 20 Mar 86 10:34:10 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - An Interactive Proof Editor (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday 19th March 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room
Forrest Hill
EDINBURGH.
Professor R. Burstall, Department of Computer Science, University of
Edinburgh will give a seminar entitled - "An Interactive Proof Editor".
This proof editor works like a structure editor for programmes but
enables one to create proofs in first order intuitionist logic. It
uses attribute grammar techniques with local re-evaluation of
attributes. The idea is due to Tom Reps at Cornell, and the work was
done jointly with Brian Ritchie and Tatsuya Hagino.
------------------------------
Date: 21 Mar 86 15:08 PST
From: Ahenderson.pa@Xerox.COM
Reply-to: Ahenderson.pa@Xerox.COM
Subject: Seminar - Graphical Access To Expert Systems (PARC)
PARC Forum
Thursday, March 27
4PM, PARC Auditorium
Ted Shortliffe and Larry Fagan
Medical Computer Science Group
Knowledge Systems Laboratory
Stanford Medical School
GRAPHICAL ACCESS TO EXPERT SYSTEMS: EXAMPLES FROM THE ONCOCIN SYSTEM
The research goals of Stanford's Medical Computer Science group are
directed both toward the basic science of artificial intelligence and
toward the development of clinically useful consultation tools. Our
approach has been eclectic, drawing on fields such as decision analysis,
interactive graphics, and both qualitative and probabilistic simulation
as well as AI. In this presentation we will discuss ONCOCIN, an advice
system designed to suggest optimal therapy for patients undergoing
cancer treatment, as well as to assist in the data management tasks
required to support research treatment plans (protocols). A prototype
version, developed in Interlisp and SAIL on a DEC-20, was used between
May 1981 and May 1985 by oncology faculty and fellows in the Debbie
Probst Oncology Day Care Center at the Stanford University Medical
Center. In recent years, however, we have spent much of our time
redesigning ONCOCIN to run on Xerox 1100 series workstations and to take
advantage of the graphics environment provided on those machines. The
physician's interface has been redesigned to approximate the appearance
and functionality of the paper forms traditionally used for recording
patient status. We have also made changes to correct problems with the
prototype system noted during its clinical use in the early 1980s.
This has involved adopting an object-centered knowledge base design, which
has increased the speed of the program while providing
more flexible access to the large amount of knowledge required by the
system. The workstation version of ONCOCIN has recently been introduced
in the Stanford clinic, and we will demonstrate its operation during the
presentation. We will also describe and demonstrate OPAL, the knowledge
acquisition environment we have developed for ONCOCIN so that expert
oncologists can directly enter their knowledge of protocol-directed
cancer therapy using graphics-based forms developed in the Interlisp-D
environment.
This Forum is OPEN. All are invited.
Host: Austin Henderson (Intelligent Systems Lab, 494-4308)
Refreshments will be served at 3:45 pm
Requests for videotaping should be sent to Susie Mulhern
<Mulhern:PA:Xerox or Mulhern.pa> before Tuesday noon.
------------------------------
Date: Wed 19 Mar 86 09:46:55-PST
From: Jock Mackinlay <JOCK@SU-SCORE.ARPA>
Subject: Seminar - Automatic Design of Graphical Presentations (SU)
Automatic Design of Graphical Presentations
PhD Oral Exam
Jock D. Mackinlay
Computer Science Department
Monday, March 31, 10am
History 205
The goal of the research described in this talk is to develop an
application-independent presentation tool that automatically designs
graphical presentations (e.g. bar charts, scatter plots, and connected
graphs) for relational information. There are two major criteria for
evaluating designs of graphical presentations: expressiveness and
effectiveness. Expressiveness means that a design expresses the
intended information. Effectiveness means that a design exploits the
capabilities of the output medium and the human visual system. A
presentation tool is intended to be used to build user interfaces.
However, a presentation tool will not be useful unless it generates
expressive and effective designs for a wide range of information.
This talk describes a theory of graphical presentations that can be used
to systematically generate a wide range of designs. Complex designs are
described as compositions of primitive designs. This theory leads to
the following synthesis algorithm:
o First, the information is divided into components, each
of which satisfies the expressiveness criterion for a
primitive graphical design.
o Next, a conjectural theory of human perception is used
to select the most effective primitive design for each
component. An effective design requires perceptual
tasks of low difficulty.
o Finally, composition operators are used to compose the
individual designs into a unified presentation of all
the information. A composition operator composes two
designs when the same information is expressed the same
way in both designs (identical parts are merged).
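The three steps above can be caricatured in a few lines of Python; the
ranking table and component types below are invented for illustration
and are not APT's actual perception rules:

```python
# Toy sketch of the synthesis algorithm. Step 2's "conjectural theory
# of perception" becomes a ranking of primitive designs by the
# difficulty of the perceptual task each demands, best first.
EFFECTIVENESS = {
    "quantitative": ["position", "length", "angle"],
    "nominal": ["position", "color", "shape"],
}

def choose_primitive(component_type, expressible):
    """Pick the most effective primitive design among those that can
    express (step 1) a component of the given type."""
    for design in EFFECTIVENESS[component_type]:
        if design in expressible:
            return design
    return None

def synthesize(components, expressible):
    # Step 3 (composition) is reduced here to collecting the
    # per-component choices into one presentation plan.
    return {name: choose_primitive(ctype, expressible)
            for name, ctype in components}

plan = synthesize(
    [("price", "quantitative"), ("car", "nominal")],
    expressible={"position", "length", "color"},
)
print(plan)  # {'price': 'position', 'car': 'position'}
```

In the toy output both components land on positional encodings, which
is roughly how a scatter-plot-style design arises from composition.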
The synthesis algorithm has been implemented in a prototype presentation
tool, called APT (A Presentation Tool). Even though only a few primitive
designs are implemented, APT can generate a wide range of designs that
express information effectively.
------------------------------
Date: 21 Mar 86 16:57:50 EST
From: Patty.Hodgson@ISL1.RI.CMU.EDU
Subject: Conference - Expert Systems in Process Safety
"CALL FOR PAPERS"
EXPERT SYSTEMS AND COMPUTATIONAL METHODS
IN PROCESS SAFETY
American Institute of Chemical Engineers (AIChE) Meeting
Houston, Texas, March 29 - April 2, 1987
Sponsored by the divisions on Computing and Systems Technology (10a)
and Safety and Health
Session Chair: Session Co-Chair:
Prof. V. Venkatasubramanian Prof. E. J. Henley
Intelligent Process Engineering Lab Dept. of Chemical Engineering
Dept. of Chemical Engineering University of Houston
Columbia University University Park
New York, NY 10027 Houston, TX 77004
Tel: (212) 280-4453 Tel: (713) 749-4407
Papers are solicited in the areas of Expert Systems and Computational
Methods in Process Safety for the Houston AIChE Meeting. Topics of
interest include Process Plant Diagnosis, Process Safety and
Reliability, Process Risk Analysis, etc. Please submit TWO copies of the
abstract by "MAY 15, 1986" to both the session chairman and co-chairman
at the addresses given above.
Final manuscripts of the accepted papers are due by October 15, 1986.
------------------------------
Date: 24 Mar 86 16:02:23 GMT
From: sdcsvax!sdcrdcf!burdvax!ted@ucbvax.berkeley.edu (Ted Hermann)
Subject: Conference - Artificial Intelligence Impacts Forum
ARTIFICIAL INTELLIGENCE IMPACTS FORUM
PRESENTED
BY
AMERICAN COMPUTER TECHNOLOGIES, INC.
May 13, 1986
St. Davids Inn
St. Davids, Pennsylvania
American Computer Technologies, Inc.
237 Lancaster Avenue, Suite 255
Devon, PA 19333
WORKSHOP OBJECTIVES:
o describe the business opportunities of Artificial Intelligence technologies
o examine the strengths and limitations of these technologies
o identify current AI products and services on the market and their potential applications
o analyze companies at the forefront of the AI market and those expected to enter soon
o analyze current and emerging international markets for AI technology
o clarify the business growth opportunities and threats associated with AI technology
o provide an understanding of the potential impact Artificial Intelligence will have on business
o identify promising new frontiers in AI research with applications to the commercial and military sectors
o analyze software and hardware needs for emerging AI markets and assess the impacts on U.S. business
WORKSHOP SCHEDULE:
Tuesday Morning, 8:00 - 9:45 AM
I. Introduction
Opening Remarks
Creating Computers that Think
Emerging International AI Markets
II. Assessment of AI Opportunities
Expert Systems
Movement in Space
Vision
Natural Language Comprehension
Learning
Tuesday Morning, 10:15 AM - 12:00 Noon
III. Analyses of AI Products and Services
Current/Future Software Packages
Stand-Alone AI Hardware
AI in Personal Computers
Embedded AI Systems
Knowledge Expert Services
IV. Assessment of Competitive Issues
Strategic Computing/Defense Initiatives
New Japanese MITI-ICOT Perspectives
Western European Consortia
Emerging Eastern Bloc Cooperation
Established AI Firms
Emerging AI Ventures
Joint Ventures and R&D Partnerships
Mergers and Acquisitions
Tuesday Lunch, 12:00 - 1:30 PM
V. Strategic Risks and Constraints
Financial Risks
Social/Legal Risks
Technological Constraints
Market Constraints
Tuesday Afternoon, 2:00 - 3:30 PM
VI. Analyses of End-User Applications
Direct Military Applications
Software Engineering Applications
Non-Military Government Applications
Commercial Applications
Tuesday Afternoon, 3:45 - 5:00 PM
VII. Analyses of Global Trends
Fifth-Generation Machine Architectures
Emerging Fourth-Generation Languages
Other Major Technological Thrusts
Near-Real Time Systems
Economic impact of International AI Markets
Growth of AI products and services
WORKSHOP LEADERS
T. S. Hermann, Ph.D., President of American Computer Technologies,
Inc., has served as the Manager, Plans and Programs at Burroughs' Paoli
Research Center; Director of R&D at Analytics, Inc.; Sr. VP Technology of Sun
Company; President of Franklin Research Center; and President of Mellon
Institute, Carnegie-Mellon University.
Ronald L. Krutz, Ph.D., Director, Computer Engineering Center, Carnegie
Mellon University.
Lewis J. Petrovic, Ph.D., President, Resource Engineering, Inc.
B.K. Wesley Copeland, MBA, President, International Science &
Technology
G. Richard Patton, Ph.D., Ex.VP, Resource Assessment, Inc., and Faculty
Member, Graduate School of Business, University of Pittsburgh
WHO SHOULD ATTEND?
The ARTIFICIAL INTELLIGENCE IMPACTS forum has been established primarily
to address the needs of business persons who are interested in or are
responsible for planning, marketing and manufacturing.
WHAT ARE THE MAJOR ISSUES?
This workshop will assess major AI product opportunities, explore fundamental
trends and market concepts of Artificial Intelligence and will go beyond
conventional strategic assertions within an International business context.
WHAT ARE THE BENEFITS?
THE WORKSHOP will answer the hard business questions of Artificial
Intelligence. Participants will learn of the emerging AI business growth
opportunities; become aware of the key players and their product strategies;
analyze the growing international markets and potential competitors; acquire
forecasts of important technological impacts and thrusts; and will scrutinize
the constraints and risks of AI products.
For Information call Carol Ward, A.C.T., Inc. (215) 687-4015.
------------------------------
End of AIList Digest
********************
∂26-Mar-86 1427 LAWS@SRI-AI.ARPA AIList Digest V4 #64
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Mar 86 14:26:20 PST
Date: Tue 25 Mar 1986 23:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #64
To: AIList@SRI-AI
AIList Digest Wednesday, 26 Mar 1986 Volume 4 : Issue 64
Today's Topics:
Queries - Expert Systems in Wine, Medicine, Documentation, Military &
Parallel Implementation of Rule-Based Expert Systems &
Reactions to Cliches & AI Market Survey & AI in Resource Management &
Funding of AI Proposals
----------------------------------------------------------------------
Date: Thu, 20 Mar 86 23:41:05 CST
From: S076786%UMRVMA.BITNET@WISCVM.WISC.EDU
Subject: expert systems/enology
I'd like to contact individuals at the University of California, Davis,
involved with enology as it might pertain to expert systems/artificial intelligence.
Please send a list of anyone involved so that I might contact them in regard to
current research and development.
------------------------------
Date: 19 Mar 86 22:07:16 GMT
From: decvax!mcnc!ecsvax!ircil@ucbvax.berkeley.edu (Ircil N. Gentry)
Subject: HEME Medical Expert System
I am looking for any information on the HEME medical expert
system - diagnosis of hematologic diseases. If you can help me,
please send any information or phone numbers and electronic
addresses of anyone associated with the project at either
Cornell Medical School or Cornell University. My electronic
address is ecsvax!ircil. Thank you very much.
Chip Gentry
------------------------------
Date: 23 Mar 86 19:57:47 GMT
From: sdcsvax!drillsys!gatech!seismo!ut-sally!ut-ngp!gknight@ucbvax.berkeley.edu (gknight)
Subject: Neuropsychology expert system inquiry.
Is anyone aware of research or development work on an
expert system for clinical neuropsychological assessment?
If so, please send relevant information to me by e-mail
and I will summarize and post responses.
Thanks,
Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480).
Biopsychology Program, Univ. of Texas at Austin. "There is nothing better
in life than to have a goal and be working toward it." -- Goethe.
------------------------------
Date: 18 Mar 86 14:17:00 GMT
From: pur-ee!uiucdcs!convex!graham@ucbvax.berkeley.edu
Subject: towards better documentation
Toward better documentation:
Graham's law: The manual is useless.
Corollaries:
1. It's not in the manual.
2. If it is in the manual, you can't find it.
3. If you find it, it's wrong.
I am interested in creating an expert system to serve as on-line documentation.
The intent is to abrogate the above law and corollaries. Does anyone know of
such a system or any effort(s) to produce one?
I am an AI novice. This system is to serve as my introduction to the field.
What references should I read to get started on this? What approach would
you recommend?
Marv Graham; Convex Computer Corp. {allegra,ihnp4,uiucdcs,ctvax}!convex!graham
------------------------------
Date: 21 Mar 86 17:58:07 GMT
From: sdcsvax!noscvax!priebe@ucbvax.berkeley.edu (Carey E. Priebe)
Subject: rule-based expert system
We are searching for an EXISTING rule based expert system.
We intend to implement the selected system on a SIMD machine
using an experimental bit-vector approach to determine the
degree of performance enhancement.
Ideally we would like a time-sensitive, joint services application,
but any and all proposed systems will be considered.
The one characteristic the system MUST possess is rules. The
closer the system is to a pure-production system the better.
We will recode the inference engine specifically for our parallel
processor.
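For readers unfamiliar with the term, a pure production system is just a set of if-then rules plus a working memory of facts, driven by a match-act cycle. A minimal sketch (the rules and fact names are invented, and this is not any of the systems mentioned in this message):

```python
# A minimal "pure production system": working memory of facts plus
# if-then rules, with a forward-chaining match-act cycle.
# Rules and fact names are hypothetical, chosen only for flavor.
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # the "act" half of the cycle
                changed = True
    return facts

rules = [
    (("radar-contact", "no-iff-response"), "possible-threat"),
    (("possible-threat", "closing-fast"), "alert"),
]
result = forward_chain({"radar-contact", "no-iff-response", "closing-fast"}, rules)
print(sorted(result))
```

The match step here is naive set containment; it is exactly this step (matching every rule against working memory) that SIMD bit-vector approaches try to parallelize.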
Anyone with such a system in hand, or pointers to same, should
contact me either via the net or by phone. Any help will be
greatly appreciated.
Additionally, anyone with information on any of the following
systems, please drop me a note:
Application of A I to Tactical Operations
Maj. Timothy Campen
Don E Gordon, HRB-Singer,Inc
Expert Systems for Intelligence Fusion
R Peter Bonasso, The MITRE Corp.
Expert System for Tactical I&W Analysis
Douglas Lenat, Stanford
Albert Clarkson, Garo Kiremidjian, ESL/TRW
Thanx,
Carey Priebe
*********************************
* carey priebe *
* *
* priebe@cod.UUCP *
* priebe@nosc.UUCP *
* priebe@cod.nosc.MIL *
* ucbvax!sdcsvax!noscvax!priebe *
* *
* Naval Ocean Systems Center *
* Code 421 *
* San Diego, CA 92152 *
* *
* Ph. (619) 225-6571 *
*********************************
------------------------------
Date: Fri, 21 Mar 86 23:30 EST
From: KROVETZ%umass-cs.csnet@CSNET-RELAY.ARPA
Subject: cliche's
Does anyone know of any studies or literature relating to
the reactions of people the first time they hear a cliche'?
Thanks,
Bob
krovetz@umass (csnet)
krovetz%umass.csnet@csnet-relay (arpanet)
------------------------------
Date: Friday, 21 Mar 1986 05:33:05-PST
From: wachsmuth%gvaic2.DEC@decwrl.DEC.COM (Markus Wachsmuth)
Subject: AI market(ing) issues.
Does anyone have solidly founded information concerning AI's market potential,
and present share of the software and hardware markets? In reply to this note
(or directly to me), I would appreciate responses for the US, Japanese, and
European S/W & H/W markets, in dollar values.
Are any AI market studies available for viewing? If you have copies, please
either attach them as a response to this note, or send them directly to me.
Thank you, in anticipation, for your replies.
Markus Wachsmuth
43 Route de Prevessin
CH-1217 GENEVA, SWITZERLAND
wachsmuth%gvaic2.DEC@decwrl
wax%gvaic2.DEC@decwrl
wachsmuth%gva04.DEC@decwrl
wachsmuth%gvaeis.DEC@decwrl
------------------------------
Date: Fri, 21 Mar 86 13:43:30 est
From: munnari!trlvlsi.trl.oz!andrew@seismo.CSS.GOV
Subject: Small AI companies
I will be visiting the States later this year and I am looking for places
to visit active in AI. Whilst I am familiar with the larger companies and
academic institutions, I am aware that I should perhaps also look at the
smaller companies active in AI. Can anyone help? (Areas of interest:
application of AI to resource management (e.g., network management), learning
research, design using AI.)
ARPA: andrew%trlvlsi.trl.oz@seismo.css.gov
ACSNET: andrew@trlvlsi.trl
UUCP: !{seismo, mcvax, ucb-vision, ukc}!munnari!trlvlsi.trl!andrew
VOICE: +61 3 5416241
Andrew Jennings
Telecom Australia Research Laboratories,
P.O. Box 249
Clayton, Victoria 3168, AUSTRALIA.
------------------------------
Date: Mon 24 Mar 86 15:20:14-PST
From: Daniel Davison <DAVISON@SUMEX-AIM.ARPA>
Subject: funding of AI proposals
I'm developing a pattern recognition system for specific biological
structures (helices in ribosomal RNAs). After a demonstration version
is running (we are currently using OPS5), I'd like to apply for a grant to
continue the work. I'd also like to apply to places other than ONR,
DARPA, and their friends. I would like to know if there are non-DoD
agencies that fund AI work. I don't think NIH would, but maybe NSF?
By the way, I'm familiar with the work of Abarbanel and coworkers on
pattern recognition for protein structure; this work would derive from
that work but not duplicate it. If anyone knows of other biological
AI-guided pattern recognition, please drop me a line.
Thanks,
dan (davison@sumex-aim.arpa, davison@bnl.arpa)
best e-mail address: bchs6@uhupvm1.bitnet
------------------------------
End of AIList Digest
********************
∂02-Apr-86 0307 LAWS@SRI-AI.ARPA AIList Digest V4 #65
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Apr 86 03:06:46 PST
Date: Sun 30 Mar 1986 22:34-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #65
To: AIList@SRI-AI
AIList Digest Monday, 31 Mar 1986 Volume 4 : Issue 65
Today's Topics:
Seminars - Concurrent Processing with Result Sharing (SMU) &
Planning by Procedural Inference (SRI) &
Processes, Events, and the Frame Problem (CSLI) &
Inexact Reasoning using Graphs (MIT),
Conference - 1st Australian Applied AI Congress &
Knowledge Representation Tools for Expert Systems &
AI Impacts Workshop
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Concurrent Processing with Result Sharing (SMU)
Concurrent Processing with Result Sharing: A Unified View
of Efficient Computations
Speaker: S. Krishnaprasad, Southern Methodist University
(kp%smu@csnet-relay convex!smu!kp)
Location: 315SIC, Southern Methodist University
Time: 2:00 PM
Date: April 3, 1986
Abstract
A major aspect of efficient problem solving is to avoid redundant
recomputations. This talk identifies the need for, and ways to incorporate,
both problem structure and problem dynamics, in the context of concurrent
processing, for fast and efficient problem solving. The notions of horizontal
locality and vertical locality are introduced to capture the essence of
problem dynamics. Algorithms for decomposition
under dynamics are discussed for a special class of computations.
A new model of problem solving called Concurrent Processing with Result
Sharing (CPRS) is defined along with measures that characterize efficiency
of problem solving. In a general setting, this model is related to the notion
of a working set in a concurrent processing environment. A simulation
strategy is presented to demonstrate the usefulness of the CPRS model when multiple
concurrent computations compete for limited computational resources.
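The result-sharing idea can be illustrated very loosely (this is an analogy, not the CPRS model itself) with a shared result table consulted by overlapping computations, so that no subproblem is ever recomputed:

```python
# Result sharing via a common table: overlapping computations reuse each
# other's work. `calls` counts how many subproblems were actually computed.
# This is an analogy to the result-sharing idea, not the CPRS model's code.
shared_results = {}
calls = []

def fib(n):
    """Fibonacci with a shared result table across all callers."""
    if n in shared_results:
        return shared_results[n]
    calls.append(n)
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    shared_results[n] = result
    return result

# Two "computations" that overlap heavily: the second finds all of its
# subresults already in the shared table and triggers no new work.
a = fib(10)
b = fib(8)
print(a, b, len(calls))  # 55 21 11
```

Without the shared table, the naive recursion would recompute the same subproblems exponentially many times; with it, each of the 11 subproblems is computed exactly once.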
------------------------------
Date: Wed 26 Mar 86 18:30:50-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Planning by Procedural Inference (SRI)
PLANNING BY PROCEDURAL INFERENCE
Dan Carnese (CARNESE@SRI-KL)
AI Lab, Schlumberger Palo Alto Research (SPAR)
11:00 AM, MONDAY, March 31
SRI International, Building E, Room EJ228 (new conference room)
The standard approach to plan construction involves applying a general planning
algorithm to a representation of a problem to be solved. This approach will
fail on a given problem when the search space explored by the algorithm is too
large. If this occurs, the only alternatives are to re-encode the problem or
to improve the general algorithm.
In this talk, I'll describe an alternative approach where control of the
planning process is provided by a procedure which constructs proofs from
premises characterizing the domain. This approach allows arbitrary
procedures to be used for control, while retaining the desirable property
that unsound inferences cannot be made.
The technique will be illustrated with examples from the domain of
computer-aided manufacturing.
------------------------------
Date: 25 Mar 86 1134 PST
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Processes, Events, and the Frame Problem (CSLI)
PROCESSES, EVENTS, AND THE FRAME PROBLEM
Michael Georgeff
Artificial Intelligence Center, SRI International
and
Center for the Study of Language and Information
Stanford University
Thursday, March 27, 3pm (NB: New time!)
MJH 252
In this talk we will consider various models of actions and events
suited to reasoning about multiple agents situated in dynamic
environments. We will also show how the notion of process is
essential in multiagent domains, and contrast this with most
approaches in AI that are based solely on the allowable behaviors of
agents. We will then consider how we might go about specifying the
properties of events and processes, and whether or not such
specifications require nonmonotonicity or circumscription. Finally,
we will examine various views of the frame problem and see to what
extent some of the major difficulties can be overcome.
------------------------------
Date: Fri 28 Mar 86 11:38-EST
From: "Lisa F. Melcher" <LISA@XX.LCS.MIT.EDU>
Subject: Seminar - Inexact Reasoning using Graphs (MIT)
Wednesday, April 16, 1986
3:45 p.m.....Refreshments
4:00 p.m.....Lecture
NE43 - 512A
JUDEA PEARL
Computer Science Department
UCLA
"Inexact Reasoning Using Graphs"
Probability theory is shunned by most researchers in Artificial Intelligence.
New calculi, claimed to better represent human reasoning under uncertainty,
are being invented and reinvented at an ever-increasing rate. A major reason
for the emergence of this curious episode has been the objective of making
reasoning systems TRANSPARENT, i.e., capable of producing PSYCHOLOGICALLY
MEANINGFUL explanations for the intermediate steps used in deriving the
conclusions.
While traditional probability theory, admittedly, has erected cultural
barriers against meeting this requirement, we shall show that these barriers
are superficial, and can be eliminated with the use of DEPENDENCY GRAPHS.
The nodes in these graphs represent propositions (or variables), and the arcs
represent causal dependencies among conceptually-related propositions. We
further argue that the basic steps invoked while people query and update
their knowledge correspond to mental tracings of preestablished links in such
graphs, and it is the degree to which an explanation mirrors these tracings
that determines whether it is considered "psychologically meaningful".
The first part of the talk will examine what properties of probabilistic
models can be captured by graphical representations, and will compare the
properties of two such representations: Markov Networks and Bayes Networks.
The second part will introduce a calculus for performing inferences in Bayes
Networks. The impact of each new evidence is viewed as a perturbation that
propagates through the network via local communication among neighboring
concepts. We show that such an autonomous propagation mechanism leads to
flexible control strategies and sound explanations, that it supports both
predictive and diagnostic inferences, that it is guaranteed to converge in
time proportional to the network's diameter, and that every proposition is
eventually accorded a measure of belief consistent with the axioms of
probability theory.
In conclusion, we will show that the current trend of abandoning probability
theory is grossly premature--taking graph propagation as the basis for
probabilistic reasoning satisfies most computational requirements for
managing uncertainties in reasoning systems and, simultaneously, it exhibits
epistemological features unavailable in any competing formalism.
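As a deliberately tiny instance of the local updating the abstract describes, here is a two-node network (Disease -> Symptom) in Python. The probabilities are invented, and a real Bayes network propagates such updates as messages among many neighboring nodes rather than in one step:

```python
# A toy instance of evidence absorption in a two-node Bayes network
# (Disease -> Symptom). The numbers are invented for illustration; the
# point is that the update is a purely local Bayes-rule computation.
def posterior(prior, likelihood, evidence):
    """P(cause | evidence) over the cause's values, by Bayes' rule."""
    joint = {c: prior[c] * likelihood[c][evidence] for c in prior}
    z = sum(joint.values())                 # normalizing constant
    return {c: joint[c] / z for c in joint}

prior = {"flu": 0.1, "healthy": 0.9}
likelihood = {                              # P(symptom | cause)
    "flu":     {"fever": 0.8, "no_fever": 0.2},
    "healthy": {"fever": 0.1, "no_fever": 0.9},
}
post = posterior(prior, likelihood, "fever")
print(round(post["flu"], 3))  # 0.471
```

Each such local computation is one "perturbation" step; repeating it along the arcs of the graph is what yields convergence in time proportional to the network's diameter.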
Sponsored by TOC, Laboratory for Computer Science
Ronald Rivest, Host
------------------------------
Date: Wed, 26 Mar 86 15:20:02 est
From: decvax!mulga!aragorn.oz!brian@decwrl.DEC.COM (Brian J. Garner)
Subject: Conference - 1st Australian Applied AI Congress
Call for Papers:
1
11 st
111 AUSTRALIAN
11 ARTIFICIAL
11 INTELLIGENCE
11 CONGRESS
11
1111 Melbourne, November 18-20, 1986
CALL FOR PAPERS
Abstracts of papers to be selected for presentation to the 1st Australian
Artificial Intelligence Congress are now invited. The three-part program
comprises:
i) AI in Education
- Intelligent tutors
- Computer-managed learning
- Course developers environment
- Learning models
- Course authoring software
ii) Expert System Applications
- Deductive databases
- Conceptual schema
- Expert system shells (applications and limitations)
- Interactive knowledge base systems
- Knowledge engineering environments
- Automated knowledge acquisition
iii) Office Knowledge Bases
- Document classification and retrieval
- Publishing systems
- Knowledge source systems
- Decision support systems
- Office information systems
Tutorial presenters are also sought. Specialists are required
in the areas of:
- CommonLoops
- Natural language processing
- Inference engines
- Building knowledge databases
- Search strategies
- Heuristics for AI problem solving
Format: PC diskette to Division of Computing and Mathematics, Deakin University,
Victoria 3217, Australia. Attn. Dr. Brian Garner.
ACSnet address: brian!aragorn.oz
CSNET address: brian@aragorn.oz
UUCP address: seismo!munnari!aragorn.oz!brian
decvax!mulga!aragorn.oz!brian
ARPA address: munnari!aragorn.oz!brian@seismo.arpa
decvax!mulga!aragorn.oz!brian@Berkeley
DEADLINES: All submissions by May 16, 1986. Notification by June 30.
Inquiries: Stephen Moore, Director, 1AAIC86, tel: (02)439-5133.
------------------------------
Date: 26 Mar 86 21:06:04 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!mcvax!prlb2!lln-cs!hb@ucbvax.berkeley.edu (Hubert Broze)
Subject: Conference - Knowledge Representation Tools for Expert Systems
=================================================================
Conference announcement :
"KNOWLEDGE REPRESENTATION TOOLS FOR EXPERT SYSTEMS"
Louvain-la-Neuve (Belgium), April 21st, 1986.
Place des Sciences, Auditorium A01
Organized jointly by :
L'Unite d'Informatique de l'Universite Catholique de Louvain
The Belgian Association for Artificial Intelligence (BAAI)
The ACM Student Chapter of Louvain-la-Neuve.
PROGRAM :
9 H 30 Participants welcome & Opening of the industrial exhibition
10 H 00 - 11 H 00 F. ARLABOSSE (Framentec, Paris) :
"The representation of Knowledge : the industrial phase"
11 H 15 - 12 H 15 J. FERBER (LRI-Univ. Paris-Sud)
"Reflections in object-oriented languages"
12 H 15 - 14 H 30 lunch
14 H 30 - 15 H 30 P.Y. GLOESS (CNRS & Graphael) :
"OBLOGIS : une implantation orientee objet de la logique
de Prolog et liaison de cette logique avec des objets"
15 H 45 - 16 H 45 R. VENKEN (Bim)
"BIM-Prolog : A new implementation of Prolog"
16 H 45 - 18 H 00 Cocktail (kindly offered by intersem-Sligos)
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
During the whole day, an industrial exhibition will be held with the
participation of Apollo Computer, BIM (Sun), CPP (KES), Ferranti (ART),
IBM, Rank Xerox, Symbolics, Tektronix, Texas Instruments, etc.
Participation in the meeting is FREE OF CHARGE.
Additional Information may be obtained from :
E. Gregoire, Unite Info, Place Ste Barbe, 2, B1348 Louvain-la-Neuve,
Belgium.
Tel : + 32 10 43 24 15
UUCP : {prlb2,vmucnam}!lln-cs!eg
------------------------------
Date: 26 Mar 86 15:54:10 GMT
From: hplabs!sdcrdcf!burdvax!ted@ucbvax.berkeley.edu (Ted Hermann)
Subject: Conference - AI Impacts Workshop
ARTIFICIAL INTELLIGENCE IMPACTS
WORKSHOP
PRESENTED BY
AMERICAN COMPUTER TECHNOLOGIES, INC.
June 4-6, 1986
FAA Technical Center
Atlantic City Airport, New Jersey
American Computer Technologies, Inc.
237 Lancaster Avenue, Suite 255
Devon, PA 19333
For Information call Carol Ward, A.C.T., Inc. (215) 687-4148 or write to above
address.
WORKSHOP OBJECTIVES:
o describe the business opportunities of Artificial Intelligence technologies
o examine the strengths and limitations of these technologies
o identify current AI products and services on the market and their potential applications
o analyze companies at the forefront of the AI market and those expected to enter soon
o analyze current and emerging international markets for AI technology
o clarify the business growth opportunities and threats associated with AI technology
o provide an understanding of the potential impact Artificial Intelligence will have on business
o identify promising new frontiers in AI research with applications to the commercial and military sectors
o analyze software and hardware needs for emerging AI markets and assess the impacts on U.S. business
WORKSHOP TOPICS:
I. Introduction
Opening Remarks
Creating Computers that Think
Emerging International AI Markets
II. Assessment of AI Opportunities
Expert Systems
Movement in Space
Vision
Natural Language Comprehension
Learning
III. Analyses of AI Products and Services
Current/Future Software Packages
Stand-Alone AI Hardware
AI in Personal Computers
Embedded AI Systems
Knowledge Expert Services
IV. Assessment of Competitive Issues
Strategic Computing/Defense Initiatives
New Japanese MITI-ICOT Perspectives
Western European Consortia
Emerging Eastern Bloc Cooperation
Established AI Firms
Emerging AI Ventures
Joint Ventures and R&D Partnerships
Mergers and Acquisitions
V. Strategic Risks and Constraints
Financial Risks
Social/Legal Risks
Technological Constraints
Market Constraints
VI. Analyses of End-User Applications
Direct Military Applications
Software Engineering Applications
Non-Military Government Applications
Commercial Applications
VII. Analyses of Global Trends
Fifth-Generation Machine Architectures
Emerging Fourth-Generation Languages
Other Major Technological Thrusts
Near-Real Time Systems
Economic impact of International AI Markets
Growth of AI products and services
WORKSHOP LEADERS
T. S. Hermann, Ph.D., President of American Computer Technologies,
Inc., has served as the Manager, Plans and Programs at Burroughs' Paoli
Research Center; Director of R&D at Analytics, Inc.; Sr. VP Technology of Sun
Company; President of Franklin Research Center; and President of Mellon
Institute, Carnegie-Mellon University.
Ronald L. Krutz, Ph.D., Director, Computer Engineering Center, Carnegie
Mellon University.
Lewis J. Petrovic, Ph.D., President, Resource Engineering, Inc.
B.K. Wesley Copeland, MBA, President, International Science &
Technology
G. Richard Patton, Ph.D., Ex.VP, Resource Assessment, Inc., and Faculty
Member, Graduate School of Business, University of Pittsburgh
WHO SHOULD ATTEND?
The ARTIFICIAL INTELLIGENCE IMPACTS workshop has been established primarily
to address the needs of business persons who are interested in or are
responsible for Governmental Program planning, marketing and manufacturing.
WHAT ARE THE MAJOR ISSUES?
This workshop will assess major AI product opportunities, explore fundamental
trends and market concepts of Artificial Intelligence and will go beyond
conventional strategic assertions within an International business context.
WHAT ARE THE BENEFITS?
THE WORKSHOP will answer the hard business questions of Artificial
Intelligence. Participants will learn of the emerging AI business growth
opportunities; become aware of the key players and their product strategies;
analyze the growing international markets and potential competitors; acquire
forecasts of important technological impacts and thrusts; and will scrutinize
the constraints and risks of AI products.
For Information call Carol Ward, A.C.T., Inc. (215) 687-4148 or write to above
address.
------------------------------
End of AIList Digest
********************
∂02-Apr-86 0625 LAWS@SRI-AI.ARPA AIList Digest V4 #66
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 Apr 86 06:25:29 PST
Date: Sun 30 Mar 1986 22:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #66
To: AIList@SRI-AI
AIList Digest Monday, 31 Mar 1986 Volume 4 : Issue 66
Today's Topics:
Queries - Eliza & BKG & Public Domain Software &
Lisp Syntax & Basic ATN & Economics of Expert Systems,
Discussion - IQ Tests for Computers & Computer Dialog
----------------------------------------------------------------------
Date: Thu 27 Mar 86 13:26:15-CST
From: AI.HASSAN@MCC.ARPA
Subject: Eliza
Where could I run Eliza (Weizenbaum's program) or get a copy of the
source code? Send reply to hassan@mcc.arpa---Thanks.
H.
------------------------------
Date: Thu, 27 Mar 86 10:35:57 cst
From: Dan Nichols <dnichols%tilde%ti-csl.csnet@CSNET-RELAY.ARPA>
Subject: BKG request
I am interested in obtaining a copy of Hans Berliner's
famous BKG program. Does anyone know of an implementation
in LISP or for UNIX?
I would also love to have a copy of the source for studying.
Can anyone help or can anyone tell me if Mr. Berliner is
on the net and how to reach him?
Please respond to me rather than flooding this list.
*USNail* *electronic*
Dan Nichols USENET: {ctvax,im4u,texsun,rice}!ti-csl!dnichols
POB 226015 M/S 238 ARPA: Dnichols%TI-CSL@CSNet-Relay
Texas Instruments Inc. CSNET: Dnichols@Ti-CSL
Dallas, Texas VOICE: (214) 995-6090
75266 COMPUSERVE: 72067,1465
He o shite shiri-tsubome!
------------------------------
Date: Fri, 28 Mar 86 9:45:20 EST
From: John Shaver STEEP-TMAC 879-7602 <jshaver@apg-3>
Subject: Public Domain Software
I recently found a public domain PROLOG at Simtel20 pd:<pc-blue.vol157>.
Are there other such programs which could be used by persons with access
to an IBM PC or similar computers?
John
------------------------------
Date: 28 Mar 86 10:29:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Lisp syntax for inference engines
OK, I have a dumb question for you Lisp wizards. In any
fact-rule inferencing system, there must be a distinction
between constants and variables. In Prolog and OPS5 these are
clearly distinguished by syntax, a la:
| constant variable
CProlog | red Color (capital letter on variable)
OPS5 | red <color> (angle brackets on variable)
The Lisp analogs would appear to be:
Lisp | 'red color (quote on constant)
Note, for instance that you can bind "color" to "'red", or
to another variable, like "hair-color", or leave it unbound,
just like a good ole variable in Prolog and OPS5. Similarly,
'red has an unchanging, self-evident value, just like a
well-behaved constant.
But in the published algorithms, like in "Lisp" by Winston or
"AI Programming" by Charniak, it seems that some spelling
convention for symbols is dreamed up to distinguish the two, e.g.,
red (constant) and ?color (variable), and the quoted form is not
used at all. Why not use the mechanism provided directly by the
language? Is this just a matter of taste, that people like to
decorate the variable and not the constant? Or is there some
deep-seated semantic/efficiency-type reason lurking here?
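One consideration, offered as a sketch rather than a definitive answer: matchers examine patterns as unevaluated data, so a spelling convention lets the matcher classify a symbol just by looking at it, without evaluating anything. A minimal matcher (in Python, standing in for the Lisp texts cited above; the "?" convention and these names are illustrative):

```python
# A minimal pattern matcher illustrating the "?variable" spelling
# convention: the pattern is inspected as plain data, so variables are
# recognized by their spelling, not by evaluation. Illustrative only.
def match(pattern, datum, bindings=None):
    """Match a flat pattern against a datum; '?x'-style symbols bind."""
    bindings = dict(bindings or {})
    if len(pattern) != len(datum):
        return None
    for p, d in zip(pattern, datum):
        if p.startswith("?"):                 # variable, by spelling
            if p in bindings and bindings[p] != d:
                return None                   # inconsistent rebinding
            bindings[p] = d
        elif p != d:                          # constant: must match exactly
            return None
    return bindings

print(match(["color", "?x", "red"], ["color", "hair", "red"]))
# -> {'?x': 'hair'}
```

Note that the matcher never consults the host language's variable bindings at all; "?x" is bound in the matcher's own environment (the bindings dictionary), which is one reason such systems do not simply reuse quote and symbol values.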
John Cugini <Cugini@NBS-VMS>
National Bureau of Standards
------------------------------
Date: Sun 30 Mar 86 16:28:53-EST
From: John C. Akbari <AKBARI@CS.COLUMBIA.EDU>
Subject: basic atn
are there new, readable introductions to the theory (and implementation!) of
atns? examples of code would be most helpful. anyone researching (or just
hacking) with object-oriented approaches to parsing, PLEASE inform me of
your work (e.g., FLAVORS, LOOPS, NoteCards, etc.). will summarize for
ai bb.
thanks.
john akbari
akbari@cs.columbia.edu
------------------------------
Date: Thu, 27 Mar 86 10:50:41 est
From: munnari!psych.uq.oz!ross@seismo.CSS.GOV (Ross Gayler)
Subject: economics of expert systems - assistance please
I am currently working on a project which, amongst other things, requires me
to find out something about the economics of expert systems. The technology
of expert systems seems to be a classical case of a solution searching for
appropriate problems. I am quite happy to believe that expert systems can
be much more cost-effective than conventional systems for certain classes of
problems, but what are the characteristics of these problems?
Specifically, I would like to know how the implementation costs of expert
systems vary as a function of attributes of the problem (complexity, size,
uncertainty etc.), attributes of the implementors (experience with tools and
domain etc.) and the attributes of the tools (representations, inference
methods, strategies etc.). I would also like to know how the system costs
are distributed across the system life cycle and how all this information
compares with conventional computer systems.
If this was a movie it would be "Yourdon and de Marco do expert systems".
I can't recall having seen any serious discussion of this area. The only
statements have been along the lines of "We coded 10 rules per week" and
unsubstantiated claims for ease of maintenance. I don't actually expect
strong empirical work at this stage but some good conceptual analyses would
be nice. Any references, pointers or opinions would be gratefully accepted.
Ross Gayler | ACSnet: ross@psych.uq.oz
Division of Research & Planning | ARPA: ross%psych.uq.oz@seismo.css.gov
Queensland Department of Health | CSNET: ross@psych.uq.oz
GPO Box 48 | JANET: psych.uq.oz!ross@ukc
Brisbane 4001 | UUCP: ..!seismo!munnari!psych.uq.oz!ross
AUSTRALIA | Phone: +61 7 227 7060
------------------------------
Date: Fri, 21 Mar 86 09:57:28 cst
From: preece%ccvaxa@gswd-vms (Scott E. Preece)
Subject: More on IQ tests for Computers.
> Two gripes with this. Who are the carbon/tissue *computers* he is
> talking about? Secondly, computers will never be "intelligent";
> however software might *appear* intelligent in certain respects.
> Nuff said.
> Gordon Joly
Do we really want this list to be a battleground for unsubstantiated
personal opinions on the potential for machine intelligence?
scott preece
gould/csd - urbana
uucp: ihnp4!uiucdcs!ccvaxa!preece
arpa: preece@gswd-vms
------------------------------
Date: 25 Mar 86 03:14:07 GMT
From: pur-ee!pucc-j!pucc-h!ahh@ucbvax.berkeley.edu (Mark Davis)
Subject: Computer Dialogue
I have a question that I have been pondering over for some time.
I have asked a few people about it and have received a few
different answers. The question is:
Can a computer feel, and tell you its feelings?
I say that if the computer is actually having a bad day (e.g., disk
troubles and the like), then somewhere in the operating system
there should exist some functions to let the user know how it
feels in some friendly way.
I consider this to be a true feeling of the computer.
However, many of my associates tell me that this would
be something built into the system of an unliving thing,
and that it is therefore only simulated.
I would like to hear your opinions on this subject.
Mark Davis
------------------------------
Date: 24 Mar 86 13:59:30 GMT
From: allegra!mit-eddie!think!harvard!talcott!panda!teddy!mjn@ucbvax.
berkeley.edu
Subject: Re: re: Computer Dialogue #1
> Maybe not, but this only applies to present-day computers. "Some people
> realize that brain cells don't feel emotions any more than toasters do"...
> doesn't mean that a combination of many brain cells cannot, and the same
> could apply to future computers with many times the capability of today's
> computers.
"Some people realize that brain cells don't feel emotions any more than
toasters do"... doesn't mean that a combination of many toasters cannot, and
the same could apply to future toasters with many times the capability of
today's toasters.
Mark J. Norton
{decvax,linus,wjh12,mit-eddie,cbosgd,masscomp}!genrad!panda!mjn
mjn@sunspot
------------------------------
Date: 27 Mar 86 02:52:36 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
Mark Davis asks if computers have anything akin to human feelings.
One of the most salient of human feelings is pain, which is the
name of the brain state triggered by neural impulses signalling
damage or distress to body tissue.
Consider one of the most complex computers in operation today--a
No. 5 ESS (Electronic Switching System) in the North American
Telephone Network. It has many sensors throughout its equipment
bays which detect loss of functionality. These sensors raise
alarms in the central processor which are functionally equivalent
to the human sensation of pain. The central processor responds
by taking steps to ameliorate the problem. It calls the "doctor"
(craftsperson) for assistance and otherwise takes prudent steps
to protect itself from consequential harm.
On another level of analogy, there is an interesting comparison
between diagnostic messages from a computer and human emotional
responses when faced with a situation ("input") for which
the computer or person is unprepared. (See my Computer Dialogues
#1 and #2 for a somewhat whimsical portrayal of this comparison.)
Leaving aside the semantic issues, one notes a curious mapping
between machine states/brain states and the corresponding
input/output patterns. It seems to me that human feelings
correspond *mutatis mutandis* to functionally equivalent
phenomena within computers and other complex systems.
--Barry Kort ...ihnp4!hounx!kort
------------------------------
Date: 23 Mar 86 15:03:20 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue #1
Dear Charles and Peter,
Please understand that I wrote Computer Dialogues #1 and #2 as "flights
of fancy" to imagine some of the problems that might arise when
self-programming computers begin to interact with each other. I gave
the computers some anthropomorphic emotions, thinly disguised as
diagnostic messages. My goal was to bridge the gulf between those who
love machines and those who dread them. [...]
For those who are interested in the deeper philosophical issues of the
soul, may I recommend the two short stories by Terrel Miedaner in
The Mind's I. One is the touching story of a chimpanzee with an
enquiring mind, entitled The Soul of Martha, a Beast. The other is
about a mechanical mouse with a survival instinct, entitled The Soul
of the Mark III Beast.
Regards,
Barry
------------------------------
End of AIList Digest
********************
∂08-Apr-86 0207 LAWS@SRI-AI.ARPA AIList Digest V4 #67
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Apr 86 02:07:29 PST
Date: Mon 31 Mar 1986 08:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #67
To: AIList@SRI-AI
AIList Digest Monday, 31 Mar 1986 Volume 4 : Issue 67
Today's Topics:
Applications - Machine Translation & Automated Documentation,
Book - Machine Learning: A Guide To Current Research,
Journals - Aviation Week Technical Survey &
Dr. Dobbs Journal AI Issue & AI in Engineering,
Theory - P = NP ?,
Linguistics - Ambiguity,
AI Tools - FORTRAN
----------------------------------------------------------------------
Date: Fri, 21 Mar 86 17:37 EST
From: Steve Dourson - Delco <dourson%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Machine Translation of Documents
I am quoting the following article from the Dayton SIGART newsletter
dated March 13, 1986:
COMMERCIAL MACHINE TRANSLATION
Business Week (# 2912, 9/16/85, pp. 90D ff.) reports in an article by
Joyce Heard with Leslie Helm that several companies are active in
developing machines to produce commercial translations of documents.
This article describes translation systems that are currently
available for translating English, German, French, Spanish, Italian
and Japanese. Speeds of up to 100,000 words per hour are claimed, as
are accuracies of up to 90% and prices as low as $3000. (Not all the
same system of course). Customers are apparently willing to accept
rough translations as long as they can get them quickly; translators,
however, are not happy just polishing machine translations. Most of
the companies offering multilingual services are converting text to a
"neutral" language, then into the target language -- this greatly
reduces the cost of additional source or target languages.
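The saving from a "neutral" intermediate language is easy to sketch with
back-of-the-envelope arithmetic (an illustration, not figures from the
article): direct translation among n languages takes one engine per ordered
pair, while a pivot needs only one engine into and one out of the neutral
language per language.

```python
# Illustrative arithmetic (not from the Business Week article): number of
# translation engines needed among n languages, with and without a neutral
# intermediate ("pivot") language.

def direct_pairs(n):
    # one engine per ordered (source, target) pair
    return n * (n - 1)

def pivot_pairs(n):
    # each language needs one engine into the pivot and one out of it
    return 2 * n

for n in (3, 6, 10):
    print(n, direct_pairs(n), pivot_pairs(n))
# at n = 6 (the six languages named above): 30 direct engines vs 12
```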
-----
I haven't seen the original article. It may be worth investigating
whether any of these machines could deliver a usable rough translation.
Perhaps the collection of papers could be machine-translated and
surveyed. Selected papers would be professionally translated.
Stephen Dourson
dourson%gmr.csnet@CSNET-RELAY.ARPA (arpa)
dourson@gmr (csnet)
------------------------------
Date: Wed, 26 Mar 86 14:27:00 pst
From: George Cross <cross%wsu.csnet@CSNET-RELAY.ARPA>
Subject: Re: towards better documentation
>>I am interested in creating an expert system to serve as
>>on-line documentation.
You probably want to look at Nathaniel Borenstein's dissertation
The Design and Evaluation of On-Line Help Systems
CMU, 1985 Available as Technical Report CMU-CS-85-151
In addition to a description of Borenstein's system, this has a large
bibliography and discussion of existing systems.
---- George
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
George R. Cross cross@wsu.CSNET
Computer Science Department cross%wsu@csnet-relay.ARPA
Washington State University faccross@wsuvm1.BITNET
Pullman, WA 99164-1210 Phone: 509-335-6319 or 509-335-6636
------------------------------
Date: 27 Mar 86 09:12 EST
From: WAnderson.wbst@Xerox.COM
Subject: towards better documentation
An excellent recent article entitled "Interactive Documentation" by P.J.
Brown (Computing Lab, The University, Canterbury, Kent) appears in the
March 1986 issue of Software -- Practice and Experience. The full
reference is:
Brown, P.J., Interactive Documentation, Software -- Practice and
Experience, Vol 16(3), March 1986, pp. 291-299.
He addresses many issues relating to the display of documentation, and
describes a tool that "allows readers of computer-based documents to
peruse these documents at any desired level of detail" (from the
Abstract). Especially interesting is his distinction between
"replace-buttons" and "glossary-buttons."
Bill Anderson
------------------------------
Date: 27 Mar 86 10:36:08 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: towards better documentation
Date: 18 Mar 86 14:17:00 GMT
From: pur-ee!uiucdcs!convex!graham@ucbvax.berkeley.edu
Subject: towards better documentation
I am interested in creating an expert system to serve as on-line
documentation. The intent is to abrogate the above law and
corollaries. Does anyone know of such a system or any effort(s) to
produce one? [...]
Frankly, it sounds like a black hole to me. Building an expert system
to do something that people don't know how to do very well is
generally a bad idea. The ubiquity of crummy documentation is prima
facie evidence that creating *good* documentation isn't yet a widely
understood art.
Nevertheless I'll toss some ideas out, first trying to figure out what
the functionality of this system is supposed to be.
Maybe you're talking about automatically generating documentation from
existing source code. You might start with Rich & Waters' stuff on
the programmers apprentice, also Bob Balzer's stuff at USC-ISI. See
IEEE Transactions on Software Engineering, November 1985, as one place
to start looking for lots of other related references. The problem
here is extracting the user's-eye view from an implementation. I would
think it would be easier to extract it from the original specification
(assuming it exists).
Another thing you might mean is building a ``user's assistant'' for a
complicated program. Along these lines I can suggest Mike
Genesereth's work (circa 1977, MIT) on the ``MACSYMA advisor,'' a
design with some interesting ideas. Also I seem to recall that people
have done expert systems for advising users on how to use a
complicated set of models embedded in a package of dozens of FORTRAN
subroutines. E.g. statistical, econometric, ecological models. I
believe there was a paper in ECAI-84 on one of these, maybe Bundy was
an author (big help, eh?)? Also I believe there is an ongoing project
at Berkeley on a user's assistant for Unix. The idea is to be able to
ask it things like ``how do I get these 90 files copied from one
machine to another'' and it makes a plan and guides you through the
steps, modifying the plan as it goes to deal with contingencies (e.g.
the machine you want to copy to turns out to have no network
connection so you have to make a tape). (I wonder what the program
says if you ask it ``how can I get back the files I accidentally
deleted?'' :-) To be useful the system needs to be able to generate
plans of action from its own knowledge of the program. Makes a good
forcing function on its knowledge.
Finally, maybe you just mean building an expert system that knows a
lot about a particular program, and presents hunks of canned text on
various topics. However, I don't see how such a thing could possibly
boil down to anything other than an index. Somebody still has to
figure out what to put in the index. To make it ``smart'' you need to
think about how to build that index automatically, or have it defined
implicitly by having the program search the hunks of canned text for
strings that match things the user is asking about. But then you have
the natural language problem on your hands again due to synonyms, verb
vs noun forms, etc. Ick. And the problem with this last approach of
course is that it doesn't abrogate Graham's Law: the canned text is,
after all, canned. The system is not an expert on the program, it's
an expert on the manual! The only way to abrogate the law is to have
the system look at the source code of the program... and then you're
back in the black hole again.
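The "search the canned text" scheme dismissed above can be sketched in a few
lines of Python (a hypothetical illustration, not any actual help system);
it also exhibits exactly the synonym problem mentioned:

```python
# Minimal sketch of "canned text plus string search" (invented topics and
# text, for illustration only). Note the synonym failure: a query using
# "erase" finds nothing, even though a relevant topic exists.

CANNED = {
    "copying files": "Use the file-transfer utility to copy files ...",
    "removing files": "Files are removed with the delete command ...",
}

def lookup(query):
    """Return topics whose title or text shares a word with the query."""
    words = set(query.lower().split())
    hits = []
    for topic, text in CANNED.items():
        if words & set((topic + " " + text).lower().split()):
            hits.append(topic)
    return hits

print(lookup("copy a file"))     # ['copying files']
print(lookup("erase a file"))    # []: the synonym "erase" matches nothing
```

Making this "smart" means either building the index by hand (back to the
manual-writing problem) or tackling natural language, as the message says.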
Well, good luck. I hope these ramblings may lead to something helpful
(and provoke errata from more knowledgeable readers).
-w
------------------------------
Date: 24 Mar 86 14:34:36 EST
From: GABINELLI@RED.RUTGERS.EDU
Subject: Machine Learning: A Guide To Current Research
[Forwarded from the Rutgers bboard by Laws@SRI-AI.]
MACHINE LEARNING: A Guide To Current Research (a collection of 77
papers--most of which were contributed by participants at the last ML
Workshop held in June, 1985) is being offered by the publisher,
Kluwer Academic Publishers, at a special pre-publication rate of
$27.95 (shipping included). This is a discount of 30% off the regular
price. [...]
Jo Ann Gabinelli
------------------------------
Date: Wed 26 Mar 86 09:08:28-PST
From: Oscar Firschein <FIRSCHEIN@SRI-IU.ARPA>
Subject: Aviation Week Technical Survey
AILIST readers might be interested in the following:
Aviation Week and Space Technology, Feb. 17, 1986 has a technical
survey of artificial intelligence, mostly applied to military
applications. Included are the DARPA-supported programs in Pilot's
Associate and the Autonomous Land Vehicle (ALV) and the VLSI Lisp
machine being built by Texas Instruments.
Company profiles include McDonnell Aircraft's work in the Pilot's
Associate and avionics maintenance expert system; Boeing's AI Center;
MITRE's work in natural language understanding; Grumman's decision
support systems; Hughes AI center; and Westinghouse avionics
troubleshooting expert system.
------------------------------
Date: Fri 28 Mar 86 13:15:10-CST
From: Werner Uhrig <CMP.WERNER@R20.UTEXAS.EDU>
Subject: pointer: Dr. Dobbs Journal (April 86) The Annual AI Issue
TABLE OF CONTENTS
Programming in LISP and PROLOG
24 AI: BRIE - The Boca Raton Inference Engine
by Robert Jay Brown III
An exploration of artificial intelligence techniques, using LISP,
PROLOG, and Expert-2.
An Expert at Life
42 AI: A Cellular Automaton in Expert-2
by Jack Park
Jack visited our pages two years ago with an expert system for
predicting the weather. This little game could teach even more
about AI tools.
46 AI: Modeling a System in PROLOG
by Sheldon D Softky
PROLOG may be the language of choice for some very practical tasks,
says the author.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Journal - AI in Engineering
The International Journal for Artificial Intelligence in Engineering is
a new quarterly available from Computational Mechanics Publications,
by subscription only, price $130. In the USA, Canada and Mexico, please
apply to Computational Mechanics Inc., Suite 6200, 400 West Cummings
Park, Woburn, MA 01801, USA; all others to Ashurst Lodge, Ashurst,
Southampton SO4 2AA, England.
------------------------------
Date: 29 March 1986 2129-EST
From: Andreas Nowatzyk@A.CS.CMU.EDU
Subject: P=NP Is this for real?
[Forwarded from the CMU bboard by Laws@SRI-AI.]
Article 355 of net.research:
From: ghgonnet@watdaisy
Title: P = NP by E.R. Swart, Department of Computing and Information
Science, University of Guelph, Research Report CIS86-02, February 1986.
Abstract:
A mathematical programming formulation of the Hamilton circuit problem
involving zero/one restrictions and triply subscripted variables is
presented; by relaxing the zero/one restrictions and adding further
linear constraints together with additional variables, with up to as
many as 8 subscripts, this formulation is converted into a linear
programming formulation. In the light of the results of Khachiyan
and Karmarkar concerning the existence of polynomial-time algorithms
for linear programming, this establishes that the Hamilton
circuit problem can be solved in polynomial time. Since the Hamilton
circuit problem belongs to the set of NP-complete problems, it follows
that P = NP.
------------------------------
Date: Fri, 21 Mar 86 12:01:34 pst
From: Allen VanGelder <avg@diablo>
Subject: P=NP(?) still open
[Forwarded from the SRI bboard by Laws@SRI-AI.]
[...]
> From: lawler@ernie.berkeley.edu (Eugene Lawler)
> Subject: Swart's paper
> Not surprisingly, it seems to be fatally flawed. Bob Solovay started
> reading it carefully, found gaps in proofs, wrote Swart about them.
> The P=NP question is still with us, I believe. --Gene Lawler
------------------------------
Date: Wed, 26 Mar 86 14:28:58 EST
From: Bruce Nevin <bnevin@bbncch.ARPA>
Subject: ambiguity
Then there is this from the _Electric_Kool-Aid_Acid_Test . . .
on a tree at the foot of the driveway from the commune to the
main road was this sign:
No Left Turn Unstoned
A triple (at least) pun in four words!
Bruce
------------------------------
Date: Mon, 24 Mar 86 23:46:05 EST
From: "Keith F. Lynch" <KFL@AI.AI.MIT.EDU>
Subject: AI languages
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Some AI packages soon could have interfaces to numerical code,
particularly those in process control; expert systems will make
decisions about a fault, then a simulation, written in FORTRAN,
will be run to see if the fix will work.
Why should the numerical routines be written in FORTRAN rather than
Lisp? Is this just for dusty decks, or is it proposed that new
FORTRAN code be written for this?
...Keith
------------------------------
Date: Wed, 26 Mar 86 13:24:52 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Re: AI Languages.
> Why should the numerical routines be written in FORTRAN rather than
> Lisp? Is this just for dusty decks, or is it proposed that new
> FORTRAN code be written for this? /Keith Lynch <KFL@ai.ai.mit.edu>
I agree that LISP code can be faster than FORTRAN. Certainly MACLISP
produces fast numerical code. But most of the software effort for
numerical simulations goes into FORTRAN, be it 66, 77 or 8X!
So it ain't just those dusty decks of cards.
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
End of AIList Digest
********************
∂08-Apr-86 0410 LAWS@SRI-AI.ARPA AIList Digest V4 #68
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Apr 86 04:10:40 PST
Date: Tue 8 Apr 1986 00:07-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #68
To: AIList@SRI-AI
AIList Digest Tuesday, 8 Apr 1986 Volume 4 : Issue 68
Today's Topics:
Seminars - Tek Tools and Technology (Ames) &
Machine Learning, Clustering and Polymorphy (Rutgers) &
Feedback During Skill Acquisition (CMU) &
Growing Min-Max Game Trees (MIT) &
State, Models, and Qualitative Reasoning (MIT) &
Functional Computations in Logic Programs (UPenn)
----------------------------------------------------------------------
Date: Tue, 1 Apr 86 08:32:09 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - Tek Tools and Technology (Ames)
From: MER::ANDREWS
National Aeronautics and Space Administration
Ames Research Center
AMES AI FORUM
SEMINAR ANNOUNCEMENT
Tektronix AI Tools and Technology
Tektronix Representatives:
Steve Levine - AI Specialist
Brad Martinson - Systems Analyst
Tamarah Day - Sales Engineer
Tuesday, April 8, 1986 10:30 - 11:30 am
B 239 rm B39 (Life Sciences Basement Auditorium)
NASA Ames Research Center
Agenda:
10:30 - 11:00 Slide presentation - AI history overview
Question & answer period
11:00 - 11:30   Product demonstrations on the Tektronix 4404 and 4406
   or 12:00     Artificial Intelligence Workstations. Demonstrations
                will include a Preliminary Expert Ground Analysis
                Scheduler developed by Harris Corporation for Kennedy
                Space Center to assist in the scheduling of ground
                processing activities. Also presented will be an
                electronic circuit board diagnostic expert system and
                applications of software prototyping and user
                interfacing.
point of contact: Alison Andrews (415)694-6741
mer.andrews@ames-vmsb.ARPA
N.B. For those of you who cannot make it to this Ames AI Forum, Tektronix
is having a similar presentation and demo on April 3, with the
following agenda:
8:30-9:00 Coffee and doughnuts
9:00-10:30 Presentations (AI Overview, AI at TEK Labs, Managing
the Knowledge Engineering Process)
10:30-11:15 Demonstrations
11:15-11:30 Summary
11:30-12:00 Questions and Answers
12:30-4:00 Afternoon Schedule
R.S.V.P. Mary Clement (408)496-0800
Tektronix is located at 3003 Bunker Hill Lane (just off Great
America Parkway, near cross street Betsy Ross), Santa Clara.
Attendees of the April 3 demo will not be shown the Kennedy Space
Center expert system, so do try to make it to the Ames AI Forum,
despite the lack of doughnuts!
------------------------------
Date: 3 Apr 86 16:56:24 EST
From: PRASAD@RED.RUTGERS.EDU
Subject: Seminar - Machine Learning, Clustering and Polymorphy (Rutgers)
MACHINE LEARNING COLLOQUIUM
Machine Learning, Clustering and Polymorphy
Stephen Jose Hanson
and
Malcolm Bauer
Bell Communications Research
and
Princeton University Cognitive Science Laboratory
April 8, Tuesday
#423, Hill Center
I will describe a conceptual clustering program (WITT) that
attempts to model human categorization. Experiments will
also be described in which the output of WITT and other
conceptual clustering programs is compared to the
performance of human subjects using the same stimuli.
Properties of categories to which human subjects are
sensitive include best or prototypical members, relative
contrasts between putative categories, and polymorphy
(features that are neither necessary nor sufficient).
Polymorphy (m out of N, m < N) represents a weakening of
conjunctive predicates which still seems to be of an order
that is learnable by humans. Wittgenstein refers to
polymorphy as a basis for a category theory in which
category "criteria" determine the nature of the membership
rule.
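A polymorphous (m-out-of-N) membership rule is simple to state concretely;
the following Python sketch uses an invented feature set and threshold, not
WITT's representation:

```python
# Sketch of a polymorphous (m-out-of-N) category rule. The features and
# threshold are invented for illustration; for m < N no single feature is
# necessary and no single feature is sufficient.

FEATURES = {"wings", "feathers", "beak", "lays_eggs"}   # N = 4
M = 3                                                    # need any 3 of 4

def is_member(obj_features):
    return len(FEATURES & set(obj_features)) >= M

print(is_member({"wings", "feathers", "beak"}))   # True: 3 of 4 suffice
print(is_member({"wings", "lays_eggs"}))          # False: only 2 of 4
```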
This approach represents an alternative to usual
Artificial Intelligence approaches to generalization,
conceptual clustering and semantic analysis which tend to
focus on common feature rules, impoverished category
structure, and simple search and match schemes. WITT uses
feature inter-correlations, category structure (prototypes,
basic levels, etc..) and a conservative search strategy in
order to construct a set of categories given objects defined
on a multi-valued feature list. Information retrieval was
used for a test domain for WITT in order to discover
reasonable categories from the psychological abstracts,
which were subsequently compared to psychologists from
Princeton psychology department sorting the same abstracts.
Another test domain involved constructing meta-level
categories for nations of the world, where semantic features
were extracted from a machine readable version of the 1985
World Almanac. WITT discovered concepts like "third world
countries" and "european countries" and "technologically
advanced countries".
** If you wish to host the speakers or meet with them, please send
a message to PRASAD@RUTGERS.ARPA
------------------------------
Date: 4 April 1986 1433-EST
From: Cathy Hill@A.CS.CMU.EDU
Subject: Seminar - Feedback During Skill Acquisition (CMU)
Impact of Feedback Content during Initial
Skill Acquisition
Jean McKendree
Wednesday, April 9 12:00-1:30 pm
****** BH 340A ******
Most theories of learning and skill acquisition acknowledge the
importance of feedback, particularly after errors. However, none of
them are explicit about the content of this information. I will
present hypotheses about the efficacy of different sorts of feedback
content and relate them briefly to current information processing
theories. I will then present the results from experiments
which vary information content after errors and which begin to look at
differences in experience level. The proposed experiment will use
verbal protocols as well as quantitative data to better
understand the usefulness of different sorts of information for
error correction. A simulation model will attempt to compare the
impact of these different types of information assuming an identical
starting point.
------------------------------
Date: Mon, 7 Apr 1986 18:12 EST
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Growing Min-Max Game Trees (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
Thursday, April 10, 4:00pm Room: NE43 8th floor Playroom
The Artificial Intelligence Lab
Revolving Seminar Series
A New Procedure for Growing Min-Max Game Trees
David McAllester
AI Lab, MIT
In games such as chess decisions must be based on incomplete search
trees. A new tree-growth procedure is presented which is based on
"conspiracy numbers" as a formal measure of the accuracy of the root
minimax value of an incomplete tree. Trees can be grown with the goal
of maximizing the accuracy of the root value. Trees grown in this way
are often deeper and narrower than alpha-beta optimal trees with the
same number of nodes. On the other hand, if all nodes have the same
static value then the new procedure reduces to d-ply search with
alpha-beta pruning. Unlike B* search, non-uniform growth is achieved
without any modification of the static board evaluator.
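One plausible reading of "conspiracy numbers" can be sketched as follows (my
own minimal illustration based on the abstract, not McAllester's procedure):
the conspiracy number for raising the root to at least v is the least number
of leaves whose values would have to change to make that happen.

```python
# Sketch of one conspiracy-number computation (illustrative, not the talk's
# algorithm). A max node reaches v if ANY child does (take the cheapest);
# a min node reaches v only if EVERY child does (sum the costs).

def cn_ge(node, v, is_max=True):
    """Min number of leaf-value changes needed so minimax(node) >= v."""
    if isinstance(node, int):                 # leaf
        return 0 if node >= v else 1
    costs = [cn_ge(c, v, not is_max) for c in node]
    if is_max:
        return min(costs)    # raising one child raises a max node
    return sum(costs)        # every child of a min node must reach v

tree = [[3, 5], [4, 2]]      # max over two min nodes; minimax value is 3
print(cn_ge(tree, 4))        # 1: a single conspirator can lift the root
print(cn_ge(tree, 6))        # 2
```

Growth can then favor leaves that keep these numbers small, which is one way
deep, narrow trees arise.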
------------------------------
Date: Mon, 31 Mar 1986 17:21 EST
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - State, Models, and Qualitative Reasoning (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
The Artificial Intelligence Lab
Revolving Seminar Series
State, Models, and Qualitative Reasoning
Jerry Roylance
AI Lab, MIT
Qualitative reasoning, modeling, and representations of state are
important issues in AI. Machines need interesting models of their
task and methods that enable them to reason with those models.
Without models machines can offer little help in relieving the
programmer's or system builder's workload.
A conventional program is a literal description of what to do. By
investing the program with a model of what it is doing and some methods,
we can use code that is both simpler and more believable. Numerical
subroutines, for example, have several unifying ideas about search,
approximation, and transformation. Using these ideas directly (rather
than the results of the ideas) eliminates a lot of ugly code.
While qualitative reasoners gain their power in the simplicity of their
algebra, they pay a price in resolving the ambiguity that that
simplicity produces. We look at the simplifications that qualitative
reasoners do in light of the mathematical properties of the original
equations, the choice of distinguished values, and traditional
simulation methods.
Modeling a world is a difficult problem. State is a part of modeling
that is not described very well; the best descriptions that we have are
Moore machine descriptions, in which the current state and the inputs
give us the next state. Better, goal-oriented descriptions that do more
than just simulation are needed.
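The Moore-machine description of state referred to above amounts to a
next-state table plus a state-indexed output (a generic sketch; the
two-state machine below is invented for illustration):

```python
# Generic Moore machine sketch (not from the talk). In a Moore machine the
# output depends only on the current state; the next state depends on the
# current state and the input.

NEXT = {                       # hypothetical two-state toggle
    ("off", "press"): "on",
    ("on",  "press"): "off",
}
OUTPUT = {"off": 0, "on": 1}

def run(state, inputs):
    outputs = [OUTPUT[state]]          # emit the output of each state visited
    for i in inputs:
        state = NEXT[(state, i)]
        outputs.append(OUTPUT[state])
    return state, outputs

print(run("off", ["press", "press", "press"]))  # ('on', [0, 1, 0, 1])
```

The table is pure simulation; nothing in it says *why* a state should be
reached, which is the gap the talk points at.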
Thursday, April 3 4:00pm Room: NE43 8th floor Playroom
Refreshments at 3:30pm
------------------------------
Date: Mon, 31 Mar 86 11:09 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Functional Computations in Logic Programs (UPenn)
Forwarded From: Glenda Kent <Glenda@UPenn> on Mon 31 Mar 1986 at 10:42
FUNCTIONAL COMPUTATIONS IN LOGIC PROGRAMS
Saumya K. Debray
SUNY at Stony Brook
Tuesday, April 1, 1986
Room 216 - Moore School
3:00 - 4:30 p.m.
While the ability to simulate nondeterminism and return multiple outputs for a
single input is a powerful and attractive feature of logic programming
languages, it is expensive in both time and space. This overhead is especially
undesirable because programs are very often functional, i.e. do not return more
than one output for any given input, and so do not use this feature of these
languages. This talk describes how programs may be analyzed statically to
determine which literals and predicates are functional, and how the program may
then be optimized using this information. Our notion of "functionality"
subsumes the notion of "determinacy" that has been considered by various
researchers. The algorithm we describe is less reliant on features such as
cut, and thus extends more easily to parallel evaluation strategies, than
others that have been proposed.
------------------------------
End of AIList Digest
********************
∂08-Apr-86 0713 LAWS@SRI-AI.ARPA AIList Digest V4 #69
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 Apr 86 07:13:22 PST
Date: Tue 8 Apr 1986 00:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #69
To: AIList@SRI-AI
AIList Digest Tuesday, 8 Apr 1986 Volume 4 : Issue 69
Today's Topics:
Seminars - Metaplanning: Controlling Planning in a Complex Domain (CMU) &
Rule-Based Systems and Heuristic Classification (SU) &
The MACE System (USC) &
Expert Systems for System Management (MIT) &
Temporal Theorem Proving (SRI) &
Network Propagation for Reasoning about Uncertainty (CMU) &
Optical Artificial Intelligence Research in ECE (CMU) &
Pragmatic Modeling: Robust NL Interface (MIT)
----------------------------------------------------------------------
Date: 1 April 1986 0108-EST
From: Paul Birkel@A.CS.CMU.EDU
Subject: Seminar - Metaplanning: Controlling Planning in a Complex Domain (CMU)
Metaplanning:
Controlling Planning in a Complex Domain
Dissertation Proposal
Friday, April 4th
1:00-2:30 PM
Wean Hall 5409
All planners metaplan; few do so explicitly. Many planners find very
simple control mechanisms sufficient; the added overhead of a metaplanner
outweighs any apparent advantages. Whether implementing an explicit
metaplanner increases the capabilities of the resulting system is unknown.
Complex domains, such as therapy planning, include problems which would
be best handled by a metaplanner identifying and choosing alternative
planning strategies separate from the process of plan generation. These
problems include: unresolvably conflicting goals, conflicting measures of
goal satisfaction, unreliable operators, and incompletely specified initial
states. Previous therapeutic (@b<MYCIN>, @b<ONCOCIN>) and non-therapeutic
(@b<NOAH>, @b<SIPE>) planners alike are incapable of explicitly reasoning
about, and solving, combinations of these types of problems. A hierarchical
therapeutic planner will be implemented based on a @b(MOLGEN/SPEX) hybrid
architecture incorporating both tactical planning and strategic metaplanning
components. Four additional planning techniques are proposed which will be
developed and integrated into the architecture. The metaplanner will
subsequently be extended to achieve acceptable clinical performance on two
dozen clinical cases covering all combinations of these problems. The
performance of the system with and without the planning extensions, and
with and without the metaplanner will be analyzed.
A copy of the thesis proposal is available in the
CS lounge, 4th floor, Wean Hall. Please contact me
for additional copies of the proposal (it's long!).
birkel@a or x3074
------------------------------
Date: Mon 31 Mar 86 18:31:41-PST
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Rule-Based Systems and Heuristic Classification (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Rule-based Systems; Application to Heuristic Classification
Speaker: William J. Clancey
From: Knowledge Systems Laboratory
Date: Wednesday, April 2, 1986
Time: 4:00 - 5:30
Place: Terman 556
This talk provides a broad overview of expert systems research,
using the Neomycin program as an example. We consider in particular the
rule-based knowledge representation, showing how rules can be controlled
by an inference procedure. Generalizing from this example, we consider
first the heuristic classification method of problem solving, showing how
a broad range of well-structured problems--embracing forms of diagnosis,
catalog selection, and skeletal planning--are solved in typical expert
systems. Next, we consider the kinds of problems that expert systems can be
used to solve, emphasizing the idea of a "system in the world" that is being
synthesized or analyzed. Finally, we introduce the idea of a qualitative
model, showing how different kinds of network formalisms are used in expert
systems to describe processes. The material in this talk will enable you
to relate the kinds of problems, solution methods, and representations used
in expert systems.
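The heuristic classification method mentioned above has three stages: data abstraction, heuristic match, and refinement. A deliberately toy rendition in Python (the threshold and labels are invented for illustration, not from the talk):

```python
# Toy rendition of heuristic classification: abstract the data,
# heuristically match the abstraction to a solution class, then
# refine within that class.  All thresholds/labels are invented.

def abstract(data):
    """Data abstraction: raw findings -> abstract category."""
    return "immunosuppressed" if data["wbc"] < 2500 else "normal"

# Heuristic match: abstract category -> candidate solution class.
MATCH = {"immunosuppressed": "gram-negative infection",
         "normal": "no infection"}

def refine(solution_class, data):
    """Refinement: specialize within the class (trivial here)."""
    return solution_class

patient = {"wbc": 1800}
print(refine(MATCH[abstract(patient)], patient))  # prints gram-negative infection
```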
------------------------------
Date: 2 Apr 1986 18:45-EST
From: gasser@usc-cse.usc.edu
Subject: Seminar - The MACE System (USC)
USC DPS GROUP MEETING
Wednesday, 4/9/86 3:00 - 5:00 PM
Seaver Science 319
Les Gasser will speak on the MACE system.
MACE is a testbed for building generic Distributed AI systems from
organized collections of active "intelligent" entities called
@i[Agents] which run in parallel. It comprises a language for
describing agents, a language for describing a network of processors
upon which the agents run, and a simulator for executing the agents
in parallel. This talk will describe the philosophy and design goals of
MACE, the current versions of the MACE description languages, the
MACE simulator, and briefly discuss several experimental MACE
implementations.
The MACE language is constructed in two parts: the MACE Agent Description
Language which is sufficient for expressing agents or collections of
agents at any level (including composite agents), and the MACE
Environment Description Language which describes the underlying
computation hardware and simulator parameters. Individual
agents may draw upon other existing languages.
MACE has been implemented in COMMON LISP on a TI Explorer Lisp
Machine. We have several trial systems implemented (*) or partially
implemented (-).
- An ACTORS-like recursive Fibonacci computation which
we have tested by creating up to 90 agents running in parallel. (*)
- An agent called BUILDER which interactively builds other agents
through a second agent called USER-INTERFACE, both agents running
in parallel. (*)
- An agent-based production system where each rule is an agent, and
there is no global database nor centralized inference engine. (-)
- An 8-node hypercube with MACE agents running on each node, and a
parallel broadcast facility among agents. (*)
- A distributed, multi-level blackboard built of agents. (-)
- A two-robot cooperative planner. (-)
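MACE itself is written in Common Lisp; the following Python fragment is only a toy rendition of the agents-plus-simulator idea described above. Each agent has a mailbox, and a simulator interleaves agent steps in place of real parallel hardware. The agent names and messages are invented, not MACE's.

```python
# Toy rendition (not MACE's actual design) of agents + simulator:
# mailboxes carry (sender, message) pairs; the simulator's loop
# stands in for the parallel hardware.

class Agent:
    def __init__(self, name):
        self.name = name
        self.mailbox = []   # (sender, message) pairs
        self.log = []

    def step(self, world):
        received, self.mailbox = self.mailbox, []
        for sender, msg in received:
            self.log.append(msg)
            if msg.startswith("make-agent"):   # BUILDER-like behavior
                world[sender].mailbox.append((self.name, "built " + msg[11:]))

world = {name: Agent(name) for name in ("builder", "user-interface")}
world["builder"].mailbox.append(("user-interface", "make-agent fib"))

for tick in range(2):                  # two simulated "parallel" steps
    for agent in world.values():
        agent.step(world)

print(world["user-interface"].log)     # prints ['built fib']
```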
Questions: Dr. Les Gasser, (213) 743-7794, gasser@usc-cse.usc.edu
------------------------------
Date: Wed 2 Apr 86 08:54:20-EST
From: Natalie F. Tarbet <NFT@XX.LCS.MIT.EDU>
Subject: Seminar - Expert Systems for System Management (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
Fourth in a series of seminars on Large and Complex Computer
Systems in the Commercial World
"Expert Systems for System Management and Control
or
What jobs in a large computing center can be automated?"
Keith R. Milliken
IBM, Thomas Watson Research Center
Yorktown Heights, NY
NE43-512A
Wednesday, April 2, 1986 at 3:15 p.m.
Several years ago, IBM's Thomas Watson Research Center began to
develop an expert system to assist with the operation of a large
computing complex. This expert system, called YES/MVS (Yorktown
Expert System / MVS Manager), runs in real-time and can either
give advice or automatically take actions to manage computing
resources and respond to problems in a running system. This system
is of interest because it actively helps control, in real-time,
a very complex process. YES/MVS has been used extensively in the
Yorktown Computing Center, and a second version is now being developed.
We will briefly describe YES/MVS and then focus on some of the expert
system issues that have arisen during YES/MVS development and the
approaches taken to resolve them. Two of the issues that will be
emphasized are (1) knowledge representation for process control
expert systems and (2) approaches to knowledge base organization
that reduce the difficulty involved in modifying a large knowledge base.
The latter issue is especially important in the automation of computing
system operation because there are large variations between computing
centers in operational policy.
We shall briefly describe related efforts to automatically analyze the
performance of large computing systems, to develop a special purpose
shell for computer performance expert systems and to use rule-based
techniques to control resource allocation in a large computing system.
Host: Arvind
------------------------------
Date: Wed 2 Apr 86 17:21:06-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Temporal Theorem Proving (SRI)
TEMPORAL THEOREM PROVING
Martin Abadi (MA@SAIL)
Stanford University
11:00 AM, MONDAY, April 7
SRI International, Building E, Room EJ228 (new conference room)
In spite of the wide range of applications of temporal logic,
proof techniques (especially for first-order temporal logic (FTL))
have been quite limited up to now. We have developed a proof system R
for FTL. The system R is based on nonclausal resolution; proofs are
natural and generally short. Special quantifier rules, unification
techniques, and a resolution rule are introduced. The system R is
directly useful for such tasks as verification of concurrent programs
and reasoning about hardware devices. Other uses of temporal resolution,
such as temporal-logic programming, are currently being considered.
We relate R to other proof systems for FTL and discuss completeness issues.
In particular, one variant of R is ``as complete as'' an extension of Peano
Arithmetic. We also describe resolution systems analogous to R for other modal
logics. In fact, the resolution techniques and the corresponding completeness
arguments apply to a large class of modal logics.
------------------------------
Date: 2 April 1986 1720-EST
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Network Propagation for Reasoning about Uncertainty (CMU)
Speaker: Judea Pearl, UCLA
Date: Tuesday, April 15
Time: 3:30 - 5:00
Place: 5409 Wean Hall
Title: Network propagation for reasoning about uncertainty
Abstract:
In order to meet requirements of modularity, transparency and
flexibility, the designers of 1st-generation expert systems have
abandoned traditional probability theory and ventured to devise new
formalisms for managing uncertainties. The talk will describe a
message-passing scheme in propositional networks which, using
traditional probability theory, fulfills these objectives of
expert systems technology.
I will argue that the notion of TRANSPARENCY is closely related to
reasoning with GRAPHS, namely, that an argument is perceived to be
"psychologically meaningful" if its derivational steps correspond
to mental tracings of pre-established links in some conceptual
dependency network. Accordingly, the first part of the talk will
introduce an axiomatic legitimization of representing inferential
dependencies by networks, and will compare the properties of two
such representations: Markov Networks and Bayes Networks.
The second part will introduce a calculus for performing inferences
in Bayes Networks. The impact of each new piece of evidence is viewed as a
perturbation that propagates through the network via asynchronous
local communication among neighboring concepts. We show that such a
propagation mechanism facilitates flexible control strategies and
sound explanations, that it supports both predictive and diagnostic
inferences, that it is guaranteed (in sparse graphs) to converge in
time proportional to the network's diameter, and that every
proposition is eventually accorded a measure of belief consistent
with the axioms of probability theory.
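A minimal sketch of the diagnostic direction of the local propagation described above, for a single link D -> S: observing the child S sends a likelihood ("lambda") message to the parent D, which is fused with D's prior and normalized. All numbers are invented for illustration.

```python
# Single-link sketch of Pearl-style diagnostic propagation.
# Numbers are invented, not from the talk.

prior_d = {True: 0.01, False: 0.99}      # P(D)
lam = {True: 0.9, False: 0.2}            # P(S=true | D), the lambda message

def fuse(prior, likelihood):
    """Combine prior (pi) with diagnostic message (lambda), normalize."""
    unnorm = {d: prior[d] * likelihood[d] for d in prior}
    z = sum(unnorm.values())
    return {d: v / z for d, v in unnorm.items()}

belief = fuse(prior_d, lam)
print(round(belief[True], 4))            # prints 0.0435
```

The resulting beliefs are guaranteed to sum to one, i.e. to remain consistent with the axioms of probability theory, which is the property the abstract emphasizes.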
------------------------------
Date: 3 April 1986 1023-EST
From: Richard Wallstein@A.CS.CMU.EDU
Subject: Seminar - Optical Artificial Intelligence Research in ECE (CMU)
Robotics Seminar
3:30 Friday April 11, 4623 Wean Hall
David Casasent, Director
Center for Excellence in Optical Data Processing
Department of Electrical and Computer Engineering
OPTICAL ARTIFICIAL INTELLIGENCE RESEARCH IN ECE
Research on optical feature extraction and on correlation-based,
distortion-invariant, multi-class, multi-object recognition and
identification will be reviewed. This will be followed by a discussion
of optical artificial intelligence efforts currently in progress. These
efforts include: optical relational graph and
decision net processors, optical symbolic processors, optical associative
memory processors, and optical neural net processors.
------------------------------
Date: 4 Apr 1986 09:57-EST
From: Brad Goodman <BGOODMAN at BBNG>
Subject: Seminar - Pragmatic Modeling: Robust NL Interface (MIT)
[Forwarded from the MIT bboard by SASW@MIT-MC.]
BBN Laboratories Inc.
Science Development Program
AI/Education Seminar
Speaker: Professor Sandra Carberry
University of Delaware
Title: Pragmatic Modeling: Toward a Robust Natural
Language Interface
Date: Tuesday, April 15th, 10:30 a.m.
Place: 2nd floor large conference room
BBN Labs, 10 Moulton Street, Cambridge
PRAGMATIC MODELING:
TOWARD A ROBUST NATURAL LANGUAGE INTERFACE
Naturally occurring dialogue is both imperfect and incomplete. Not
only does the information-seeker fail to communicate all aspects of his
underlying task and partially constructed plan for accomplishing it, but
also his utterances are often imperfectly or incompletely formulated. It
appears that human information-seekers expect an information-provider
to facilitate a productive exchange by assimilating the dialogue and
using this knowledge to remedy many of the information-seeker's faulty
utterances.
This talk will describe an on-going research effort aimed both at
developing techniques for inferring and constructing a user model from
an information-seeking dialogue and at identifying strategies for using
this model to develop more robust natural language interfaces. Emphasis
will be on the dynamic construction of the task-related plan
motivating the information-seeker's queries, and its application
in handling pragmatically ill-formed and incomplete utterances.
------------------------------
End of AIList Digest
********************
∂09-Apr-86 0104 LAWS@SRI-AI.ARPA AIList Digest V4 #70
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Apr 86 01:04:16 PST
Date: Tue 8 Apr 1986 21:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #70
To: AIList@SRI-AI
AIList Digest Wednesday, 9 Apr 1986 Volume 4 : Issue 70
Today's Topics:
Queries - BKG Backgammon & LISP Machines & Games,
Applications - Machine Translation,
Correction - Research Credits for Aviation Week Survey,
AI Tools - Common Lisp Systems & Borland Prolog,
Book - Machine Learning: A Guide to Current Research,
Databases - Nonmilitary AI Jobs & Reference Database on Logic,
Techniques - Rete Algorithm Survey
----------------------------------------------------------------------
Date: 27 Mar 86 16:37:00 GMT
From: pur-ee!uiucdcs!convex!ti-csl!dnichols@ucbvax.berkeley.edu
Subject: BKG request
I am interested in obtaining a copy of Hans Berliner's
famous BKG program. Does anyone know of an implementation
in LISP or for UNIX?
I would also love to have a copy of the source for studying.
Can anyone help or can anyone tell me if Mr. Berliner is
on the net and how to reach him?
Please respond to me rather than flooding this list.
*USNail* *electronic*
Dan Nichols USENET: {ctvax,im4u,texsun,rice}!ti-csl!dnichols
POB 226015 M/S 238 ARPA: Dnichols%TI-CSL@CSNet-Relay
Texas Instruments Inc. CSNET: Dnichols@Ti-CSL
Dallas, Texas VOICE: (214) 995-6090
75266 COMPUSERVE: 72067,1465
He o shite shiri-tsubome!  [a Japanese proverb about covering up after the fact]
------------------------------
Date: Wed, 2 Apr 86 15:46:09 EST
From: reiter@harvard.HARVARD.EDU (Ehud Reiter)
Subject: LISP machines
Has anyone done a price/performance comparison of LISP machines with
conventional workstations running LISP? If so, could they please send
me the results of their investigations? I will summarize to the net if
there is a lot of interest.
My interest is academic (price/performance of different computer architectures)
not practical. My initial hypothesis, based on looking over Richard Gabriel's
book PERFORMANCE AND EVALUATION OF LISP SYSTEMS and on talking to people,
is that special LISP processors offer a 2-3 fold speed advantage over a SUN 3 or
MicroVAX II class workstation, but at 2-3 fold greater cost. Microcoded
architectures like Xerox's D-machines seem to offer little performance
improvement.
Please note that I am NOT interested in software issues like how good an
environment a machine provides. This is strictly a hardware comparison.
Thanks.
Ehud Reiter
reiter@harvard.ARPA
reiter@harvunxh.BITNET
harvard!reiter.UUCP
------------------------------
Date: Tue, 8 Apr 86 08:12 ???
From: Black holes are where God is dividing by zero
<SHERZER%ti-eg.csnet@CSNET-RELAY.ARPA>
Subject: Wanted: info on game playing systems
Can anyone give me any information on game playing AI programs? I
am especially interested in systems that play games where there is
a great deal of uncertainty.
Poker (or any card game) would be a good example. This is because
a Poker player does not have complete information about the other
players' hands. The player is therefore forced to deduce the other
players' hands by observing their play.
Chess would be a bad example because there is no missing
information. All possible moves for both players are known with
total certainty.
I would also be interested in any programs that build models of
a users behavior (especially a hostile one) with the goal of
guessing future behavior.
Thanks in advance
Allen Sherzer
SHERZER@TI-EG.CSNET
------------------------------
Date: 8 Apr 86 09:49 EST
From: Gocek.henr@Xerox.COM
Subject: Re: Machine Translation of Documents
I read a similar report that said machines are translating 100,000 pages
of text per year for various applications, and in some cases reach 95
percent accuracy. The article I read, which was printed in the
Rochester Democrat & Chronicle on Sunday, 4/6/86, appeared to be
prompted by Xerox's use of machine translation. (Xerox is big in
Rochester.) The 95 percent accuracy was reached only in very
specialized applications, though. Highly technical material, where
the jargon is unambiguous, is a good application for machine
translation. The European Common Market is trying to use a machine
translation system and is not obtaining 90 percent accuracy.
Gary
Gocek.Henr@Xerox.Com
------------------------------
Date: Tue 8 Apr 86 13:19:44-PST
From: GARVEY@SRI-AI.ARPA
Subject: Re: Aviation Week Technical Survey
I think you should have given credit where credit is due: for example,
the DARPA Pilot's Associate program is also jointly supported by
Lockheed-Georgia and McDonnell Aircraft Company, since they together
are providing approximately half of the total $20 million. Likewise,
the Autonomous Land Vehicle is jointly supported by DARPA and
Martin-Marietta and the first Navy Battle-Management Program (FRESH)
is partially supported by TI.
Cheers,
Tom
------------------------------
Date: 25 Mar 86 1257 PST
From: Les Earnest <LES@SU-AI.ARPA>
Subject: Common Lisp systems
We have been reviewing Common Lisp implementations that run on Sun workstations.
The principal alternatives appear to be those made by Lucid and marketed
by Sun (415 965-780), Franz Inc. (415 769-5656) and Kyoto University, which
is marketed by Ibuki (415 949-1126). We expect to be getting some of each
of these implementations for various purposes. Ibuki's product
description is attached.
Les Earnest
********************************************************************************
KCL PRODUCT DESCRIPTION
Kyoto Common Lisp (KCL) is a full implementation of Common Lisp. It
contains all the Common Lisp functions, macros and special forms defined
in the Common Lisp Reference Manual. It has both a compiler and an
interpreter. Full sources are available for modification.
KCL was developed at the Research Institute for Mathematical Sciences,
Kyoto University, Kyoto, Japan by Masami Hagiya and Taiichi Yuasa.
It is used throughout Japan for building expert systems and conducting
research in Artificial Intelligence.
THE FEATURES OF KCL
-- KCL is complete: It supports all Common Lisp functions, macros and
special forms defined in the Common Lisp Reference Manual; COMMON LISP:
THE LANGUAGE, by Guy L. Steele et al., Digital Press, 1984.
-- A complete KCL is small: It is only 1.4 MB with interpreter and
compiler loaded. For customers with source code, this core image may be
made even smaller by separating the compiler, interpreter and run-times
and making everything inessential autoloadable.
-- KCL is efficient: Its compilation time (including both passes)
and its run times (for both compiled and interpreted code) are
comparable with those of the other Common Lisps on the market
(benchmarks appear in the KCL report).
-- The kernel of KCL is written in C and the rest in Common Lisp itself.
Thus KCL is totally embedded in the C language and provides clean
access to the underlying operating system.
-- KCL uses C and the standard C libraries as the interface to the
operating system. Using the standard I/O facilities greatly enhances
the portability of KCL.
-- The KCL compiler is a two pass compiler with a first pass from LISP
to C and a second from C to compiled code. This allows the use of
any optimizing C compiler on the machine to create efficient code which
is totally compatible with preexisting compiled C code.
-- Having a kernel written in C and compiling to C, KCL is highly
portable and independent of the machine and operating system. It
currently runs on the machines of six manufacturers and more are being
added soon.
-- All KCL versions are made from the same sources. This means that
all versions behave the same and any Common Lisp code can be cross-
compiled (by the KCL compiler) and the C code generated can be used
on any of the systems running KCL.
-- The runtime efficiency of interpreted code has been as important a
design criterion as the efficiency of compiled code. This, together
with its small size, makes KCL appropriate for teaching. Educational
discounts are available.
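None of KCL's actual source appears above; purely as a hypothetical sketch of what the first pass of a two-pass design like KCL's does, here is a toy translator from prefix (Lisp-style) arithmetic to C expression syntax, which a stock C compiler could then optimize in the second pass.

```python
# Hypothetical sketch (not KCL's code) of a "Lisp to C" first pass:
# a prefix expression becomes a C expression string, leaving the
# machine-dependent optimization to the local C compiler.

def lisp_to_c(expr):
    if isinstance(expr, (int, float)):
        return str(expr)
    op, *args = expr                     # e.g. ("+", 1, ("*", 2, 3))
    return "(" + (" %s " % op).join(lisp_to_c(a) for a in args) + ")"

print(lisp_to_c(("+", 1, ("*", 2, 3))))  # prints (1 + (2 * 3))
```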
IBUKI is dedicated to providing high quality software that is fairly priced
and allows the people using it maximal flexibility to get their problems
solved. We believe in symbolic computing and want to make it available
on a wide scale. For this reason we provide source code and simple,
inexpensive licensing arrangements.
Versions for VAXes and SUNs running UNIX 4.2 bsd are currently available
in the US and are being distributed by IBUKI. For commercial use,
distribution fees are $700 per CPU for the object code and an additional
$700 for the sources. For educational institutions the distribution fees
are $450 object and sources respectively. Quantity discounts are available.
For further information about ordering, contact
IBUKI
399 Main Street
Los Altos, CA 94022
Phone: 415 949-1126
Telex: 348369
Netmail: KCL@SU-Carmel.ARPA
------------------------------
Date: 31 Mar 86 21:33:29 GMT
From: dual!islenet!jayf@ucbvax.berkeley.edu (Jay Fields)
Subject: Borland Prolog
I just read in today's Infoworld that Borland has announced
a new Prolog for the IBM priced at $99.95. They didn't say,
"Sorry, one per customer," either.
Aloha,
J Fields
PRC, Honolulu
...ihnp4/islenet/jayf
/* The usual disclaimers go here */
------------------------------
Date: 3 April 1986 1616-EST
From: Jaime Carbonell@A.CS.CMU.EDU
Subject: Yet another ML book...
[Forwarded from the CMU bboard by Laws@SRI-AI.]
Not to be confused with "Machine Learning Vol I" and "... Vol II",
Kluwer Academic Publishers is coming out with a book titled:
"Machine Learning: A Guide to Current Research", which contains
a zillion (i.e., 77) very short papers -- rather than the far smaller
number of much more detailed papers in the two ML volumes. Thus, the
Kluwer book is very useful as a survey and guide to the symbolic
machine learning field, but not as useful for in-depth analysis
of techniques, ideas or applications. Most of the short papers
are revised versions of those presented at the 1985 Machine
Learning III workshop.
[...] There's a 30% discount on the 39.95 price and no shipping cost
(hence: 27.95) for prepaid orders received "soon" (ignore the April 1
date on the form).
------------------------------
Date: Tue, 1 Apr 1986 14:23 EST
From: HENRY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AI Jobs
A while back on this list, I mentioned a job bulletin board
sponsored by High Technology Professionals for Peace. It is
now available. The number is (617) 969-2273, and hours of
operation are after 5 PM Eastern time weekdays and all day weekends.
It lists employers recruiting for non-military jobs. Later
versions of the system will provide keyword retrieval.
------------------------------
Date: 3 Apr 86 22:22:39 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Reference Database on Logic
I currently post out a reference database on functional and logic
languages, denotational semantics and formal methods to various people.
It is never up to date, but I add more when I have the time. I post it
at the beginning of every month; if anybody is interested in receiving
a copy, please reply and I will add you to the distribution list.
Andy Cheese
Department of Computer Science
University of Nottingham
University Park
Nottingham
NG7 2RD
England
ARPA : abc@uk.ac.nott.cs
UUCP : ukc!nott.cs!abc
------------------------------
Date: Wed 2 Apr 86 10:46:43-PST
From: Matt Heffron <BEC.HEFFRON@USC-ECL.ARPA>
Subject: Rete query summary
Thanks to all who replied to my query about Rete algorithm info.
Here is a summary of the replies:
From: Dan Scales <SCALES@SUMEX-AIM.ARPA>
I'm doing a master's thesis on modifying the Rete network
implementation in OPS5 to be more efficient for an AI architecture
called SOAR built on top of OPS5. The main references for the Rete
network itself are:
Forgy, C. L., On the Efficient Implementation of Production Systems.
PhD thesis, Dept. of Computer Science, CMU, February, 1979.
Forgy, C. L. Rete: A Fast Algorithm for the Many Pattern/Many Object
Pattern Match Problem, Artificial Intelligence 19(1), September 1982,
17-37.
Also, you should try to get the OPS5 (or other OPS) source code. I
assume it is freely distributed, since we have it here at Stanford.
Unfortunately, it is not commented at all.
________________
From: Duke Briscoe <duke@mitre.ARPA>
... The person in the office next to mine has implemented the Rete
algorithm. It doesn't sound like he had too much trouble doing it.
I guess the tricky part is keeping track of variable bindings for
different invocations of a rule.
________________
From: Robert Farrell <farrell@YALE.ARPA>
Lee Brownston, Elaine Kant, Nancy Martin and I have a book called
"Programming Expert Systems in OPS5" available that describes the
algorithm in some detail. Also look at Forgy's AAAI article about
how to implement them in assembler and his thesis from CMU.
Or you can contact Forgy directly at Forgy@CMU-CS-A.
Also Liz Allen (used to be at MD) has hacked up one in the YAPS system,
so she would be of help. Please don't contact me - I'm too busy.
________________
FROM: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
In response to your query regarding Rete algorithms, here
is a reference to a conference that will be published in April.
It may prove useful to you:
D BOOK22 Applications of Artificial Intelligence\
%I Society of Photo-Optical Instrumentation Engineers\
%D 1-3 April 1986\
%N 635\
%C Orlando
%A L. Lafferty
%A D. Bridgeland
%T Scavenger: an Experimental Rete Compiler
%B BOOK22
%K AI01
________________
From: Dan Miranker <DAN@CS.COLUMBIA.EDU>
A cornerstone of my thesis, which I am just completing, is
the development of a new production system algorithm, TREAT,
and its comparison to RETE.
The preliminary results are just coming in. Even though
TREAT was motivated by the algorithmic requirements of parallel
processing it is doing better even in a sequential environment.
I have an OPS5 implementation just coming to life. It appears that
TREAT reduces the number of comparisons to do variable binding by
about 30%. (TREAT does more work on an add to wm, but eliminates all
the work RETE has to unwind when doing a delete). TREAT also doesn't
use any of the "beta memories", which can be combinatorially explosive
in size. So it does better in space as well. The absolute speed of
the two OPS5 implementations (mine and Forgy's) is currently roughly
the same, but we haven't yet made any attempt to clean up and speed up
our code.
The TREAT algorithm is also much easier to implement. Our run
time interpreter is 4 pages of LISP compared to Forgy's 12.
The TREAT algorithm was described in the 1984 International
conference on fifth generation computing, held in Tokyo.
There is a slight error in the algorithm as published. If you
think you will be implementing TREAT let me know and I'll finally
insert the correction into the tech report version and send
that to you.
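As an aside for readers new to Rete: the "beta memories" mentioned above cache joins of consistent partial matches between condition elements. A minimal illustration (the facts and patterns are invented, not from OPS5 or TREAT):

```python
# Sketch of a Rete-style "beta memory": partial matches for two
# condition elements, e.g. (parent ?x ?y) and (parent ?y ?z), are
# joined on the shared variable and the consistent pairs cached.
# With many matches per condition this cache can grow
# combinatorially, which is the space cost TREAT avoids.

alpha1 = [("tom", "bob"), ("tom", "liz"), ("bob", "ann")]  # matches (parent ?x ?y)
alpha2 = [("bob", "ann"), ("bob", "pat")]                  # matches (parent ?y ?z)

# join on ?y: second field of the first element must equal the
# first field of the second
beta = [(a, b) for a in alpha1 for b in alpha2 if a[1] == b[0]]
print(len(beta))                       # prints 2
```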
________________
From: Jim Wogulis <wogulis@ICSE.UCI.EDU>
We have a production system here that was developed by a number
of people over a long period of time. Currently, Pat Langley
has taken over maintaining/improving the system. It is written
in Franzlisp, and I have ported it to Interlisp-D.
Prism uses a Rete net to store all the partial matches from
the rules and facts. We will send it to anyone who is willing
to pay for the taping charges (I think $100 for tape or floppy
and $30 for the manual). This might help since there would
be code to look at.
________________
Matt Heffron
------------------------------
End of AIList Digest
********************
∂09-Apr-86 0328 LAWS@SRI-AI.ARPA AIList Digest V4 #71
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Apr 86 03:27:55 PST
Date: Tue 8 Apr 1986 21:51-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #71
To: AIList@SRI-AI
AIList Digest Wednesday, 9 Apr 1986 Volume 4 : Issue 71
Today's Topics:
Conferences - AAAI &
Ames Symposium on Manufacturing Systems &
Automated Reasoning Workshop 1986 &
Knowledge Engineering Forum
----------------------------------------------------------------------
Date: 4 Apr 86 13:58:40 GMT
From: decvax!linus!raybed2!gxm@ucbvax.berkeley.edu (GERARD MAYER)
Subject: Conference - AAAI
The National Conference on Artificial Intelligence (AAAI-86) will be held
Aug. 11-15, 1986, in Philadelphia, PA. Send program and registration
inquiries to: AAAI-86, AAAI, 445 Burgess Dr., Menlo Park, CA 94025. This
year there will be sessions (as in the past) and a new emphasis on
workshops. See AI Magazine, Winter 1986, for more information.
Gerard Mayer
Raytheon Research Division
uucp ..linus!raybed2!gxm
------------------------------
Date: Tue, 1 Apr 86 08:33:51 pst
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Conference - Ames Symposium on Manufacturing Systems
From: MER::ANDREWS
National Aeronautics and Space Administration
Ames Research Center
SYMPOSIUM
MODELING AND CONTROL IN FLEXIBLE MANUFACTURING SYSTEMS
Friday, April 11, 1986
The fields of ARTIFICIAL INTELLIGENCE and AUTOMATIC CONTROL have been
developing independently of one another despite many intrinsic common
interests. A series of symposia is planned to explore this common ground
and to better understand the long-range issues and fruitful directions
of basic research in AUTOMATIC CONTROL THEORY.
The present symposium is organized by Professor Giuseppe Menga, Department
of Automation and Information, Politecnico di Torino, Italy.
PROGRAM: Friday, April 11, 1986
Morning
9:30 - 10:00 Yu-Chi Ho, Harvard University
Opening Address - Modern System Theory in Manufacturing Applications
10:00 - 11:00 Giuseppe Menga, Politecnico di Torino
Modeling Flexible Manufacturing Systems by Heuristic Network Analysis
11:00 - 12:00 Yu-Chi Ho, Harvard University
Perturbation Analysis in Discrete Event Dynamic Systems: An
Application to Manufacturing
Afternoon
1:00 - 2:00 Giuseppe Menga, Politecnico di Torino
The Planning and Control System for Flexible Manufacturing Shops
2:00 - 3:00 Agostino Villa, Politecnico di Torino
Planning and Control in Multi-Stage Multi-Product Systems
The symposium will be held in Conference Room 172 in Building 233. For
additional information, please contact anyone listed below:
Ralph Bach (415)695-5429 Rajiv Mehta x5440 George Meyer x5444
mar.bach@ames-vmsb.ARPA
***************************************************************************
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. Do not
use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
Date: Sat, 5 Apr 86 18:35:15 cst
From: stevens@anl-mcs.ARPA (Rick Lyndon Stevens)
Subject: Conference - Automated Reasoning Workshop 1986
Automated Reasoning Workshop 1986
Mathematics and Computer Science Division
Argonne National Laboratory
You are invited to a workshop on automated reasoning to be held
at Argonne National Laboratory on June 24 and 25, 1986. This
workshop, the fifth of its kind, will take the form of a set of
tutorials. Our first objective is to acquaint people with the
basic aspects of automated reasoning and with the possible
applications. Thus we shall discuss some of the previously open
questions we have solved and feature topics such as the design of
logic circuits, the validation of existing circuit designs, and
proving properties of computer programs. Our second objective is
to learn of new problems on which the current methodology might
have an impact. In fact, the preceding workshops did lead to
such discoveries, as well as to collaborative efforts to seek
solutions to these problems.

Enclosed is a tentative schedule that briefly describes the
various talks. On the first day, we shall begin with an
introductory lecture on what automated reasoning is. We shall
illustrate the various concepts first with puzzles. Next, we
shall focus on some applications of automated reasoning. We
shall include a demonstration of an automated reasoning program
(ITP) that is portable, runs on relatively inexpensive machines,
and is available to other users. On the second day we shall give
an introduction to Prolog, discuss additional applications, and
focus on state-space problems. On both days, we have scheduled
reviews of the material and open discussions.

We welcome you to this 1986 workshop on automated reasoning.
Participation will require a small charge, no more than $60.
Included in this fee will be the cost of the book Automated
Reasoning: Introduction and Applications, written by Wos,
Overbeek, Lusk, and Boyle and published by Prentice-Hall. This
book covers the field of automated reasoning from its basic
elements through various applications. Its tutorial nature will
guide our approach to the workshop. We urge you to respond to
this invitation as soon as possible; to retain the tutorial
atmosphere of the workshop, we may be forced to limit the number
of participants.
Sincerely,
L. Wos
Senior Mathematician
Please send all replies to
ARPA: wos@anl-mcs.arpa
or
Dr. Larry Wos
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne IL 60439
Schedule for Automated Reasoning Workshop 1986
June 24-25, 1986
Argonne National Laboratory
Argonne, Illinois
Tuesday, June 24
9:00 - 9:15 Preliminary remarks - Larry Wos
9:15 - 10:15 Introduction to automated reasoning - Larry Wos
10:15 - 10:30 Break
10:30 - 11:30 Solving reasoning puzzles - Brian Smith
11:30 - 12:30 Lunch
12:30 - 1:15 Choices of strategies and inference rules
- Rusty Lusk
1:15 - 1:30 Demonstration
1:30 - 1:45 Break
1:45 - 2:45 Proving properties of computer programs - Jim Boyle
2:45 - 3:00 Closing discussion - Larry Wos
Wednesday, June 25
9:00 - 9:15 Discussion - Larry Wos
9:15 - 10:15 Introduction to Prolog - Rusty Lusk
10:15 - 10:30 Break
10:30 - 11:30 State-space problems - Rusty Lusk
11:30 - 12:30 Lunch
12:30 - 1:15 Circuit design and validation - Jim Boyle
1:15 - 1:45 Open problems in mathematics and logic - Rusty Lusk
1:45 - 2:00 Break
2:00 - 2:45 Details of the solution of an open problem in logic
- Larry Wos
2:45 - 3:15 Our automated reasoning software - Rusty Lusk
3:15 - 3:30 Closing remarks - Larry Wos
------------------------------
Date: Thu, 3 Apr 86 10:45:07 est
From: Tom Scott <scott%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: Conference - Knowledge Engineering Forum
I. ANNOUNCEMENT
Announcing
KNOWLEDGE-ENGINEERING FORUM
Tuesday, May 6, 1986
University of Wisconsin-Green Bay
Christie Theatre
Announcing a conference on knowledge engineering (KE) and
applications of artificial intelligence (AI) in business and industry
in the Northeastern Wisconsin area. Featured are presentations by
practitioners in the field, demonstrations of hardware and software,
and an executive briefing/group discussion on developing applications
and building an in-house KE group in your own situation.
The fee for attending the conference is $30.00. Enrollment in
the conference is limited. For further information about attendance
and fee payment, please contact
Prof. Dennis Girard
College of Environmental Sciences
University of Wisconsin-Green Bay
Green Bay, WI 54301-7001
Phone: 414-465-2285 (office)
414-465-2371 (secretaries)
II. SCHEDULE OF EVENTS
8:30 Registration and coffee hour
9:00 "Welcome" by David Jowett, Vice-Chancellor for
Academic Affairs, UW-Green Bay
9:15 "An Overview of Knowledge Engineering: The Theory,
Practice, and Technology of Knowledge-Based Decision-
Support Systems" by Roger Pick, Assistant Professor,
Information Systems, Graduate School of Business,
UW-Madison
9:45 "Artificial Intelligence and Knowledge Engineering: A
Perspective on the Future" by Clarke Harrison,
Symbolics, Inc., Chicago, IL
10:15 Break
10:30 "Knowledge Engineering: A Practical Perspective" by
Stephen Zvolner, Senior Research Scientist, Johnson
Controls, Milwaukee, WI
11:30 Lunch and informal group discussions
1:00 Executive Briefing/Group Discussion
(1) Executive Briefing: "Knowledge Engineering
Methodology" by Gene Korienek, Johnson Controls,
Milwaukee, WI
(2) Group Discussion: "Developing an In-House
Knowledge-Engineering Group" by those attending
the conference
2:30 Break
2:45 Hardware and Software Demonstrations (to be announced)
3:45 Review and Closing
III. COMMENTS AND TENTATIVE OUTLINE OF GENE KORIENEK'S EXECUTIVE
BRIEFING ON KE METHODOLOGY
The cornerstone of the conference is the 1:00-2:30 slot, which
is dedicated to the executive briefing on KE methodology and the group
discussion on developing in-house KE groups. The executive briefing
will be presented by Gene Korienek of Johnson Controls. Gene is well
versed in the theory, practice, and technology of knowledge
engineering and will integrate his presentation on KE methodology with
the group discussion on developing in-house KE groups.
The key to the integration of the topics of the executive
briefing and the group discussion is to view KE methodology on an
object level and the developing of in-house KE groups on a meta level.
Executives and managers are concerned with the design, building, and
maintaining of KE groups, which in turn are concerned with the design,
building, and maintaining of KE systems: executives and managers build
groups that build KE systems. In order to build a KE group, one must
have at least a general idea how to build a KE system. The two topics
are intimately related and are best considered in one breath.
Gene plans to complete his presentation in the first hour
(1:00-2:00). During that time he will solicit questions and comments
and will generally encourage group participation. The last half hour
(2:00-2:30) will be given over to the dynamics of group discussion.
Gene's presentation on KE methodology will include some of the
following points of interest:
(1) KE methodology in general: What is the methodology for
the engineering of knowledge? Does KE methodology differ from
previous methodologies for the design, building, and maintenance of
MIS and EDP applications? If there is a difference, what is it? Does
the process of iterative development and testing occur more in
knowledge engineering than in MIS/EDP? What role does Prolog play in
the prototyping of KE systems?
(2) The recruiting, training, and maintaining of personnel to
staff an in-house KE group: How can local talent be developed? Do KE
personnel have to be trained and imported from Silicon Valley and
Boston, or can they be trained locally? Once the personnel are
recruited and trained, how can they be maintained? How does a
corporation in Northeastern Wisconsin keep the interest and education
level of its in-house KE group alive? What is to prevent members of a
KE group from leaving the local area for greener pastures on the East
and West Coasts?
(3) The acquisition and development of hardware and software
environments to be used by an in-house KE group in the development of
KE systems: Why have DEC-compatible systems and the VAX computer
family been so popular in the AI/KE community? How can the Unix
development environment for KE systems be integrated with the IBM
environment that many corporations have installed for MIS/EDP
applications?
How do the GNU Project and the emergence of freeware as a
viable economic force affect a corporation's strategic KE
plan?
What is GNU Emacs? It is said that Emacs is more than an
editor: Emacs is an entire development environment which fits
naturally and effortlessly into a Unix development
environment. Why is this the case?
Should a business or industrial corporation that plans to
develop an in-house KE group follow the traditional academic
AI/KE path of DEC, VAX, Unix, and GNU Emacs, or should the
corporation instead follow the commercial path laid out by
IBM? What are the theoretical, practical, and technological
considerations for comparing, contrasting, and integrating the
DEC/VAX/Unix/Emacs environment with the IBM environment?
(4) The human process of actually building KE systems: What
are the group dynamics involved in the process of building KE systems?
How do this process and the group dynamics of in-house KE groups
differ from what takes place under the MIS/EDP paradigm?
IV. EXECUTIVE SUMMARY AND OUTLINE OF POSSIBLE TOPICS FOR THE GROUP
DISCUSSION ON DEVELOPING IN-HOUSE KNOWLEDGE-ENGINEERING GROUPS
The development of an in-house knowledge-engineering group is
a deliberate and gradual process that unfolds within a corporation's
long-range strategic plan. This process requires a commitment on both
the corporate and community levels in order to train, recruit, and
maintain the human resources and to acquire and develop the
knowledge-engineering environment. There are three areas to consider
in the development of in-house KE groups: A. Individual Corporate
Action; B. Community Action; C. A Vision of Knowledge Engineering.
A. Individual Corporate Action
(1) Cooperation with other businesses in the training and
maintaining of local personnel
(2) A team to fulfill the five basic functions of each KE
project:
(a) Project leader
(b) Domain expert--hence the name "expert system"
(c) Conceptualist: Plan, design, and document
(d) Encoder: Implement and test
(e) Systems programmer: Unix and IBM systems
B. Community Action
(1) Formation of an ACM-SIGART chapter
(2) Teaching of AI languages (Lisp, Prolog), production
systems (ITP, OPS5, OPS83), and KE courses in area high
schools, technical colleges, and at the university (both
undergraduate and graduate levels)
(3) Establishment of a regional AI/KE training center for
Northeastern Wisconsin at the university level
C. A Vision of Knowledge Engineering
(1) The Knowledge Age: theory, practice, and technology
(a) The practical focus of KE on decision-support
systems (DSS) and information-retrieval systems
(IRS) differentiates KE from AI.
(b) Such articles as "Why Computers May Never Think
Like People" (Hubert and Stuart Dreyfus,
"Technology Review", January, 1986) are of
immediate benefit to KE and of questionable value
to AI.
(2) Theory, practice, and technology: A modern structure in
America and Japan inherited from ancient Greece (theoria,
praxis, and techne)
(a) Forthcoming Prentice-Hall manuscript, "A Vision of
Knowledge Engineering" by Tom Scott (Autumn 1987)
(b) Japanese R&D projects in AI/KE: Fifth Generation
Computing System (FGCS) and Sixth Generation
Computing System (SGCS)
(c) MCC: America's cooperative challenge to Japanese
FGCS and SGCS
V. FINAL COMMENTS
Since the detailed format and content of the conference are
still being arranged, the schedule of events and comments in the above
four sections (I-IV) are subject to change. For information on the
final schedule and attendance at the conference, please contact Prof.
Dennis Girard at the phone number or address listed in section I.
* * *
Tom Scott CSNET: scott@bgsu
Dept. of Math. & Stat. ARPANET: scott%bgsu@csnet-relay
Bowling Green State Univ. UUCP: cbosgd!osu-eddie!bgsuvax!scott
Bowling Green OH 43403-0221 ATT: 419-372-2636 (work)
------------------------------
End of AIList Digest
********************
∂09-Apr-86 0550 LAWS@SRI-AI.ARPA AIList Digest V4 #72
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Apr 86 05:49:51 PST
Date: Tue 8 Apr 1986 22:08-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #72
To: AIList@SRI-AI
AIList Digest Wednesday, 9 Apr 1986 Volume 4 : Issue 72
Today's Topics:
Psychology - Computer Emotions
----------------------------------------------------------------------
Date: 29 Mar 86 19:27:23 GMT
From: hplabs!hao!seismo!harvard!bu-cs!bzs@ucbvax.berkeley.edu (Barry Shein)
Subject: Re: Computer Dialogue
Re: should computers display emotions
I guess a question I would be more comfortable with is "would people
be happier if computers mimicked emotions". Ok, from experience we
see that people don't love seeing messages like "Segmentation Violation --
Core Dumped" (although some of us for different reasons.)
Would they be 'happier' if it said 'ouch'? Well, probably not, but the
question probably comes down to more of a human-engineering machine
interface issue.
We certainly got somewhat ridiculous at one extreme (we being systems
people not unlike myself, maybe not you) with things like:
IEF007001 PSW=001049FC 0E100302
pretending to be error messages, let's face it, that's no less artificial
(and barely more useful unless you have a manual in hand and know how
to use that manual and know how to understand that manual, often the
manual was written by the same sort of brain that thought IEF007001
was helpful) than 'ouch'. We (again, we system types) have just come
to accept that sort of cruft as being socially correct (at least not
embarrassing as we might feel if we put 'ouch' into our O/S err routines).
The Macintosh displays a frowning face when it's really unhappy; most
people I know chuckled once and then remarked "that's really stupid,
how about some useful info jerks?" (like IEF007001?) I wouldn't be the
least bit surprised to hear that those smiley/frowney macs lost them
heaps of sales (we can't have CUTE on the CEO's desk...give me IEF007001!)
I think we keep straddling some line of appearing real professional
(IEF007001) vs terminal cutesiness (ouch.) I suppose there is a huge
middle ground with some dialogue (like computer dialogues).
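That middle ground might pair the precise code with plain language. A
hypothetical sketch in Python (the error table, codes, and wording below
are invented for illustration, not drawn from any actual O/S):

```python
# Keep the exact code for the manual-readers, but lead with an
# explanation a person can act on without the manual in hand.
ERRORS = {
    "E007": "The program tried to use memory it does not own "
            "(segmentation violation); a core file was written.",
    "E012": "The disk is full, so the file could not be written.",
}

def report(code):
    # Plain explanation first, exact code last for those who want it.
    return "%s [%s]" % (ERRORS.get(code, "Unknown error."), code)

print(report("E007"))
```

The message stays searchable by its code yet reads as dialogue rather
than as IEF007001 or 'ouch'.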
-Barry Shein, Boston University
------------------------------
Date: 31 Mar 86 23:36:03 GMT
From: decvax!hplabsb!marvit@ucbvax.berkeley.edu (Peter Marvit)
Subject: Re: Computer Dialogue
> Mark Davis asks if computers have anything akin to human feelings.
>
> Barry Kort responds with a wonderful description of a gigantic telephone
> switching system and draws a powerful parallel with its sensors and
> resulting information about physical problems and the very human sense of
> pain.
A friend of mine and I were discussing a similar point. If a computer
were able to tell us "what it is like to be a computer," would it be considered
conscious? That is, what would be our nomenclature for a system which could
describe its innards and current state (and possibly modify some of itself -
perhaps by taking "home remedies").
My friend is a philosopher and I am a computer scientist/humanist (admittedly
an oxymoron at times). I contend consciousness is a slippery term which I
find uncomfortable. Further, existing computer systems exhibit such behavior,
albeit in a somewhat crude and unsophisticated fashion (see "df" or "fsck").
Barry gave another excellent example, cited above.
However, the question is still a valid one if one looks beyond the operational
issues and poses the more subtle philosophical query: What is it like to "be"
anything and what would/could a computer say about itself? At one point,
I argued that the question may be completely outside the computer's world view.
That is, it would be like asking a five year old what sex feels like (please,
no flames about sophisticated tykes). The computer wouldn't have the vocabu-
lary or internal model to be able to answer that. Yet, if we programmed
that capability in ...
I look forward to your thoughts on the net or to me.
Peter Marvit ...!hplabs!marvit
Hewlett-Packard Laboratories
------------------------------
Date: 31 Mar 86 14:53:58 GMT
From: nike!riacs!seismo!cit-vax!trent@ucbvax.berkeley.edu (Ray Trent)
Subject: Re: re: Computer Dialogue #1
In article <2345@jhunix.UUCP> ins_akaa@jhunix.UUCP (Ken Arromdee) writes:
>>toasters do"... doesn't mean that a combination of many toasters cannot, and
>You are actually quite correct. There's one problem here. Toasters can store
>perhaps two or three bytes of information. Consider how many toasters
Correct me if I'm wrong, but my understanding of the currently
dominant theory about the way human beings remember things says
that brains store NO "bytes" of information at all, but that
memory is an aggregate effect generated by the _interconnections_
of the brain cells.
The only papers I have read on this subject are by John Hopfield
here at Caltech. Does anyone out there have any pointers to good
research (people or papers) being done in this field? (have
email, will summarize)
I am particularly interested in this subject because I have seen
a simple program that simulates the connection matrix of a simple
neural network. The program can "remember" things in a
connection matrix, and then "recall" them at a later time given
only pieces of the original data. Sample session:
% learn "Ross" "Richard" "Sandy"...
% ask "Ro"
Ross
% ask "Ri"
Richard
% ask "R"
Rqchird
Note the program's reaction to an ambiguous request; it
extrapolated from what it "knew" to a reasonable guess at a "real
memory" (note that 'i' + 8 = 'q' and 'a' + 8 = 'i' so the memory
was correct up to 1 bit in each of two places.)
The interesting thing about this sort of scheme is its reaction
to failed active elements. If you destroy (delete) several
locations in the connection matrix, the program doesn't lose any
specific knowledge, but it becomes harder for it to extrapolate
to the "real memory" and distinguish these from "spurious
memories." Of course, after a certain point...things break down
completely, but it's still interesting.
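The behavior described can be sketched as a tiny Hopfield-style network
in Python. The original demo program is not shown here, so the class and
pattern names below are illustrative, not the actual code:

```python
# A connection-matrix memory in the spirit of Hopfield's model: knowledge
# lives in the interconnections, not in stored "bytes".
class ConnectionMatrixMemory:
    def __init__(self, size):
        self.size = size
        self.w = [[0.0] * size for _ in range(size)]

    def learn(self, pattern):
        # Hebbian rule: strengthen links between units active together.
        for i in range(self.size):
            for j in range(self.size):
                if i != j:
                    self.w[i][j] += pattern[i] * pattern[j]

    def recall(self, cue, steps=5):
        # Settle from a partial or noisy cue toward a stored pattern.
        state = list(cue)
        for _ in range(steps):
            for i in range(self.size):
                total = sum(self.w[i][j] * state[j] for j in range(self.size))
                state[i] = 1 if total >= 0 else -1
        return state

# Store two 8-unit patterns of +1/-1, then recall one from a damaged cue.
a = [1, 1, 1, 1, -1, -1, -1, -1]
b = [1, -1, 1, -1, 1, -1, 1, -1]
mem = ConnectionMatrixMemory(8)
mem.learn(a)
mem.learn(b)

noisy = list(a)
noisy[0] = -1                      # corrupt one element of the cue
print(mem.recall(noisy) == a)      # → True: the "real memory" is restored
```

Zeroing a few entries of `w` degrades recall gradually rather than erasing
any single memory, matching the failed-elements behavior described above.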
"In a valiant attempt to save the universe, his large intestine
leapt out of his body and throttled him..."
(if you don't understand that, ignore it.)
--
../ray\..
(trent@csvax.caltech.edu)
"The above is someone else's opinion only at great coincidence"
------------------------------
Date: Wed 2 Apr 86 17:41:51-PST
From: GARVEY@SRI-AI.ARPA
Subject: Re: Computer Dialogue
Why don't you try to define what you mean by "feel?" If you get
beyond a definition based on fairly mechanistic principles, then you
have a discussion; if you don't, then your computer will probably be
shown (uninterestingly) to feel by definition. I think it's koans
like this (assuming it isn't an April Fool joke) that keep the Dreyfi
in business and that suggest that the field needs serious tightening.
If the computer should "feel" anything, why should you assume that it
feels bad when it doesn't seem to be working correctly? Perhaps it's
taking a vacation; probably it hates people and loves to make them
mad.
Cheers,
Tom
------------------------------
Date: 1 Apr 86 12:53:34 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
Peter Marvit asks if computers can have anything akin to human
consciousness or self-awareness. Excellent question.
One thing that computers *can* have is simulation models of other
systems. The National Weather Bureau's computers have a model
of atmospheric dynamics that tracks the evolution of weather patterns
with sufficient accuracy that their forecasts are at least useful,
if not perfect.
NASA and JPL (Jet Propulsion Laboratory) have elaborate computer
models of spacecraft behavior and interplanetary ballistics, which
accurately track the behavior and trajectory of the real mission
hardware.
Computers can also have models of other computers, which emulate
in software the functioning of another piece of hardware.
What would happen if you gave a computer a software model of *its
own* hardware configuration and functioning? The computer could
run the model with various perturbations (e.g. faults or design
changes) and see what happened. Now suppose that the computer
was empowered to use this model in conjunction with its own
fault-detection network. The computer could diagnose many of
its own ills, and choose remedial action. It could also explore
the wisdom of possible reconfigurations or redesigns. Digital
Equipment Corporation (DEC) has an Expert System that works out
optimal configurations for their VAX line of computers. The
Expert System runs on... (you guessed it)... a VAX.
If a computer can have a reliable model of itself, and can use
that model to maintain and enhance its own well-being, are we
very far away from rudimentary consciousness?
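The fault-probing idea above can be put in miniature Python. The component
names and the survival rule are invented for illustration; no real machine
is modeled here:

```python
# A system that carries a software model of its own configuration and
# perturbs the *model* with simulated faults before choosing a remedy.
class SelfModel:
    def __init__(self, components):
        # name -> currently working, mirroring the (hypothetical) hardware
        self.model = dict(components)

    def survives(self, config):
        # Invented rule: a working cpu plus at least one memory bank.
        return config["cpu"] and (config["bank0"] or config["bank1"])

    def simulate_fault(self, name):
        # Perturb a copy of the model; the "hardware" is untouched.
        trial = dict(self.model)
        trial[name] = False
        return self.survives(trial)

machine = SelfModel({"cpu": True, "bank0": True, "bank1": True})
print(machine.simulate_fault("bank0"))   # → True  (the other bank covers it)
print(machine.simulate_fault("cpu"))     # → False (no remedy reachable)
```

Coupling such a model to a real fault-detection network is exactly the
step the paragraph above imagines.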
For some delightful and delicious reading on computer self-awareness,
the meaning of the word "soul", and related philosophical musings,
I recommend _The Mind's I_, composed and arranged by Douglas Hofstadter
and Daniel Dennett.
--Barry Kort ...ihnp4!houxm!hounx!kort
------------------------------
Date: Sat, 5 Apr 86 14:46:35 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Byte of Toast.
Quoted in Vol 4 # 62 :-
``Our brains are enormously complex computers''.
If so, then do we all run the same operating system?
And what are the operating systems of toasters?
Gordon Joly,
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
------------------------------
Date: 7 Apr 86 03:09:16 GMT
From: ulysses!mhuxr!mhuxt!houxm!whuxl!whuxlm!akgua!gatech!seismo!rochester
!rocksanne!sunybcs!ellie!colonel@ucbvax.berkeley.edu
Subject: Re: what's it like (TV dialogue #1)
Reporter: "Mr. Computer, what's it like to be a computer?"
Computer: "Well, it's hard to explain, Frank, ..."
Reporter: "For example, what's it like to be able to read a magtape
at 6250 bpi?"
Computer: "It feels just great, Frank. Really great."
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: csdsicher@sunyabva
------------------------------
End of AIList Digest
********************
∂09-Apr-86 0826 LAWS@SRI-AI.ARPA AIList Digest V4 #73
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 Apr 86 08:26:02 PST
Date: Tue 8 Apr 1986 22:14-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #73
To: AIList@SRI-AI
AIList Digest Wednesday, 9 Apr 1986 Volume 4 : Issue 73
Today's Topics:
Psychology - Survival Instinct & Emotions
----------------------------------------------------------------------
Date: 2 Apr 86 10:23:18 GMT
From: ulysses!mhuxr!mhuxt!houxm!whuxl!whuxlm!akgua!gatech!seismo!ll-xn
!mit-amt!mit-eddie!psi@ucbvax.berkeley.edu
Subject: Re: Computer Dialogue
Hi:
Before the recent tragedy, there had been a number of
instances where the space shuttle computers aborted the mission in the
final seconds before launch. My explanation for this was that the
on-board computers were displaying a form of 'programmed survival
instinct.' In short: they were programmed to survive, and if the
launch had continued, they might not have.
Almost everyone I explained this to back then was
incredulous. "You don't actually _believe_ that the computer wanted
to survive, do you?" was a typical comment. I feel this brings out an
important point, though, which deals with simulation, feelings, and
our understanding of The Real Thing.
On a computer, simulating an event and the actual event may be
indistinguishable. (This does not mean, as one of my friends
believed, that in a computer simulation of a hurricane, the simulated
victims of the storm would be rained upon by square-root symbols. ;-))
For example, if a computer can run programs in the language Lisp and
we then write a simulator for the language CLU in Lisp, then the
computer can actually run programs in CLU.
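The layering point can be shown with a toy interpreter: a host language
running an interpreter for a guest language really does run guest
programs. The tiny prefix language below is invented for illustration
(it is not CLU, and Python stands in for Lisp):

```python
# For computation, the simulation *is* the real thing: evaluating a
# guest-language expression in the host yields the guest's answer.
def evaluate(expr):
    """Evaluate nested prefix forms like ('+', 1, ('*', 2, 3))."""
    if isinstance(expr, tuple):
        op, *args = expr
        vals = [evaluate(arg) for arg in args]
        if op == '+':
            return sum(vals)
        if op == '*':
            product = 1
            for v in vals:
                product *= v
            return product
        raise ValueError("unknown operator: %r" % (op,))
    return expr        # a number evaluates to itself

print(evaluate(('+', 1, ('*', 2, 3))))   # → 7
```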
Now, what does this mean for feelings? Well, I won't go that
far, but I would assert that a 'survival instinct' is a much simpler
thing that can be simulated on a computer. The space shuttle
computers could be thought of as programmed to survive, in just the
same way that evolution has programmed animals to survive. No
consciousness is necessary (yet), just a goal and a means to that goal.
It should be noted that the means of continuing survival available to
the space shuttle computers are very minimal right now, but even
animals must draw upon a limited set of defenses in order to survive.
The successes in AI so far have been in very restricted areas,
to say the least. Certain well-understood human abilities have been
simulated on computers. Where the ability is less understood, like
that of a chess master, the simulation breaks down. While something
such as 'survival' may be understood, I challenge anyone to come up
with a generalized theory of 'feelings.'
A final point: whenever we understand something, it loses its
magical properties for us. If, for example, we observe the complex
behavior of some program, we may be amazed. When we look at the
sources and see how it works, however, we will probably feel that
there really is no magic there, and that we could have written the
program ourselves. The same could be true of parts of the mind
which we understand. The simpler facilities, like an instinct to
survive may seem obvious, while others, such as the feeling of love
may yet seem mystical. Maybe someday we will come to understand even
that and be able to program it into computers.
Ultimately Yours,
Joseph J. Mankoski ***PSI***
{decvax!genrad, allegra, ihnp4}!mit-eddie!psi
psi@mit-ai.ARPA
In the fullness of time even parallel lines will meet.
------------------------------
Date: 3 Apr 86 20:43:57 GMT
From: hplabs!hao!seismo!umcp-cs!venu@ucbvax.berkeley.edu (Venugopala
R. Dasigi)
Subject: Re: Computer Dialogue
In article <1439@mit-eddie.MIT.EDU> psi@mit-eddie.UUCP writes:
>thing that can be simulated on a computer. The space shuttle
>computers could be thought of as programmed to survive, in just the
>same way that evolution has programmed animals to survive. No
>consciousness is necessary(yet), just a goal and a means to that goal.
>It should be noted that the means of continuing survival available to
>the space shuttle computers are very minimal right now, but even
>animals must draw upon a limited set of defenses in order to survive.
To me it appears that the ability to dynamically redefine the goal in a
context-sensitive manner is also an important characteristic of the
"survival instinct". While animals seem to have this ability, programming
this ability into computers (in the same sense as in the case of animals) is
perhaps very difficult.
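The distinction can be made concrete: in a fixed-goal program the goal is
a constant, while context-sensitive redefinition makes the goal itself a
function of the situation. The contexts and goals below are invented for
illustration:

```python
# The goal itself, not just the plan, changes with the context.
def choose_goal(context):
    # A threat overrides everything else, as with animal survival drives.
    if context.get("threat"):
        return "flee"
    if context.get("hungry"):
        return "forage"
    return "rest"

print(choose_goal({"threat": True, "hungry": True}))   # → flee
print(choose_goal({"hungry": True}))                   # → forage
```

The hard part, as the post notes, is that animals generate and rank such
contexts open-endedly, where this sketch only switches among fixed cases.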
--- Venu
Venugopala Rao Dasigi
UUCP : {seismo,allegra,brl-bmd}!umcp-cs!venu
CSNet : venu@umcp-cs
ARPA : venu@mimsy.umd.edu
US Mail: Dept. of CS, Univ. of Maryland, College Park MD 20742.
------------------------------
Date: 7 Apr 86 03:15:06 GMT
From: ulysses!mhuxr!mhuxt!houxm!whuxl!whuxlm!akgua!gatech!seismo!rochester
!rocksanne!sunybcs!ellie!colonel@ucbvax.berkeley.edu
Subject: Re: survival instinct
It depends on what you mean by "wanted." Even rocks are programmed to
survive--they're hard. (The soft ones become dirt: survival of the fittest!)
"This rock, for instance, has an I.Q. of zero. Ouch!"
"What's the matter, Professor?"
"It bit me!"
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: csdsicher@sunyabva
------------------------------
Date: 5 Apr 86 13:51:18 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
Joseph Mankoski writes a thought-provoking article on whether
survival logic in NASA computers has any connection to human
survival instincts wired into our brains from birth.
I have been pondering this question myself. It seems to me
that I have some autonomic responses to threat situations
which appear to be wired-in instincts. I note that I don't rely
on them often. Most times, I rely on learned behavior to handle
situations which might have called for fight/flight/freeze if
I were living as a hunter-gatherer on the Savannahs some 20,000
years ago.
Joseph asks for a theory of feelings. As it happens, I just wrote
a brief article on the subject, which may or may not be suitable
for publication after editorial comment and revision. Just for
the hell of it, let me append the article and solicit comments
from netters interested in this topic.
==================== Article on Feelings ========================
A Simplified Model of the Effects of Perceived Aggression
in the Work Environment
Barry Kort
Copyright 1986
Introduction
The work environment offers a mix of personalities. In this
paper, I would like to examine the effects of one dimension
along which personalities are perceived to differ, and trace
the consequential effects. I would like to focus attention
on the dimension
aggressive...assertive...politic...nonassertive...nonaggressive.
The effects that I wish to investigate are not the
behavioral responses, but the more fundamental internal body
sensations or somatic reactions which lie behind the
subsequent behavioral response. The goal of this
investigation is to discover the biological roots of somatic
reactions to stressors in the work environment, and develop
a useful model of the underlying dynamics. I make no claims
that the model constructed here is complete or
comprehensive. To do so is beyond my ken. Rather, I have
attempted to construct a first crude model, which despite
its simplicity, can be advantageously applied to ameliorate
a few of the ills that we encounter in the work environment.
A Model of the Nature of Aggressive Behavior
It has been said that civilization is a thin veneer.
Underneath our legacy of some 5000 years of civilization
lies our evolutionary past. Deep within the human brain one
can find the vestiges of our animal nature: the old mammalian
brain, the old reptilian brain. Of principal interest here
are two groups of structures responsible for much of our
"wired-in" instincts.
The cerebellum is responsible for much of our risk-taking,
self-gratifying drives, including the aggressive sex drives.
It is the cerebellum that says, "Go for it! This could be
exciting! Damn the torpedoes, full speed ahead."
The limbic system, on the other hand, is responsible for
self-protective behaviors. The limbic system perceives the
threats to one's safety or well-being, and initiates
protective or counter measures. The limbic system says,
"Hold it! This could be dangerous! We'd better go slow and
avoid those torpedoes."
Rising above it all resides the neocortex or cerebrum. This
is the "new brain" of homo sapiens which is the seat of
learning and intelligence. It is the part that gains
knowledge of cause and effect patterns, and overrules the
myopic attitude of the cerebellum and limbic system.
Occasionally, the cerebral cortex is faced with a novel
situation, where past experience and learning fail to
provide adequate instruction in how to proceed. In that
case, the usual patterns of regulation are ineffective,
and the behavioral response may revert back to the more
primitive instincts.
Whether or not the cerebral cortex carries the day, the
messages of the cerebellum and limbic system ricochet
through the nervous system, leaving their signature here and
there. In the next section, we explore how these messages
manifest themselves in somatic sensations, commonly known as
feelings.
Somatic Reactions to Stress
When an individual is presented with an unusual situation,
the lack of an immediately obvious method of dealing with it
may lead to an accumulation of stress which manifests itself
somatically. For instance, first-time jitters may show up
as a knotting of the stomach (butterflies), signaling fear
(of failure). A perceived threat may cause increased heart
rate, sweating, or a tightening of the skin on the back of
the neck. (This latter phenomenon is commonly known as
"raising of one's hackles," which in birds, causes the
feathers to stand up in display mode, warning off the
threatening invader.) Teeth clenching, which comes from
repressing the urge to express anger, leads to a common
affliction among adult males: temporomandibular joint syndrome
(TMJ). Leg shaking and pacing indicate a subliminal urge to
flee, while cold feet correspond to frozen terror (playing
'possum). All of these are variations on the
fight/flight/freeze instincts mediated by the limbic system.
They often occur without our conscious awareness. Another
reaction is migraine headaches which arise when one is vexed
by the situation at hand, and is searching without success
for a rational solution. A person's awareness of and
sensitivity to such somatic feelings may affect his mode of
expression. The somesthetic cortex is the portion of the
brain where the body stresses are registered, and this
sensation may be the primary indication that a stressor is
present in the environment. A challenge for every
individual is to accurately identify which environmental
stimulus is linked to which somatic response.
Somatic responses such as those outlined above are
intimately connected with our expressed feelings, which
usually are translated into some behavioral response along
the axis from aggressive to assertive to politic to
nonassertive to nonaggressive. The challenge is to find and
effectuate the middle ground between too much communication
and too little. The goal of the communication is to
identify the cause and effect link between the environmental
stressor and the somatic reaction, and from the somatic
reaction to the behavioral response. The challenge is all
the more difficult because the most effective mode and
intensity of the communication depends on the maturity of
the other party.
Acknowledgements
The original sources for the ideas assembled in this paper
are too diffuse to pinpoint with completeness or precision.
However, I would like to acknowledge the influence of so
many of my colleagues who took the time to contribute their
ideas and experiences on the subject matter. I especially
would like to thank Dr. John Karlin, Dr. R. Isaac Evan, and
Dr. Laura Rogers who helped me shape and test the models
presented here.
=========================================================================
Comments are invited.
--Barry Kort ...ihnp4!houxm!hounx!kort
------------------------------
End of AIList Digest
********************
∂10-Apr-86 0211 LAWS@SRI-AI.ARPA AIList Digest V4 #74
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Apr 86 01:41:54 PST
Date: Wed 9 Apr 1986 23:04-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #74
To: AIList@SRI-AI
AIList Digest Thursday, 10 Apr 1986 Volume 4 : Issue 74
Today's Topics:
Policy - Discussion Style & Professional Ethics & Press Releases,
Programming Languages - LetS Lisp Loop Notation
----------------------------------------------------------------------
Date: Thu, 3 Apr 86 11:52:18 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Less on IQ tests for Computers, more on Editorial Policy?
Scott Preece asks in Vol 4 # 66 :-
``Do we really want this list to be a battleground for unsubstantiated
personal opinions on the potential for machine intelligence?'' Agreed
that this is a moderated digest, it is interesting to note that the net.ai
forum is currently carrying a discussion of the cognitive (and emotional)
abilities of an arbitrarily large number of toasters. Here is an example:-
> In article <2345@jhunix.UUCP> ins_akaa@jhunix.UUCP (Ken Arromdee) writes:
> >You are actually quite correct. There's one problem here. Toasters can
> >store perhaps two or three bytes of information. Consider how many
> >toasters would be required to be as complex as a human brain.
> >
> >And as for the future toasters, toasters' primary function is to affect
> >items of a definite physical size (toast).
> >--
> >Kenneth Arromdee
>
> Gee, I always thought that toasters' primary function was to affect
> items of a definite physical size (bread).
> --
>
> When you meet a master swordsman,
> show him your sword.
> When you meet a man who is not a poet,
> do not show him your poem.
> - Rinzai, ninth century zen master
>
> --Nathan Hess
> uucp: {allegra, ihnp4}!psuvax1!gondor!hess
> csnet: hess@penn-state.CSNET
> Bitnet: HESS@PSUVAXG.BITNET
I would also like to extract this from the List_of_Lists :-
> Contributions may be anything from tutorials to rampant speculation. In
> particular, the following are sought:
> Abstracts Reviews
> Lab Descriptions Research Overviews
> Work Planned or in Progress Half-Baked Ideas
> Conference Announcements Conference Reports
> Bibliographies History of AI
> Puzzles and Unsolved Problems Anecdotes, Jokes, and Poems
> Queries and Requests Address Changes (Bindings)
The poetry of Rinzai is illuminating (cf. Vol 4 #50, #53), and very apt.
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
[I am unable to follow the logic of this message, but find it
easier (and faster!) to let it pass than to engage in editorial
debate with Gordon. Contributors should note that it is they,
not I, who control the quality of AIList. My thanks to you
all; keep up the good work. -- KIL]
------------------------------
Date: Fri, 4 Apr 86 12:53:55 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: World Times, April 1, 2284.
A special analysis of the entries in the AI Digests of the
mid 1980's has shown that all the entries written by "The
Joka" were the products of an automated intelligent system.
This result is regarded by some as an interesting twist on
the Turing test.
Other News.
Today the World's first trial by computer was held. The jury
consisted of 12 independent intelligent systems and they sat
at the World Court in the U.N. The jury returned its first
verdict after a few seconds, and the judge commented on the
impartiality of the jurors, unclouded by any emotion or form
of prejudice. On trial was the off-world outlaw, Roy Baty...
Reporter : PiQuan.
------------------------------
Date: Thu, 3 Apr 86 9:38:45 CST
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Professional ethics.
Over the past several months I have been receiving the AIList, and
I must take this time to express some concerns of mine. I have seen
several "policy notices" and debates raging where the authors have
lowered themselves to the level of the "ad homina"(sp?) attack.
One should have more substantive comments if one wishes to express
criticism, and not resort to personal attacks.
I am in no way opposed to healthy debate, even if it should become
heated. However, there seems to be some dislike, on the part of many,
of pointed criticism. I wish to admonish those who take part in
this medium of intellectual exchange to express a little more common
courtesy and professional ethic, if indeed either of these still
remain. Let's drop the name-calling.
I personally welcome criticism of AI, even if it (the criticism) may
be in left field. After all, many think we are in left field, while
we may hold that they are in left field. So, exactly where is left
field? Perhaps it is dependent on one's own position? Also, we should
remember that this is a moderated digest. I personally trust the
discretion of Ken, who I think does a good job, to weed out any
inappropriate notices. Thus, I would love to see this list continue
to announce various product and research developments, whether it be
presented by a party directly involved in the development or someone
farther removed. As long as it is not out-and-out advertisement, I,
as well as others (I think), am interested in such postings.
Enough for now...
Glenn O. Veach
Artificial Intelligence Laboratory
Department of Computer Science
University of Kansas
Lawrence, KS 66045-2192
(913) 864-4482
veach%ukans.csnet@csnet-relay.csnet
------------------------------
Date: Wed 9 Apr 86 10:50:34-PST
From: Pat Hayes <PHayes@SRI-KL>
Subject: Re: AIList Digest V4 #70
Part of this AIlist reads perilously like an advertisement, even though it is
protected by Les Earnest's mention. Do we have to have whole 'product
descriptions' (i.e., advertising brochures) put out over the net? Isn't that
( just slightly ) illegal?
Pat Hayes
------------------------------
Date: Sun 16 Mar 86 22:01:03-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Policy - Press Releases
A press release typically contains factual information; the cost of
transmitting it is small. Is it not always in the government's
interest for me to pass on the information to those who may need it
rather than to censor it (to avoid annoying those who don't)?
Early net organizers were no doubt [rightly] worried about corporate
PR departments broadcasting unwanted press releases to everyone on
the net. The situation has changed. A press release judged appropriate
for a narrow-topic discussion list by its moderator is unlikely to
offend many (other than self-appointed censors) or to seriously
waste the time of the list members. It will not mislead readers so long
as it is clearly marked as a commercial message. The inherent bias of
such messages is mitigated by the opportunity for immediate rebuttal
and for submission of equally biased messages supporting other views.
Any resulting controversy sparks interest and keeps the list active.
Outright flaming or numbing repetition can be prevented by the moderator.
If the moderator fails to intervene, comments from disgruntled readers
will fill his (or her) mailbox and eventually become a metadiscussion
within the list itself. Readers who get tired of all this can drop out.
My view is that policy on commercial content (hardware hype, job ads,
prices, whatever) within a discussion list should be set by the
moderator and the list members -- not by conventions required for
unmoderated message streams. The Arpanet administrators and host
administrators will always hold the trump, of course; they can refuse
to support any list that violates >>their<< standards.
-- Ken Laws
------------------------------
Date: Fri, 28 Mar 1986 15:56 EST
From: Dick@MC.LCS.MIT.EDU
Subject: LetS -- a new Lisp loop notation
[Forwarded from the MIT bboard by Laws@SRI-AI.]
This message advertises a Common Lisp macro package called LetS (rhymes with
process) which it is hoped will become a standard iteration facility in Common
Lisp. LetS makes it possible to write a wide class of algorithms which are
typically written as loops in a functional style which is similar to
expressions written with the Common Lisp sequence functions. LetS supports a
number of features which make LetS expressions more expressive than sequence
expressions. However, the key feature of LetS is that every LetS expression is
automatically transformed into an efficient iterative loop. As a result,
unlike sequence expressions, LetS expressions are just as efficient as the
traditional loop expressions they replace.
An experimental version of LetS currently exists on the MIT-AI machine in the
file "DICK;LETS BIN". Although LetS is written in Common Lisp, it has not yet
been tested on anything other than a Symbolics Lisp Machine. For various
detailed reasons it is unlikely to run on any other machine. Everyone who
wants to is invited to borrow this file and try LetS out. I am very
interested to hear any and all comments on LetS.
Extensive documentation of LetS is in the file "DICK;LETSD >" also on the
MIT-AI machine. Even people who do not have a Lisp Machine or are not able
to access the code are invited to read this documentation and make comments on
it. I am interested in getting as wide a feedback as possible. If you cannot
access the documentation file directly, send me your US mail address and I will
mail you a copy. The documentation is much too long to reliably send via
computer mail.
After an initial testing and feedback period, a final version of LetS which
runs under all Common Lisps will be created along with formal documentation.
This should happen within a couple of months.
A very brief summary of LetS is included at the end of this message.
Dick Waters
The advantages (with respect to conciseness, readability, verifiability and
maintainability) of programs written in a functional style are well known. A
simple example of the clarity of the functional style is provided by the
Common Lisp program below. This function computes the sum of the positive
elements of a vector.
(defun sum-pos-vect (v)
  (reduce #'+ (remove-if-not #'plusp v)))
A key feature of sum-pos-vect is that it makes use of an intermediate
aggregate data structure (a sequence) to represent the selected set of vector
elements. The use of sequences as intermediate quantities in computations
makes it possible to use functional composition to express a wide variety of
computations which are usually represented as loops. Unfortunately, as
typically implemented, sequence expressions are extremely inefficient.
The problem is that straightforward evaluation of a sequence expression
requires the actual creation of the intermediate sequence objects. Since
alternate algorithms using loops can often compute the same result without
creating any intermediate sequences, the overhead engendered by using sequence
expressions is quite reasonably regarded as unacceptable in many situations.
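The overhead described above can be illustrated outside Lisp as well. The
following is a minimal Python sketch (the function names are illustrative
and are not part of LetS) contrasting a sequence-style pipeline, which
materializes an intermediate list, with the fused loop that computes the
same sum directly:

```python
# Sequence style: the filtering stage allocates an intermediate list
# before the reduction stage ever runs.
def sum_pos_seq(v):
    kept = [x for x in v if x > 0]   # intermediate sequence is created here
    total = 0
    for x in kept:
        total += x
    return total

# Fused loop: same result, no intermediate sequence is ever built.
def sum_pos_loop(v):
    total = 0
    for x in v:
        if x > 0:
            total += x
    return total
```

Both functions return the same value; the second simply avoids the
intermediate allocation, which is the essence of the transformation LetS
performs automatically.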
A solution to the problem of the inefficiency of sequence expressions is to
transform them into iterative loops which do not actually create any
intermediate sequences before executing them. For example, sum-pos-vect might
be transformed as shown below.
(defun sum-pos-vect-transformed (v)
  (prog (index last sum element)
        (setq index 0)
        (setq last (length v))
        (setq sum 0)
     L  (if (not (< index last)) (return sum))
        (setq element (aref v index))
        (if (plusp element) (setq sum (+ element sum)))
        (setq index (1+ index))
        (go L)))
Several researchers have investigated the automatic transformation of
sequence expressions into loops. For example, APL compilers transform many
kinds of sequence expressions into loops.
Unfortunately, there is a fundamental problem with the transformation of
sequence expressions into loops. Although many sequence expressions can be
transformed, many cannot. For example, Common Lisp provides a sequence
function (reverse) which reverses the elements in a sequence. Suppose that a
sequence expression enumerates a sequence, reverses it, and then reduces it to
some value. This sequence expression cannot be computed without using
intermediate storage for the enumerated sequence because the first element of
the reversed sequence is taken from the last element of the enumerated
sequence. There is no way to transform the sequence expression into an
efficient loop without eliminating the reverse operation.
A solution to the problems caused by the presence of non-transformable
sequence operations is to restrict the kinds of sequence operations which
are allowed so that every sequence expression is guaranteed to be
transformable. For example, one could start by outlawing the operation
reverse.
LETS
LetS supports a wide class of sequence expressions that are all guaranteed
to be transformable into efficient loops. In order to avoid confusion with
the standard Common Lisp data type sequence, the data type supported by LetS
is called a series.
Using LetS the program sum-pos-vect would be rendered as shown below. The
function Evector converts the vector v into a series which contains the same
elements in the same order. The function Tplusp is analogous to
(remove-if-not #'plusp ...) except that it operates on a series. The function
Rsum corresponds to (reduce #'+ ... :initial-value 0) except that it takes in
a series as its argument.
(defun sum-pos-vect-lets (v)
  (Rsum (Tplusp (Evector v))))
LetS automatically transforms the body of this program as shown below. The
readability of the transformed code is reduced by the fact that it contains a
large number of gensymed variables. However, the code is quite efficient.
The only significant problem is that too many variables are used. (For
example, the variable #:vector5 is unnecessary.) However, this problem need
not lead to inefficiency during execution as long as a compiler which is
capable of simple optimizations is available.
(defun sum-pos-vect-lets-transformed (v)
  (let (#:index12 #:last4 #:sum21 #:element11 #:vector5)
    (tagbody (setq #:vector5 v)
             (setq #:index12 0)
             (setq #:last4 (length #:vector5))
             (setq #:sum21 0)
       #:p0  (if (not (< #:index12 #:last4)) (go #:e9))
             (setq #:element11 (aref #:vector5 #:index12))
             (setq #:index12 (1+ #:index12))
             (if (not (plusp #:element11)) (go #:p0))
             (setq #:sum21 (+ #:element11 #:sum21))
             (go #:p0)
       #:e9)
    #:sum21))
RESTRICTIONS ENFORCED BY LETS
The key aspect of LetS is that it enforces a palatable (and not overly
strict) set of easily understandable restrictions which guarantee that every
series expression can be transformed into a highly efficient loop. This
allows programmers to write series expressions which are much easier to work
with than the loops they might otherwise write, without suffering a decrease
in efficiency.
There are two central restrictions which are enforced by LetS. First, every
series must be statically identifiable so that transformation can occur at
compile time rather than at run time. Second, every series function is
required to be "in-order". A series function is said to be in-order if it
reads each input series in order, one element at a time, starting from the
first one, and if it creates the output series (if any) in order, one element
at a time, starting from the first one. In addition, the function must do
this without using internal storage for more than one element at a time for
each of the input and output series. For example, the series functions
Evector, Tplusp, and Rsum are all in-order. In contrast, the function reverse
is not in-order. (Reverse either has to read the input in reverse order, or
save up the elements until the last one is read in.)
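The in-order property can be pictured with generators. Here is an
illustrative Python sketch (not LetS itself; the names loosely mirror
Evector, Tplusp, and Rsum): in-order functions hand along one element at
a time, so composing them needs only constant storage per series, whereas
a reverse must buffer its entire input before yielding anything.

```python
def e_list(xs):
    """Enumerate a series in order, one element at a time."""
    for x in xs:
        yield x

def t_plusp(series):
    """In-order filter: passes positive elements through, constant storage."""
    for x in series:
        if x > 0:
            yield x

def r_sum(series):
    """In-order reduction: consumes one element at a time."""
    total = 0
    for x in series:
        total += x
    return total

def reverse_series(series):
    """NOT in-order: must store the whole input before producing output."""
    return iter(list(series)[::-1])
```

The composition r_sum(t_plusp(e_list(...))) runs as a single pass with no
intermediate series, while any pipeline containing reverse_series is
forced to allocate storage proportional to the input length.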
OTHER FEATURES OF LETS
Although efficiency is the main goal of LetS, LetS supports a number of
features which are not directly related to efficiency per se. Most notable of
these is implicit mapping of functions over series. Whenever an ordinary Lisp
function is syntactically applied to a series, it is automatically mapped over
the elements of the series.
The following example illustrates implicit mapping. In the function below,
the computation "(lambda (x) (expt (abs x) 3))" is implicitly mapped over the
series of numbers generated by Evector. Implicit mapping of this sort is a
commonly used feature of APL and is extremely convenient.
(defun sum-cube-abs-vect (v)
  (Rsum (expt (abs (Evector v)) 3)))
(sum-cube-abs-vect #(1 -2 3)) => (+ 1 8 27) => 36
New series functions can be defined by using the form defunS. The following
example shows how the function Rsum could be defined. More complex forms can
be defined by using the ordinary Common Lisp macro definition facilities to
define macros which create appropriate series expressions.
(defunS Rsum (numbers)
  (declare (series numbers))
  (reduceS #'+ 0 numbers))
LetS provides two forms (LetS and LetS*) which are analogous to let and
let*. As shown in the example below, These forms can be used to bind both
ordinary variables (e.g., num-obs, mean, and deviation) and series variables
(e.g., ob). Whether or not a variable is a series is determined
by looking at the type of value produced by the expression which computes
the value bound to it.
(defun mean-and-deviation (observations)
  (letS* ((ob (Elist observations))
          (num-obs (Rlength ob))
          (mean (/ (Rsum ob) num-obs))
          (deviation (- (/ (Rsum (expt ob 2)) num-obs) (expt mean 2))))
    (list mean deviation)))
The complete documentation of LetS compares LetS with the Common Lisp
sequence functions and with the Zeta Lisp Loop macro. LetS supports
essentially all of the functionality of the Loop macro in a style which looks
like sequence functions and which is exactly as efficient as the loop macro.
THE ANCESTRY OF LETS
The LetS package described here is descended from an earlier package of the
same name (See MIT/AIM-680a and "Expressional Loops", Proc. Eleventh ACM
SIGACT-SIGPLAN Symposium on the Principles of Programming Languages, January
1984). The current system differs from the earlier system in a number of
ways. In particular, the new system supports a much wider set of features.
------------------------------
End of AIList Digest
********************
∂10-Apr-86 0441 LAWS@SRI-AI.ARPA AIList Digest V4 #75
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Apr 86 04:39:02 PST
Date: Wed 9 Apr 1986 23:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #75
To: AIList@SRI-AI
AIList Digest Thursday, 10 Apr 1986 Volume 4 : Issue 75
Today's Topics:
Games - Game-Playing Programs,
Philosophy - Computer Consciousness & Wittgenstein and NL &
Reply to Lucas on Formal Systems
----------------------------------------------------------------------
Date: Wed, 09 Apr 86 11:54:48 -0500
From: lkramer@dewey.udel.EDU
Subject: Game-Playing Programs
Re: Allen Sherzer's request for information of AI game-playing
programs.
I wrote a program last year for an expert systems course that plays
the card game Spades. (ESP -- Expert Spades Player) It is implemented
as a frame-based expert system written in minifrl (my revision of the
frame primitives in Winston and Horn's Lisp) on top of Franz. The
program is fairly simple-minded in that it doesn't learn from its mistakes
or deal well with novel situations, but it still is able to play a fairly
good game of Spades.
In addition, since it is written as an expert system, its rule-base is
easily modifiable.
Mostow has written a (much more sophisticated) program that plays Hearts
and is able to operationalize from fairly general advice.
Mostow, D.J. (1983). Machine transformation of advice into a heuristic
search procedure. In R.S. Michalski, J. Carbonell, and T.M.
Mitchell, eds., Machine Learning: An Artificial Intelligence
Approach. Tioga Press.
------------------------------
Date: 9 Apr 86 08:55:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: computer consciousness
Thought I'd jump in here with a few points.
1. There's a MetaPhilosophers mailing list (don't ask me why the
"meta") where folks thrash on about this stuff constantly, so if you
care, listen in. Tune in to: MetaPhilosophers%MIT-OZ@MIT-MC.
2. There's a common problem with confusing epistemological questions
(what would constitute evidence for computer consciousness) and
ontological ones (so, is it *really* conscious). Those who
subscribe to various verificationist fallacies are especially
vulnerable, and indeed may argue that there is ultimately
no distinction. The point is debatable, obviously, but we
shouldn't just *assume* that the latter question (is it *really*
conscious) is meaningless unless tied to an operational definition.
After all, conscious experience is the classic case of a
*private* phenomenon (ie, no one else can directly "look" at your
experiences). If this means that consciousness fails a
verificationist criterion of meaningfulness, so much the worse
for verificationism.
3. Taking up the epistemological problem for the moment, it
isn't as obvious as many assume that even the most sophisticated
computer performance would constitute *decisive* evidence for
consciousness. Briefly, we believe other people are conscious
for TWO reasons: 1) they are capable of certain clever activities,
like holding English conversations in real-time, and 2) they
have brains, just like us, and each of us knows darn well that
he/she is conscious. Clearly the brain causes/supports
consciousness and external performance in ways we don't
understand. A conversational computer does *not* have a brain;
and so one of the two reasons we have for attributing
consciousness to others does not hold.
Analogy: suppose you know that cars can move, that they all have
X-type-engines, and that there's something called combustion
which depends on X-type-engines and which is instrumental in getting
the cars to move. Let's say you have a combustion-detector
which you tried out on one car and, sure enough, it had it, but
then you dropped your detector and broke it. You're still pretty
confident that the other cars have combustion. Now you see a
very different type of vehicle which can move, but which does
NOT have an X-type-engine - in fact you're not too sure whether
it's really an engine at all. Now, is it just obvious that this
other vehicle has combustion?? Don't we need to know a) a good
definition of combustion, b) some details as to how X-type-engines
and combustion are related, c) some details as to how motion
depends on combustion, d) in what respects the new "engine"
resembles/differs from X-type-engines, etc.? The point is
that motion (performance) isn't *decisive* evidence for combustion
(consciousness) in the absence of an X-type-engine (brain).
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: 2 Apr 86 08:58:24 GMT
From: amdcad!cae780!leadsv!rtgvax!ramin@ucbvax.berkeley.edu (Pantagruel)
Subject: Natural Language processing
An issue that has cropped up now and again through my studies has been
the relation between current Natural Language/Linguistic research and
the works of Ludwig Wittgenstein (especially through the whole Vienna
School mess and later in his writings in "Philosophical Investigations").
It appears to me (in observing trends in such theories) and especially
after the big hoopla over Frames that AI/Cognitive Research has spent
the past 30 years experimenting through "Tractatus" and has just now warmed
up to "P.I." The works of the Vienna School's context-free language analyses
earlier in this century seems quite parallel to early context-free language
parsing efforts.
The later studies in P.I. with regard to the role of Natural Context and
the whole Picture-Theory rot seem to have been a direct result of the
failure of the context-free approach. Quite a few objections voiced nowadays
by researchers on the futility of context-free analysis seem very
similar to the early chapters in P.I.
I still haven't gone through Wittgenstein with a fine enough comb as I
would like... especially this latter batch of his notes that I saw
a few weeks ago finally published and available publicly... But I still
think there is quite a bit of merit to this fellow's study of language
and cognition.
Any opinions on this...? Any references to works to the contrary?
I must be fair in warning that I hold Wittgenstein's works to contain
the answers to some of the biggest issues facing us now... Personally, I'm
holding out for someone to come up with some relevant questions...
I think Bertrand Russell was correct in assessing L.W.'s significance...
Please mail back to me for a livelier dialogue... The Net seems rather
hostile nowadays... (but post to net if you think it merits a public forum)...
"Pantagruel at his most vulgar..."
= = =
Alias: ramin firoozye | USps: Systems Control Inc.
uucp: ...!shasta \ | 1801 Page Mill Road
...!lll-lcc \ | Palo Alto, CA 94303
...!ihnp4 \...!ramin@rtgvax | ↑G: (415) 494-1165 x-1777
= = =
------------------------------
Date: Fri, 4 Apr 86 13:34:54 est
From: Stanley Letovsky <letovsky@YALE.ARPA>
Subject: Reply to Lucas
At the conference on "AI and the Human Mind" held at Yale early in
March 1986, a paper was presented by the British mathematician John
Lucas. He claimed that AI could never succeed, that a machine was in
principle incapable of doing all that a mind can do. His argument went
like this. Any computing machine is essentially equivalent to a system
of formal logic. The famous Godel incompleteness theorem shows that for
any formal system powerful enough to be interesting, there are truths
which cannot be proved in that system. Since a person can see and
recognize these truths, the person can transcend the limitations of the
formal system. Since this is true of any formal system at all, a person
can always transcend a formal system, therefore a formal system can
never be a model of a person. Lucas has apparently been pushing this
argument for several decades.
Marvin Minsky gave the rebuttal to this; he said that formal
systems had nothing to do with AI or the mind, since formal systems
required perfect consistency, whereas what AI required was machines that
make mistakes, that guess, that learn and evolve. I was less sure of
that refutation; although I agreed with Minsky, I was worried that
because the algorithms for doing all that guessing and learning and
mistake making would run on a computer, there was still a level of
description at which the AI model must look like a consistent formal
system. This is equivalent to the statement that your theory of the
mind is a consistent theory. I was worried that Lucas could revive his
argument at that level, and I wanted a convincing refutation. I have
found one, which I will now present.
First, we need to clarify the relationship between a running
computer program and a system of formal logic. A running computer
program is a dynamic object; it has a history composed of a succession
of states of the machine. A formal system, by contrast, is timeless:
it has some defining axioms and rules of inference, and a space of
theorems and nontheorems implicitly defined by those axioms and rules.
For a formal system to model a dynamic process, it must describe in its
timeless manner the temporal behavior or history of the process. The
axioms of the formal system, therefore, will contain a time parameter.
They might look something like this:
if the process is in a state of type A at time t1,
it will be in a state of type B in the next instant.
A more complicated problem is how the interaction between the
computer program and the outside world is to be modelled within the
formal system. You cannot simulate input and output by adding axioms to
the formal system, because changing the axioms changes the identity of
the system. Moreover, input and output are events in the domain of the
running program; within the formal system they are just axioms or
theorems which assert that such and such an input or output event
occurred at such and such a time. The ideal solution to this problem is
to include within the formal system a theory of the physics of the world
as well as a theory of the mind. This means that you can't construct a
theory of the mind until you have a theory of the rest of the universe,
which seems like a harsh restriction. Of course, the theory of the rest
of the universe need not be correct or very detailed; an extremely
impoverished theory would simply be a set of assertions about sensory
data received at various instants. Alternatively, you could ignore I/O
completely and just concern yourself with a model of isolated thought;
if we debunk Lucas' argument for this case we can leave it to him to
decide whether to retreat to the high ground of embodied thinking
machines. Therefore I will ignore the I/O issue.
The next point concerns the type of program that an AI model of
the mind is likely to be. Again, ignoring sensory and motor processing
and special purpose subsystems like visual imagery or solid modelling,
we will consider a simple model of the mind as a process whose task is
belief fixation. That is, the job of the mind is to maintain a set of
beliefs about the world, using some kind of abductive inference
procedure: generate a bunch of hypotheses, evaluate their credibility
and consistency using a variety of heuristic rules of evidence, and, on
occasion, commit to believe a particular hypothesis.
It is important to understand that the set of beliefs maintained
by this program need not be consistent with each other. If we use the
notation
believes(Proposition,Instant)
to denote the fact that the system believes a particular proposition at
some instant, it is perfectly acceptable to have both
believes(p,i)
and
believes(not(p),i)
be theorems of the formal system which describes the program's behavior.
The formal system must be a consistent description of the behavior of
the program, or we do not have a coherent theory. The behavior of the
program must match Lucas' (or some other person's) behavior or we do not
have a correct theory. However the beliefs maintained by the program
need not be a consistent theory of anything, unless Lucas happens to
have some consistent beliefs about something.
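The distinction can be made concrete with a toy sketch in Python (purely
illustrative; propositions are represented as plain strings): the record
of believes(prop, instant) facts is itself a perfectly consistent set of
assertions even when the believed propositions contradict one another.

```python
# Meta-level: a consistent set of facts of the form believes(prop, instant).
# Object-level: the propositions themselves, which may be contradictory.
beliefs = set()

def believe(prop, instant):
    """Record that the system holds proposition `prop` at time `instant`."""
    beliefs.add((prop, instant))

believe("p", 1)
believe("not p", 1)   # the object level contradicts itself...

# ...yet the meta-level record remains a consistent set of assertions:
assert ("p", 1) in beliefs
assert ("not p", 1) in beliefs
```

Nothing in the meta-level theory is both asserted and denied; only the
quoted contents of the beliefs conflict, which the argument above permits.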
For those more comfortable with technical jargon, the formal
system has a meta-level and an object level. The object level describes
Lucas' beliefs and is not necessarily consistent; the meta-level is our
theory of Lucas' belief fixation process and had better be consistent.
The object level is embedded in the meta-level using the modal operator
"believes".
What would it mean to formulate a Godel sentence for this system?
To begin with, we seem to have a choice about where to formulate the
Godel sentence: at the object level or the meta level. Formulating a
Godel sentence for the object level, that is, the level of Lucas'
beliefs, is clearly a waste of time, however. This level is not
required to be consistent, and so Godel's trick of forcing us to choose
between consistency and completeness fails: we have already rejected
consistency.
The more serious problem concerns a Godel sentence formulated for
the meta-level, which must be consistent. The general form of a Godel
sentence is
G: not(provable(G))
where "provable" is a predicate which you embed in the system in a
clever way, and which captures the notion of provability within the
system. The meaning of such a sentence is "This sentence is not a
theorem", and therein lies the Godelian dilemma: if the sentence is
true, the system is incomplete because not all statable truths are
theorems. If the sentence is false, then the system is inconsistent,
because G is both true and false. This dilemma holds for all
"sufficiently powerful" systems, and we assume that our model of Lucas
falls into this category, and that one can therefore write down a Godel
sentence for the model.
What is critical to realize, however, is that the Godel sentence
for our model of Lucas is not a belief of Lucas' according to the model.
The form of the Godel sentence
G: not(provable(G))
is syntactically distinct from the form of an assertion about Lucas'
beliefs,
believes(p,t)
Nothing stops us from having
believes(G,t)
be provable in the system, despite the fact that G is not itself
provable in the system. (Actually, the last sentence is incorrect,
since it is illegal to put G inside the scope of the "believes"
operator. G is a meta-level sentence, and only object level sentences
are permitted inside "believes". The object level and the meta level
are not allowed to share any symbols. If you want to talk about Lucas's
beliefs about the model of himself, you will have to embed Lucas' model
of the model of himself at the object level, but we can ignore this
technicality.)
This point is crucial: the Godel sentence for our theory of Lucas
as a belief-fixing machine is not a theorem ascribing any beliefs to
Lucas. Therefore the fact that Lucas can arrive at a belief that the
Godel sentence is true is perfectly compatible with the fact that the
system cannot prove G as a theorem. Lucas' argument depends on the
claim that if he believes G, he transcends the formal system: this is
his mistake. Lucas can believe whatever he wants about what sentences
can or can't be proved within the model of himself. The only way his
beliefs have any bearing on the correctness of the model is if the model
predicts that Lucas will believe something he doesn't, or disbelieve
something he believes. In other words, the usual criteria of science
apply to judging the correctness of the model, and no Godelian sophistry
can invalidate the model a priori.
Lucas' argument has a certain surface plausibility to it. Its
strength seems to depend on the unwarranted assumption that the theorems
of the formal system correspond directly to the beliefs of the mind
being modelled by that system. This is a naive and completely
fallacious assumption: it ignores the fact that minds are temporal
processes, and that they are capable of holding inconsistent beliefs.
When these issues are taken into account, Lucas' argument falls flat.
------------------------------
End of AIList Digest
********************
∂10-Apr-86 2132 LAWS@SRI-AI.ARPA AIList Digest V4 #76
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Apr 86 21:32:02 PST
Date: Wed 9 Apr 1986 23:36-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #76
To: AIList@SRI-AI
AIList Digest Thursday, 10 Apr 1986 Volume 4 : Issue 76
Today's Topics:
Seminars - NL Interfaces to Expert Systems (Villanova) &
Minsky (SIU-Edwardsville) &
Frames and Objects in Modeling and Simulation (SU) &
Machine Inductive Inference (UPenn) &
Conditionals and Inheritance (CMU) &
Knowledge Retrieval as Specialized Inference (CMU) &
Ontology and Efficiency in a Belief Reasoner (UPenn) &
Probabilistic Inference: Theory and Practice (SMU),
Conference - Southern California AI Conference Program
----------------------------------------------------------------------
Date: Fri, 4 Apr 86 13:09 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - NL Interfaces to Expert Systems (Villanova)
I got an announcement in the mail this week about the first meeting of the
DELAWARE VALLEY AI ASSOCIATION. It will be held at Villanova University
(Tolentine Hall, room 215) on April 21st at 7:30pm. The meeting will
discuss the organizational structure of the association, introduce the
current officers, and feature a talk by Bonnie Webber on "Natural Language
Interfaces to Expert Systems".
DIRECTIONS: from rt. 320 North turn right onto route 30. At the first
light, turn right into the parking lot. Walk across route 30 and proceed
along the walkway towards the chapel. Turn left at the Chapel to Tolentine
Hall, which is about 50 yards to the right.
For more information, call 215-265-1980.
------------------------------
Date: 8 Apr 1986 13:30-EST
From: ISAACSON@USC-ISI.ARPA
Subject: Seminar - Minsky (SIU-Edwardsville)
Marvin Minsky will be in the St. Louis area on Tuesday and Wednesday,
April 22, 23. He'll give a talk at Southern Illinois University at
Edwardsville on:
THE SOCIETY OF MIND
Science Labs Bldg., Room 1105
Tuesday, 7:30 pm
April 22, 1986
Admission is free and people in the St. Louis area are welcome.
------------------------------
Date: Tue 8 Apr 86 16:27:21-PST
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Frames and Objects in Modeling and Simulation (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Frames and Objects: Application to Modeling And Simulation
Speaker: Richard Fikes and Marilyn Stelzner
From: Intellicorp
Date: Wednesday, April 9, 1986
Time: 4:00 - 5:30
Place: Terman 556
We will describe the characteristic features of frame-based knowledge
representation facilities and indicate how they can provide a
foundation for a variety of knowledge-system functions. We will focus
on how frames can contribute to a knowledge system's reasoning
activities and how they can be used to organize and direct those
activities. Application to engineering modelling and simulation will
be discussed.
Visitors welcome.
------------------------------
Date: Tue, 8 Apr 86 12:00 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Machine Inductive Inference (UPenn)
Forwarded From: Dale Miller <Dale@UPenn> on Tue 8 Apr 1986 at 8:35
UPenn Math-CS Logic Seminar
SOME RECENT RESEARCH ON MACHINE INDUCTIVE
INFERENCE
Scott Weinstein
Tuesday, 8 April 1986, 4:30 - 6:00, 4N30 DRL
The talk will survey some recent (and not so recent) results on the inference
of r.e. sets and first-order structures.
------------------------------
Date: 8 Apr 1986 1416-EST
From: Lydia Defilippo <DEFILIPPO@C.CS.CMU.EDU>
Subject: Seminar - Conditionals and Inheritance (CMU)
Speaker: Rich Thomason
Date: Thursday, April 17
Time: 3:00 pm
Place: 4605
Topic: CONDITIONALS AND INHERITANCE
This talk will provide motivation and an overview of an
NSF-sponsored research project that has recently begun here, involving
David Touretzky, Chuck Cross, Jeff Horty, and Kevin Kelly. The portion
of the project on which I will concentrate aims at bringing logical work
on conditionals to bear on nonmonotonic reasoning, and in particular on
inheritance theory.
Some of the background for the theory consists in the need for a
qualitative approach to "belief kinematics" (or knowledge revision, or
database update), as opposed to a quantitative approach such as the
Bayesian one. The logic of conditionals provides some principles for
such an approach, where the conditionals are interpreted as indicative
expressions of willingness to make belief transitions.
Although we have many firm intuitions about inheritance in
particular cases, it is difficult to establish a correct general
definition of nonmonotonic inheritance for arbitrary semantic nets.
I will show how a definition of inheritance generates a definition
of validity for simple conditional expressions, and will suggest that
this can be used as a criterion to judge inheritance definitions.
I will present some results relating particular inheritance definitions
to conditional logics.
These results depend on a kind of ad hoc update procedure for
semantic nets. I will suggest that a better procedure might be
obtained by considering nets with both monotonic and nonmonotonic
links.
If time permits, I will develop some analogies between semantic
nets and Gentzen systems or natural deduction.
------------------------------
Date: 8 April 1986 1615-EST
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Knowledge Retrieval as Specialized Inference (CMU)
Speaker: Alan M. Frisch, University of Rochester
Date: Tuesday, April 22
Time: 3:30 - 5:00
Place: 5409 Wean Hall
Title: Knowledge retrieval as specialized inference
Artificial intelligence reasoning systems commonly contain a large
corpus of declarative knowledge, called a knowledge base (KB), and
provide facilities with which the system's components can retrieve
this knowledge.
Consistent with the necessity for fast retrieval is the guiding
intuition that a retriever is, at least in simple cases, a pattern
matcher, though in more complex cases it may perform selected
inferences such as property inheritance.
Seemingly at odds with this intuition, the thesis of this talk is that
the entire process of retrieval can be viewed as a form of inference
and hence the KB as a representation, not merely a data structure. A
retriever makes a limited attempt to prove that a queried sentence is
a logical consequence of the KB. When constrained by the no-chaining
restriction, inference becomes indistinguishable from pattern-matching.
Imagining the KB divided into quanta, a retriever that respects this
restriction cannot combine two quanta in order to derive a third.
The techniques of model theory are adapted to build non-procedural
specifications of retrievability relations, which determine what
sentences are retrievable from what KB's. Model-theoretic
specifications are presented for four retrievers, each extending
the capabilities of the previous one. Each is accompanied by a
rigorous investigation into its properties, and a presentation of
an efficient, terminating algorithm that can be proved to meet the
specification.
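To make the no-chaining restriction concrete, here is a hypothetical sketch (not Frisch's actual retriever): each KB quantum is matched against the query in isolation, so nothing that would require combining two quanta is ever retrieved.

```python
# Sketch of "retrieval as inference with a no-chaining restriction":
# the retriever pattern-matches the query against each KB quantum
# separately.  The relation names and facts are invented for the example.

kb = [("father", "tom", "bob"), ("father", "bob", "ann")]

def retrieve(query, kb):
    # None acts as a wildcard; a quantum matches only on its own.
    def matches(q, fact):
        return len(q) == len(fact) and all(
            a is None or a == b for a, b in zip(q, fact))
    return [f for f in kb if matches(query, f)]

retrieve(("father", "tom", None), kb)   # matches a single quantum
# A "grandfather" query would need to chain two quanta, so it fails.
```

Under this restriction the retriever is indistinguishable from a pattern matcher, which is the intuition the talk starts from.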
------------------------------
Date: Wed, 9 Apr 86 15:01 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Ontology and Efficiency in a Belief Reasoner (UPenn)
Forwarded From: Bonnie Webber <Bonnie@UPenn>
Forwarded From: Glenda Kent <Glenda@UPenn>
ONTOLOGY AND EFFICIENCY IN A BELIEF REASONER
Anthony S. Maida
Department of Computer Science
Penn State University
This talk describes the implementation of, and theoretical influences
underlying, a belief reasoner called the "Belief Space Engine." A belief
reasoner is a program that reasons about the "beliefs" of other agents. The
Belief Space Engine uses specialized data structures, called belief spaces, to
compute a certain class of inferences about the beliefs of other agents
efficiently. Theoretically, the architecture is motivated by a syntactic
simulation ontology, which is an alternative to the possible-worlds ontology.
In order to encode this ontology, a meta description facility has been
implemented.
This talk is organized as follows. First, we explain the semantic difficulties
with belief reasoning that stem from interactions between belief, equality, and
quantification. Next, we argue for the sufficiency of the syntactic simulation
ontology to address the difficulties we described. Then we show how the
ontology is partially embodied in the Belief Space Engine. Finally, we show
that the Belief Space Engine is robust in this domain by programming several
examples.
Thursday, April 10, 1986
Room 216 - Moore School
3:00 - 4:30 p.m.
Refreshments Available
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Probabilistic Inference: Theory and Practice (SMU)
Title: Probabilistic Inference: Theory and Practice
Speaker: Won D. Lee
University of Illinois at Urbana-Champaign
Location: 315SIC
Time: 2:00 PM
This talk presents a system and a methodology for probabilistic learning
from examples.
First, I present a new methodology, the Probabilistic Rule Generator
(PRG), for variable-valued logic synthesis, which can be applied
effectively to noisy data. Then I define a new system, Probabilistic
Inference, which can generate concepts with limited time and/or
resources, and discuss how PRG can serve as a practical tool for
Probabilistic Inference.
A departure from the classical viewpoint in logic minimization, and in
knowledge acquisition is reported.
------------------------------
Date: Wed, 9 Apr 86 19:52:30 PST
From: cottrell@nprdc.arpa (Gary Cottrell)
Subject: Conference - Southern California AI Conference Program
Southern California Conference on Artificial Intelligence
Saturday, April 26, 1986
Peterson Hall
UCSD
Sponsored by San Diego SIGART and SCAIS
9:00am Registration Desk Opens
10:00am-12:00pm Invited Overviews
10:00am-10:25am AI Environment and Research at UCLA
Michael G. Dyer and Josef Skrzypek, UCLA AI Lab
10:30am-10:55am AI Research at USC
Peter Norvig, USC
11:00am-11:25am Parallel Distributed Processing:
Explorations in the Microstructure of Cognition
David E. Rumelhart, Institute for Cognitive Science, UCSD
11:30am-11:55am Human Computer Interaction: Research at the
Intelligent Systems Group
Jim Hollan, Intelligent Systems Group, UCSD
12:00-1:00 Buffet Lunch
1:00pm-3:00pm SCAIS Session I: Expert Systems
1:00pm-1:15pm RAMBOT: A connectionist expert system that
learns by example
Michael C. Mozer, Institute for Cognitive Science, UCSD
1:20pm-1:35pm A small expert system that learns
George S. Levy, Counseling and Consulting Associates, San Diego
1:40pm-1:55pm A knowledge based selection system
Xi-an Zhu, Dept. of Electrical Engineering, USC
2:00pm-2:15pm STYLE Counselor: An expert system to select ties
Jeffrey Blake, Peter Tenereillo, and Jeff Wicks
Department of Mathematical Sciences, SDSU
2:20pm-2:35pm A health and nutrition expert system
Marwan Yacoub, Department of Mathematical Sciences, SDSU
2:40pm-2:55pm An inexact reasoning scheme based on intervals
of probabilities
Koenraad Lecot, Computer Science Dept., UCLA
1:00pm-3:00pm SCAIS Session 2: Vision and Natural Language
1:00pm-1:15pm A Scheme-based PC vision workstation
Michael Stiber and Josef Skrzypek, CS Dept., UCLA and CRUMP Inst.
1:20pm-1:35pm Early Vision: 3-D silicon solution to
lightness constancy
Paul C. H. Lin and Josef Skrzypek, CS Dept., UCLA and CRUMP Inst.
1:40pm-1:55pm A connectionist computing architecture for
textural segmentation
Edmond Mesrobian and Josef Skrzypek, CS Dept., UCLA and CRUMP Inst.
2:00pm-2:15pm ANIMA: Analogical Image Analysis
Arthur Newman, Computer Science Dept., UCLA
2:20pm-2:35pm Representing pragmatic knowledge in lexical
memory
Michael Gasser, Artificial Intelligence Laboratory, UCLA
2:40pm-2:55pm The role of mental spaces in establishing
universal principles for the semantic interpretation of
cliches
Michelle Gross, Linguistics Dept., UCSD
1:00pm-3:00pm SIGART Session 1
1:00pm-1:25pm Using commonsense knowledge for prepositional
phrase attachment
K. Dahlgren, IBM
1:30pm-1:55pm Social Intelligence
Les Gasser, Computer Science Dept., USC
2:00pm-2:25pm A unified algebraic theory of logic and
probability
Philip Calabrese, LOGICON
2:30pm-2:55pm Learning while searching in constraint-
satisfaction problems
Rina Dechter, Hughes AI Center & Cognitive Systems Lab, UCLA
3:00-3:30 Coffee Break
3:30pm-5:30pm SCAIS Session 3: Connectionist Models & Learning
3:30pm-3:45pm Toward optimal parameter selection in the
back-propagation algorithm
Yves Chauvin, Institute for Cognitive Science, UCSD
3:50pm-4:05pm Inverting a connectionist network mapping by
back-propagation of error
Ron Williams, Institute for Cognitive Science, UCSD
4:10pm-4:25pm Learning internal representations from gray scale images
Gary Cottrell and Paul Munro, Institute for Cognitive Science, UCSD
4:30pm-4:45pm Decomposition in perceptron systems
Rik Verstraete, Computer Science Dept., UCLA
4:50pm-5:05pm Adaptive Self-Organizing Logic Networks
Tony Martinez, ***
5:10pm-5:25pm Human understanding in diverse environments
Louis Rossi, Harvey Mudd College
3:30pm-5:30pm SCAIS Session 4: Miscellaneous
(HMI, Planning, Problem Solving, Knowledge Representation)
3:30pm-3:45pm Producing coherent interactions in a tutoring system
Balaji Narasimhan, Computer Science Dept., USC
3:50pm-4:05pm AQUA: An intelligent UNIX advisor
Alex Quilici, Artificial Intelligence Laboratory, UCLA
4:10pm-4:25pm Errors in parsing problem descriptions
Eric Hestenes, Problem Solving Group, UCSD
4:30pm-4:45pm Constraint based problem solving
Mitchell Saywitz, Computer Science Dept., USC
4:50pm-5:05pm An approach to planning and scheduling for
robot assembly lines
Xiaodong Xia, Computer Science Dept., USC
5:10pm-5:25pm Changes of mind: Revision of "interpretation"
in episodic memory
Antoine Cornuejols, Computer Science Dept. UCLA
3:30pm-5:30pm SIGART Session 2
3:30pm-3:55pm Facilitating parametric analyses with AI
methodologies
N. T. Gladd, JAYCOR
4:00pm-4:25pm Computer Chess: Arguments and examples for a
knowledge-based approach
Danny Kopec, Dept. of Mathematical Sciences, SDSU
4:30pm-4:55pm Artificial Intelligence applications in
information retrieval
Mark Chignell, Dept. of Industrial & Systems Engineering, USC
------------------------------
End of AIList Digest
********************
∂11-Apr-86 0355 LAWS@SRI-AI.ARPA AIList Digest V4 #77
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Apr 86 03:55:16 PST
Date: Thu 10 Apr 1986 22:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #77
To: AIList@SRI-AI
AIList Digest Friday, 11 Apr 1986 Volume 4 : Issue 77
Today's Topics:
Bibliographies - AI Subject Codes & Report Sources &
Technical Reports #1
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: AI Subject Codes
The following is a list of subject codes that are being put in the %K
field of all bibliographies going out to AILIST. Hopefully, this will
be of assistance to people in finding material on their favorite
subfield of artificial intelligence. This searching is best done
with the bib or refer utilities but could be done less conveniently
with more general-purpose utilities.
For example, if one is interested in applications of expert systems to
electrical engineering one would search for AI01 and AA04.
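For example, awk's paragraph mode treats each blank-line-separated refer record as one unit, so a general-purpose search for both codes might look like this (the file name and entries below are invented for illustration):

```shell
# Build a small refer-format sample; real entries arrive in the digest.
cat > sample.refer <<'EOF'
%A Jane Doe
%T An Expert System for Circuit Diagnosis
%K AI01 AA04

%A John Roe
%T Notes on Vision
%K AI06
EOF

# RS="" puts awk in paragraph mode: each record is tested as a whole,
# so a record prints only if it mentions both AI01 and AA04.
awk 'BEGIN { RS = ""; ORS = "\n\n" } /AI01/ && /AA04/' sample.refer
```

A plain grep would test one line at a time and so could not easily require both codes in the same record, which is why record-oriented tools are more convenient here.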
______
AI areas
AI01 Expert Systems, Rule Based Systems
AI02 Natural Language
AI03 Search (Minimax, Consistent Labelling, alpha-beta, etc.)
AI04 Learning
AI05 Speech Understanding
AI06 Vision, Pattern Recognition
AI07 Robotics
AI08 Cognitive Science
AI09 Planning
AI10 Logic Programming (material on prolog only will be under T02)
AI11 Theorem Proving
AI12 Neural Networks, Genetic Algorithms, etc.
AI13 Decision Support
AI14 Symbolic Math
Application Areas
AA01 Medicine
AA02 Chemistry
AA03 Geology, Mineral Extraction, Petroleum Extraction
AA04 Electrical Engineering
AA05 Other Engineering, Unclassifiable Engineering
AA06 Financial, Business, Marketing, Accounting, Etc.
AA07 Education
AA08 Software Engineering, Automatic Programming, Computer Configuration
and Operation
AA09 Data Bases
AA10 Biology
AA11 Social Sciences
AA12 Statistics
AA13 Mathematics
AA14 Information Retrieval
AA15 User Interfaces to other Software
AA16 Other Physical Science
AA17 Game Playing
AA18 Military Applications
AA19 Operating Equipment, e.g. pilot's associate, autonomous land vehicle
AA20 Process Control
AA21 Diagnostic and Maintenance Systems (Other than Medical)
AA22 Configuration Systems
AA23 Agriculture
AA24 Legal
AA25 Art, Humanities, Music, Architecture, entertainment etc.
Geographical Areas
GA01 Japan
GA02 United States
GA03 Europe
GA04 Canada
Tools for AI
T01 Lisp
T02 Prolog
T03 Expert System Tools
Hardware for AI
H01 Microcomputers
H02 Lisp Machines
H03 Parallel Processing
H04 Supercomputers, e.g. Crays
Other Areas
O01 User Interfaces for AI systems
O02 Software Engineering Issues in the Construction of AI programs
O03 Real Time
O04 Fuzzy Logic, Uncertainty Issues, etc.
O05 Social Aspects of AI
Article Types
AT01 Advertisements
AT02 Product Announcements
AT03 Examples of AI Hype
AT04 Market Predictions
AT05 Interviews with Executives of Companies
AT06 Other Interviews
AT07 Book Reviews
AT08 Tutorial Articles
AT09 Bibliography
AT10 Announcements of Company-University Interactions
AT11 New Bindings
AT12 Letters to the Editors
AT13 Corrections
AT14 Pronouncements of Famous People
AT15 BOOK
AT16 Company Business, e.g. new financing, revenue announcements,
joint marketing agreements, etc.
AT17 Software Reviews
AT18 Articles on AI topic education
AT19 Notes about Grantsmanship and Research Milieu type issues
AT20 History of AI topics
AT21 Bibliography
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Report Sources
Naomi Schulman, Publications
COMPUTER SYSTEMS LABORATORY
STANFORD UNIVERSITY
Stanford, CA 94305
UCLA COMPUTER SCIENCE DEPARTMENT
University of California
3713 Boelter Hall
Los Angeles, CA 90024
Cindy Hathaway, Technical Reports Secretary
Computer Science Department, Louisiana State University
Baton Rouge, Louisiana 70803, or cindy@lsu on CSNET
California Institute of Technology
Computer Science, 256-80
Pasadena California 91125
Electrical Engineering and Computer Science Departments
Stevens Institute of Technology
Castle Point Station
Hoboken, New Jersey 07030
Computer Science Department
University of Rochester
Rochester, New York 14627
Ms. Sally Goodall
Technical Reports Librarian
Computer Science Department
SUNY Albany LI 67A
Albany, New York 12222
Technical Reports
Department of Computer Science
Campus Box 1045
Washington University
St. Louis, Missouri 63130
Department of Computer Science
136 Lind Hall
University of Minnesota, Twin Cities
207 Church Street SE
Minneapolis, Minnesota 55455
IBM T.J. Watson Research Center
Distribution Services, F-11, Stormytown
P.O. Box 218
Yorktown Heights, NY 10598
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #1
%A Bruce Abramson
%T A Cure for Pathological Behavior in Games that use Minimax
%R CUCS-153-85
%I Columbia University
%C New York City
%K AI03 AA17
%A Peter K. Allen
%A Ruzena Bajcsy
%T Integrating Sensory Data for Object Recognition Tasks
%R CUCS-184-85
%I Columbia University
%C New York City
%K AI06
%A Peter Kirby Allen
%T Object Recognition Using Vision and Touch
%R CUCS-220-85
%I Columbia University
%C New York City
%K AI06 AI07
%A Terrance E. Boult
%T Reproducing Kernels for Visual Surface Interpolation
%R CUCS-186-85
%I Columbia University
%C New York City
%K AI06
%A Terrance E. Boult
%T Visual Surface Interpolation: A Comparison of Two Methods
%R CUCS-189-85
%I Columbia University
%C New York City
%K AI06
%A Galina Datskovsky
%T Menu Interfaces to Expert Systems: Overview and Evaluation
%R CUCS-168-84
%I Columbia University
%C New York City
%K O01 AI01
%A Galina Datskovsky
%T Natural Language Interfaces to Expert Systems
%R CUCS-169-85
%I Columbia University
%C New York City
%K AI01 O01 AI02
%A Thomas Ellman
%T Generalizing Logic Circuit Designs by Analyzing Proofs of Correctness
%R CUCS-190-85
%I Columbia University
%C New York City
%K AA04
%A Bruce K. Hillyer
%A David Elliot Shaw
%T Execution of OPS5 Production Systems on a Massively Parallel Machine
%R CUCS-147-84
%I Columbia University
%C New York City
%K AI01 H03
%A Bruce K. Hillyer
%T A Knowledge-Based Expert Systems Primer and Catalog
%R CUCS-195-85
%I Columbia University
%C New York City
%K AI01 AT08
%A Hussaein A. H. Ibrahim
%A John R. Kender
%A David Elliot Shaw
%T On the Application of Massively Parallel SIMD Tree Machines
to Certain Intermediate-Level Vision Tasks
%I Columbia University
%C New York City
%R CUCS-221-85
%K AI06 H03
%A Husein A. H. Ibrahim
%A John R. Kender
%A David Elliot Shaw
%T SIMD Tree Algorithms for Image Correlation
%R CUCS-222-86
%I Columbia University
%C New York City
%K AI06 H03
%A Toru Ishida
%A Salvatore Stolfo
%T Towards the Parallel Execution of Rules in Production Systems Programs
%R CUCS-154-84
%I Columbia University
%C New York City
%K H03 AI01
%A John R. Kender
%A David Lee
%A Terrance Boult
%T Information Based Complexity Applied to Optimal Recovery of the 2 1/2-D
Sketch
%R CUCS-170-85
%I Columbia University
%C New York City
%K AI06
%A Richard E. Korf
%T Macro-Operators: A Weak Method for Learning
%R CUCS-156-85
%I Columbia University
%C New York City
%K AI04
%A Richard E. Korf
%T Depth-First Iterative-Deepening: An Optimal Admissible Tree Search
%R CUCS-197-85
%I Columbia University
%C New York City
%K AI03
%A Michael Lebowitz
%T The Use of Memory in Text Processing
%R CUCS-200-85
%I Columbia University
%C New York City
%K AI02 Researcher Patent
%A Michael Lebowitz
%T Integrated Learning: Controlling Explanation
%R CUCS-201-85
%I Columbia University
%C New York City
%K AI04 Unimem
%A Michael Lebowitz
%T Story Telling and Generalizations
%R CUCS-202-85
%I Columbia University
%C New York City
%K AI02
%A Michael Lebowitz
%T Researcher: An Experimental Intelligent Information System
%R CUCS-171-85
%I Columbia University
%C New York City
%K AI02 AA14
%A David Lee
%T Contributions to Information-Based Complexity, Image Understanding,
and Logic Circuit Design
%R CUCS-182-85
%I Columbia University
%C New York City
%K AI06 AA04
%A David Lee
%T Optimal Algorithms for Image Understanding: Current Status and Future Plans
%R CUCS-183-85
%I Columbia University
%C New York City
%K AI06
%A Mark D. Lerner
%A Michael von Biema
%A Gerald Q. Maguire, Jr.
%R CUCS-146-85
%I Columbia University
%C New York City
%K PSL PPSL H03 T01
%A Mark D. Lerner
%A Gerald Q. Maguire, Jr.
%A Salvatore J. Stolfo
%T An Overview of the DADO Parallel Computer
%R CUCS-157-85
%I Columbia University
%C New York City
%K H03 AI01 T03 AI10
%A Andy Lowery
%A Stephen Taylor
%A Salvatore J. Stolfo
%T LPS Algorithms
%R CUCS-203-84
%I Columbia University
%C New York City
%K AI10 H03
%A Kevin Matthews
%T Taking the Initiative for System Goals in Cooperative Dialogue
%R CUCS-150-85
%I Columbia University
%C New York City
%K advisor AI02
%A Kevin Matthews
%T Initiatory and Reactive System Roles in Human Computer Discourse
%R CUCS-151-85
%I Columbia University
%C New York City
%K advisor AI02
%A Kathleen R. McKeown
%A Myron Wish
%A Kevin Matthews
%T Tailoring Explanations for the User
%R CUCS-172-85
%I Columbia University
%C New York City
%K AI01 AI02 O01
%A Kathleen R. McKeown
%T The Need for Text Generation
%R CUCS-173-85
%I Columbia University
%C New York City
%K AI01 AI02 O01
%A Kathleen R. McKeown
%T Discourse Strategies for Generating Natural Language Text
%R CUCS-204-85
%I Columbia University
%C New York City
%K AI02 AA09
%A Mark L. Moerdler
%A John R. Kender
%T Surface Orientation and Segmentation from Perspective Views of
Parallel-Line Textures
%R CUCS-159-85
%I Columbia University
%C New York City
%K AI06
%A Luanne Burns
%A Alexander Pasik
%T A Generic Framework for Expert Data Analysis Systems
%R CUCS-163-85
%I Columbia University
%C New York City
%K AI01 AA09
%A Alexander Pasik
%A Jans Christensen
%A Douglas Gordin
%A Agata Stancato-Pasik
%A Salvatore Stolfo
%T Explanation and Acquisition in Expert System Using Support Knowledge
%R CUCS-164-85
%I Columbia University
%C New York City
%K AI01 AA01 DTEX
%A K. S. Roberts
%T Equivalent Descriptions of Generalized Cylinders
%R CUCS-210-85
%I Columbia University
%C New York City
%A Salvatore J. Stolfo
%A Daniel P. Miranker
%T The DADO Production System Machine
%R CUCS-213-84
%I Columbia University
%C New York City
%K H03 AI01
%A Salvatore J. Stolfo
%A Daniel M. Miranker
%A Russel C. Mills
%T More Rules May Mean Faster Parallel Execution
%I Columbia University
%C New York City
%R CUCS-175-85
%K RETE H03 AI01
%A Salvatore J. Stolfo
%A Daniel M. Miranker
%A Russel C. Mills
%T A Simple Preprocessing Scheme to Extract and Balance Implicit
Parallelism in the Concurrent Match of Production Rules
%R CUCS-174-85
%I Columbia University
%C New York City
%K H03 AI01 RETE T02 AI10
%A Peter Waldes
%A Janet Lustgarten
%A Salvatore J. Stolfo
%T Are Maintenance Expert Systems Practical Now?
%R CUCS-166-85
%I Columbia University
%C New York City
%K AI01 AA04 Automated Cable Expert Telephone AA21
%A J. F. Traub
%T Information Based Complexity
%R CUCS-162-85
%I Columbia University
%C New York City
%K AI03
%X Information-based complexity is based on the assumption that
information is partial, contaminated, and costly, in contrast to
ordinary complexity theory, in which information is complete, exact,
and free. [There were several reports on this subject. I am only
including one in this bibliography as it is not clear whether it is
related to AI or not. Contact Columbia for more info if desired. LEFF]
%A Kenneth Hal Wasserman
%T Unifying Representation and Generalization: Understanding
Hierarchically Structured Objects
%R TCUCS-177-85
%I Columbia University
%C New York City
%K AA06
%X describes a system to understand upper-level corporate management
hierarchies
%A Ursula Wolz
%T Analyzing User Plans to Produce Informative Responses
by a Programmers' Consultant
%R CUCS-218-85
%I Columbia University
%C New York City
%K AA08 AI02 AI09 AA15
%A Othar Hansson
%A Andrew E. Mayer
%A Mordechai M. Yung
%T Generating Admissible Heuristics by Criticizing Solutions to Relaxed
Models
%R CUCS-219-85
%I Columbia University
%C New York City
%K AI03
%A Carolyn L. Talcott
%T The Essence of Rum: A Theory of the Intensional and Extensional Aspects of
Lisp-type Computation
%D AUG 1985
%R STAN-CS-85-1060
%I Stanford University Computer Science
%K AI11 T01
%X $9.50
%A David E. Smith
%A Michael R. Genesereth
%T Controlling Recursive Inference
%D JUN 1985
%R STAN-CS-85-1063
%I Stanford University Computer Science
%K AI11
%X $3.75
%A Matthew L. Ginsberg
%T Decision Procedures
%D MAY 1985
%R STAN-CS-85-1064
%I Stanford University Computer Science
%K H03
%X The assumption of common rationality is provably optimal (in a
formal sense) and enables us to characterize precisely the
communication needs of the participants in multi-agent interactions.
.br
$2.75
%A William J. Clancey
%T Review of Sowa's Conceptual Structures
%D MAR 1985
%R STAN-CS-85-1065
%I Stanford University Computer Science
%K AT07
%X $2.75
%A William J. Clancey
%T Heuristic Classification
%D JUN 1985
%R STAN-CS-85-1066
%I Stanford University Computer Science
%K AI01
%X $4.75
%A William J. Clancey
%T Acquiring, Representing, and Evaluating a Competence Model of Diagnostic
Strategy
%D AUG 1985
%R STAN-CS-85-1067
%I Stanford University Computer Science
%K AI01 AA01
%X $4.95
%A Mark H. Richer
%A William J. Clancey
%T Guidon-Watch: A Graphic Interface for Viewing a Knowledge-Based System
%D AUG 1985
%R STAN-CS-85-1068
%I Stanford University Computer Science
%K AI01 O01
%X $2.75
%A John D. Hobby
%T Digitized Brush Trajectories
%D SEP 1985
%R STAN-CS-85-1070
%I Stanford University Computer Science
%K AI06
%X $5.75
%A Russel Greiner
%T Learning by Understanding Analogies
%D SEP 1985
%R STAN-CS-85-1071
%I Stanford University Computer Science
%K AI04
%X $15.00
%A Bruce G. Buchanan
%T Expert Systems: Working Systems and The Research Literature
%D OCT 1985
%R STAN-CS-85-1075
%K AI01
%X $3.00
%A Deepinder P. Sidhu
%T Protocol Verification Using Prolog
%D JUL 1985
%R TR #85-21
%I Iowa State University
%K AA08 Communications ISO/OSI T02
%A M. Attisha
%A M. Yazdani
%T A Microcomputer-based Tutor for Teaching Arithmetic Skills
%D 1983
%I Department of Computer Science, University of Exeter, UK
%K H01 AA07 GA03
%A M. Attisha
%A M. Yazdani
%T An Expert System for Diagnosing Children's Multiplication Errors
%D 1983
%I Department of Computer Science, University of Exeter, UK
%K H01 AI01 AA07 GA03
%A A. Attisha
%T A Microcomputer-based Tutoring System for Self-Improving and Teaching
Techniques in Arithmetic Skills
%D 1983
%I Department of Computer Science, University of Exeter, UK
%K H01 AA07 PET Non Borrow Subtraction Algorithm Buggy Debuggy GA03
%A M. Yazdani
%T Artificial Intelligence and Education
%D 1984
%I Department of Computer Science, University of Exeter, UK
%R Research Report NO. R122
%K H01 AI01 AA07 T02 GA03
%A J. Barchan
%A B. Woodmansee
%A M. Yazdani
%T A PROLOG-based tool for French Grammar Analysis
%I Department of Computer Science, University of Exeter, UK
%R Research Report No. R126
%K AI02 GA03 FROG AA07 T02
%A M. Yazdani
%T Intelligent Tutoring Systems: An Overview
%I Department of Computer Science, University of Exeter, UK
%R Working Paper No. W. 136
%K AI01 AA07 GA03
%A M. Yazdani
%T Artificial Intelligence, Powerful Ideas and Education
%I Department of Computer Science, University of Exeter, UK
%R Working Paper No. W 138
%K Computer Assisted Learning AA07 GA03
%A Bruce Abramson
%T An Explanation of and Cure for Minimax Pathology
%R CSD-850034
%I University of California, Los Angeles
%K AA17 AI03
%X The minimax procedure has long been the standard method of evaluating nodes
in game trees. The general assumption underlying its use in game-playing
programs is that increasing search depth improves play. Recent work has
shown that this assumption is not always valid; for a large class of games
and evaluation functions, searching deeper decreases the probability of
making a correct move. This phenomenon is called game tree pathology.
Two structural properties of game trees have been suggested as causes of
pathology: independence among the values of sibling nodes, and uniform
depth of wins and losses. This paper examines the relationship between
uniform win depth and pathology from two angles. First, it
proves mathematically that as search deepens,
an evaluation function that does not ask
whether wins can be forced from mid-game positions becomes decreasingly
likely to choose forced wins. Second, it experimentally illustrates the
connection between recognition of mid-game wins and pathological behavior.
Two evaluation functions, which differ only in their ability to recognize
wins in mid-game, are run on a series of games. Despite recognizing fewer
mid-game wins than the theoretically predicted minimum needed to avoid
pathology, the function that checked for them cleared up the pathological
behavior of the one that did not.
The analytic and empirical aspects of this paper combine to form one major
result: As search deepens, so does the probability that failing to check
for forced wins will change the game's outcome. This strengthens
the hypothesis that uniform win depth is the cause of pathology.
$1.50
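The minimax procedure this abstract builds on can be sketched briefly. This is a minimal illustration, not code from the report; the nested-list tree representation and leaf values are invented for the example.

```python
# Minimal minimax sketch: a game tree is either a leaf value or a list
# of subtrees. The maximizing and minimizing players alternate by depth.
def minimax(tree, maximizing=True):
    if not isinstance(tree, list):      # leaf: static evaluation
        return tree
    values = [minimax(sub, not maximizing) for sub in tree]
    return max(values) if maximizing else min(values)

# Hypothetical 2-ply tree: a max root over two min nodes.
tree = [[3, 5], [2, 9]]
print(minimax(tree))   # max(min(3, 5), min(2, 9)) = max(3, 2) = 3
```

The pathology result concerns how the reliability of this backed-up value behaves as the recursion is carried to greater depths.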
%A Michael Dyer
%A Alex Quilici
%T Human Problem Understanding and Advice Giving: A Computer Model
%R CSD-850039
%I University of California, Los Angeles
%K AA15 AI02 Aqua Unix Advisor AI09
%X How are people able to understand someone else's problem and provide them
with advice? How are people able to develop novel solutions to problems
they have never seen before? The thesis presented here is a first step toward
answering these questions, presenting a computer model of the process
of problem understanding and advice giving. The problems we consider are
typical planning problems that novice computer users encounter.
We view advice giving as a memory search problem, guided by heuristics for
problem understanding, advice generation, and plan creation. In this
thesis we describe a representational system for user planning problems,
show how advice can be generated using a taxonomy of planning problems and
associated heuristics for advice formulation, present heuristics that can
be used to repair failed plans and to create new plans by combining existing
plans in novel ways, and suggest a memory organization for planning
knowledge that allows for efficient retrieval of relevant planning experiences.
The theory discussed in this thesis is implemented in a computer program
called AQUA (Alex Quilici's UNIX Advisor). AQUA takes natural language
descriptions of problems users are having with the UNIX operating system
and provides natural language advice that explains their failures and suggests
solutions. AQUA is also able to create solutions for problems that
it has not been presented with before.
$7.75
%A Judea Pearl
%A Azaria Paz
%T Graphoids: A Graph-Based Logic for Reasoning About Relevance Relations
%I University of California, Los Angeles
%R CSD-850038
%K AI11
%X We consider 3-place relations I (x,z,y) where, x,y, and z are three
non-intersecting sets of elements, (e.g., propositions), and I (x,z,y)
stands for the statement: "Knowing z renders x irrelevant to y". We give
sufficient conditions on I for the existence of a (minimal) graph G such
that I (x,z,y) can be validated by testing whether z separates x from y
in G. These conditions define a GRAPHOID.
The theory of graphoids uncovers the axiomatic basis of probabilistic
dependencies and ties it to vertex-separation conditions in graphs. The
defining axioms can also be viewed as inference rules for deducing which
propositions are relevant to each other, given a certain state of
knowledge.
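The vertex-separation test at the heart of this abstract is simple to state computationally: I(x,z,y) holds in graph G when deleting the nodes of z disconnects every node of x from every node of y. The sketch below is an illustration of that test only, not code from the report; the adjacency-dict graph and node sets are invented.

```python
# Does removing the set z disconnect every node in x from every node in y?
# Breadth-first search from x through the graph with z's vertices blocked.
from collections import deque

def separates(graph, x, z, y):
    blocked = set(z)
    frontier = deque(set(x) - blocked)
    seen = set(frontier)
    while frontier:
        node = frontier.popleft()
        if node in y:
            return False            # path from x to y that avoids z
        for nbr in graph.get(node, ()):
            if nbr not in blocked and nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return True

# Chain a - b - c - d: b alone blocks every path from a to d.
g = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(separates(g, {'a'}, {'b'}, {'d'}))   # True
print(separates(g, {'a'}, set(), {'d'}))   # False
```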
------------------------------
End of AIList Digest
********************
∂11-Apr-86 0611 LAWS@SRI-AI.ARPA AIList Digest V4 #78
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Apr 86 06:11:07 PST
Date: Thu 10 Apr 1986 23:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #78
To: AIList@SRI-AI
AIList Digest Friday, 11 Apr 1986 Volume 4 : Issue 78
Today's Topics:
Bibliography - Technical Reports #2
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #2
%A C. V. Srinivasan
%T Knowledge Processing Versus Programming: CK-LOG vs PROLOG
%R DCS-TR-160
%I Rutgers University Laboratory for Computer Science
%K AI10 CK-LOG T02
%A B. A. Nadel
%T The Consistent Labeling Problem, Part 1: Background and Problem Formulation
%R DCS-TR-164
%I Rutgers University Laboratory for Computer Science
%A B. A. Nadel
%T The Consistent Labeling Problem, Part 2: Subproblems, Enumerations
and Constraint Satisfiability
%R DCS-TR-165
%I Rutgers University Laboratory for Computer Science
%A B. Nadel
%T The Consistent Labeling Problem, Part 3:
The Generalized Backtracking Algorithm
%R DCS-TR-166
%I Rutgers University Laboratory for Computer Science
%A B. A. Nadel
%T The Consistent Labeling Problem, Part 4: The Generalized
Forward Checking and Word-Wise Forward Checking Algorithms
%R DCS-TR-167
%I Rutgers University Laboratory for Computer Science
%K AI03
%A T. M. Mitchell
%A R. M. Keller
%A S. T. Kedar-Cabelli
%T Explanation-Based Generalization: A Unifying View
%R ML-TR-2
%I Rutgers University Laboratory for Computer Science
%K analogy
%A S. T. Kedar-Cabelli
%T Analogy - From a Unified Perspective
%R ML-TR-3
%I Rutgers University Laboratory for Computer Science
%A R. M. Keller
%A S. T. Kedar-Cabelli
%T Machine Learning Research at Rutgers University
%R ML-TR-4
%I Rutgers University Laboratory for Computer Science
%K AI04
%X A collection of research summaries on machine learning from Rutgers University
%A R. Kurki-Suonio
%T Towards Programming with Knowledge Expressions
%I Carnegie Mellon Computer Science
%K H03
%D AUG 1985
%A J. Laird
%A P. Rosenbloom
%A A. Newell
%T Chunking in Soar: The Anatomy of a General Learning Mechanism
%I Carnegie Mellon Computer Science
%D SEP 1985
%K AI04
%A B. D. Lucas
%T Generalized Image Matching by the Method of Differences
%D JUL 1984
%I Carnegie Mellon Computer Science
%K AI06
%A J. B. Saxe
%T Decomposable Searching Problems and Circuit Optimization by Retiming:
Two Studies in General Transformations of Computational Structures
%D AUG 1985
%I Carnegie Mellon Computer Science
%K AA04 AI03
%A E. S. Cohen
%A E. T. Smith
%A L. A. Iverson
%T Constraint-Based Tiled Windows
%D OCT 1985
%I Carnegie Mellon Computer Science
%K AA15
%A A. Hisgen
%T Optimization of User-Defined Abstract Data Types: A Program Transformation
Approach
%D SEP 1985
%I Carnegie Mellon Computer Science
%K AA08
%A A. J. Kfoury
%A Pawel Urzyczyn
%T Necessary and Sufficient Conditions for the Universality of Programming
Formalisms
%D MAY 1985
%R 85-007
%I Boston University Computer Science Department
%K AA08
%X $4.00
%A Bipin Indurkhya
%T Constrained Semantic Transference: A Formal Theory of Metaphors
%D JUN 1985
%R 85-008
%I Boston University Computer Science Department
%K AI02
%X $3.00
%A Bipin Indurkhya
%T Approximate Semantic Transference: A Computational Theory of Metaphors
and Analogies
%D OCT 1985
%R BUCS 85-012
%I Boston University Computer Science Department
%K AI02 AI11
%X $3.00
%A Weiguo Wang
%T Computational Linguistics Technical Notes
%D NOV 1985
%R BUCS 85-013
%I Boston University Computer Science Department
%K AI02
%X $3.00
%A Gerhart
%T A Test Data Generation Method Using Prolog
%R TR-85-02
%I Wang Institute of Graduate Studies
%K AA08
%A Velasco
%T Computer Vision and Image Understanding
%R TR-85-09
%I Wang Institute of Graduate Studies
%K AI06
%A Gerhart
%T Software Engineering Perspectives on Prolog
%R TR-85-13
%I Wang Institute of Graduate Studies
%K T02 O02
%A Gerhart
%T A Detailed Look at Some Prolog Code: A Course Scheduler
%R TR-85-14
%I Wang Institute of Graduate Studies
%K O02 T02
%A Gerhart
%T Several Prolog Packages
%R TR-85-15
%I Wang Institute of Graduate Studies
%K T02
%A Van Nguyen
%A David Gries
%A Susan Owicki
%R CSL T.R. 85-270
%T A Model and Temporal Proof System for Networks of Processes
%D February 1985
%I Stanford University Computer Systems Laboratories
%K AI11 AA08
%X 12 pages.....$2.40
.br
A model and a sound and complete proof system for networks of
processes in which component processes communicate exclusively through
messages are given. The model, an extension of the trace model, can
describe both synchronous and asynchronous networks. The proof system
uses temporal-logic assertions on sequences of observations - a
generalization of traces. The use of observations (traces) makes the
proof system simple, compositional and modular, since internal details
can be hidden. The expressive power of temporal logic makes it
possible to prove temporal properties (safety, liveness, precedence,
etc.) in the system. The proof system is language-independent and
works for both synchronous and asynchronous networks.
%A W. E. Cory
%T Verification of Hardware Design Correctness: Symbolic Execution Techniques
and Criteria for Consistency
%R TR 83-241
%I Stanford University Computer Systems Laboratory
%X 118 pages, $6.15
%A S. Demetrescu
%T High Speed Image Rasterization Using a Highly Parallel Smart
Bulk Memory
%R TR 83-244
%I Stanford University Computer Systems Laboratory
%K AI06 H03
%X 38 pages $3.40
%A A. L. Lansky
%A S. S. Owicki
%T GEM: A Tool for Concurrency Specification and Verification
%R TR 83-251
%I Stanford University Computer Systems Laboratory
%K AI11 AA08
%X 16 pages $2.55
%H TR84-018
%A Krzysztof J. Kochut
%T UW LISP Manual
%R LSU Computer Science Technical Report 84-018
%K T01
%H TR84-025
%A E. T. Lee
%T Application of Fuzzy Languages to Medical Pattern Recognition
%R LSU Computer Science Technical Report TR84-025
%K AI06
%H TR84-026
%A E. T. Lee
%T Similarity Directed Chromosome Image Processing
%R LSU Computer Science Technical Report TR84-026
%K AI06
%H TR84-029
%A S. S. Iyengar
%A T. Sadler
%A S. Kundu
%T A Technique for Representing a Tree Structure with Predicates
by a Forest Data Structure
%R LSU Computer Science Technical Report TR84-029
%K AI10
%H TR84-030
%A Rajendra T. Dodhiawala
%A George R. Cross
%T A Distributed Problem-Solving Approach to Point Pattern Matching
%R LSU Computer Science Technical Report TR84-030
%K AI06
%H TR84-034
%A W. G. Rudd
%A George R. Cross
%T Design of an Expert System for Insect Pest Management
%R LSU Computer Science Technical Report TR84-034
%K AA23 AI01
%A George R. Cross
%A Ellen R. Foxman
%A Daniel L. Sherrell
%T Using an Expert System to Teach Marketing Strategy
%R LSU Computer Science Technical Report TR85-001
%K AI01 AA06 AA07
%H TR85-003
%A Cary G. deBessonet
%A George R. Cross
%T An Artificial Intelligence Application in the Law:
CCLIPS, A Computer Program that Processes Legal Information
%R LSU Computer Science Technical Report TR85-003
%K AA24
%H TR85-028
%A Sukhamay Kundu
%T A Theory of Multi-Relations for Uncertain Facts
%R LSU Computer Science Technical Report TR85-028
%K O04
%H TR85-032
%A Rajendra T. Dodhiawala
%A George R. Cross
%T Analysis of Cosmic Ray Tracks Using Distributed Problem-Solving
%R LSU Computer Science Technical Report TR85-032
%K AI06
%H TR85-033
%A George R. Cross
%A Cary G. deBessonet
%A Teri Broemmelsiek
%A Glynn Durham
%A Rittick Gupta
%A Mohd Nasiruddin
%T The Implementation of CCLIPS
%R LSU Computer Science Technical Report TR85-033
%H TR85-034
%A Mohd. Nasiruddin
%A M. Srikanth
%A George R. Cross
%T A Confidence Factor Extension to the YAPS Expert System Development Tool
%R LSU Computer Science Technical Report TR85-034
%K O04 T03 AI01
%H TR85-035
%A Cary G. deBessonet
%A George R. Cross
%T Some AI Techniques Used for Decision Making in Conceptual Retrieval
%R LSU Computer Science Technical Report TR85-035
%K AI13
%H TR85-037
%A Cary G. deBessonet
%A George R. Cross
%T Distinguishing Legal Language-Types for Conceptual Retrieval
%R LSU Computer Science Technical Report TR85-037
%K AA24 AA14 AI02
%H TR85-038
%A Zvieli Arie
%T A Fuzzy Relational Calculus
%R LSU Computer Science Technical Report TR85-038
%K O04
%A Eric Mjolsness
%T Neural Networks, Pattern Recognition and Fingerprint Hallucination
%R 5198:TR:85
%X $8.00 PhD Thesis
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%K AI06 AI12
%A B. H. Thompson
%A Frederick B. Thompson
%T Customizing One's Own Interface Using English as a Primary Language
%R 5165:TR:84
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%X $4.00
%K AI02
%A Remy Sanouillet
%T ASK French - A French Natural Language Syntax
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5164:TR:84
%K AI02
%X $13.00 Master's Thesis
%A Michael Newton
%T Combined Logical and Functional Programming Language
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5172:TR:85
%K AI10
%X $6.00
%A Howard Derby
%T Using Logic Programming for Compiling APL
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5134:TR:84
%K AI10
%X $2.00
%A Bozena H. Thompson
%T Linguistic Analysis of Natural Language Communication with Computers
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5128:TM:84
%K AI02
%X $3.00
%A Bozena Thompson
%A Fred Thompson
%T ASK As Window to the World
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%X $3.00
%R 5114:TM:84
%K AI02
%A Alain Martin
%T General Proof Rule for Procedures in Predicate Transformer Semantics
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5075:TR:83
%K AI11 AA08
%X $2.00
%A David Trawick
%T Robust Sentence Analysis and Habitability
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5074:TR:83
%K AI02
%X $10.00
%A Bozena H. Thompson
%A Frederick B. Thompson
%T Introducing ASK, A Simple Knowledge System, Conference on Applied
Natural Language Processing
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5054:TM:82
%K AI02
%X $3.00
%A Bozena Thompson
%A Frederick B. Thompson
%A Tai-Ping Ho
%T Knowledgeable Contexts for User Interaction, Proc Nat'l Computer
Conference
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5051:TM:82
%K AI02
%X $2.00
%A Barry Megdal
%T VLSI Computational Structures Applied to Fingerprint Image Analysis
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 5015:TR:82
%K AI06
%X $15.00
%A Charles R. Lang
%T Concurrent, Asynchronous Garbage Collection Among Cooperating Processors
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 4724:TR:82
%K H03
%X $2.00
%A Sheue-Ling Lien
%T Toward A Theorem Proving Architecture
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 4653:TR:81
%K AI11
%X $10.00 MS Thesis
%A Leonid Rudin
%T Lambda Logic
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 4521:TR:81
%K AI10
%X $8.00 MS THESIS
%A Tzu-mu Lin
%T From Geometry to Logic
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 4298:TR:81
%K AI10 AA13
%X $7.00 MS THESIS
%A Jim Kajiya
%T Toward A Mathematical Theory of Perception
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 4116:TR:79
%K AI08 AI06
%X $25.00 PHD THESIS
%A Fred Thompson
%A B. Thompson
%T Shifting to a Higher Gear in a Natural Language System
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 4128:TM:81
%K AI02
%X $2.00
%A Bozena H. Thompson
%A Frederick Thompson
%T REL System and REL English, REL Report no. 22
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 3999:TM:76
%K AI02
%X $3.00
%A B. H. Thompson
%A Fred B. Thompson
%T Rapidly Extendible Natural Language
%I California Institute of Technology, Computer Science
%C Pasadena, California 91125
%R 3975:TM:80
%K AI02
%X $3.00
%A Garrett M. Odell
%A J. T. Bonner
%T How the Dictyostelium Discoideum Grex Crawls
%R 85-1
%I Computer Science Department, Rensselaer Polytechnic Institute
%K AI07
%A David L. Spooner
%A Michael A. Milicia
%A Donald B. Faatz
%T Modeling Mechanical CAD Data With Data Abstraction and
Object-Oriented Techniques
%R 85-19
%I Computer Science Department, Rensselaer Polytechnic Institute
%K AA05
%A N. Prywes
%A B. Szymanski
%T Programming Supercomputers in an Equational Language
%R 85-24
%I Computer Science Department, Rensselaer Polytechnic Institute
%K AI10 H04
%A N. Prywes
%A Y. Shi
%A J. Tseng
%A B. Szymanski
%T Supersystem Programming with the Model Equational Language
%R 85-26
%I Computer Science Department, Rensselaer Polytechnic Institute
%K AI10 H04
%A Martin Hardwick
%A Lin Kan
%A Goutam Sinha
%A Subhendu Lahiri
%A Zia Mohammed
%A Nisar Yakoob
%T Design and Implementation of a Data Manager for Design Objects
%R 85-34
%I Computer Science Department, Rensselaer Polytechnic Institute
%K AA05
%A D. Nagel
%T Some Considerations on Extracting Definitional Information About
Relations
%R CBM-TM-85
%D APR 1980
%I Rutgers University, Department of Computer Science
%K AI02 AA09 AI04
%X Several of the current systems in Artificial Intelligence are
represented in binary relational databases and rely on the semantics
of relations as a source of knowledge for information retrieval.
Examples of these systems include those developed by Lindsay [5,6],
Raphael [10], Elliott [2], Brown [1], and Sridharan [11]. In these
systems inferences can be made from a set of properties specified for
each relation. Inferences can also be made from specified
associations between relations. One interesting aspect is the degree
to which making these inferences can be automated. Some methods are
proposed in this paper for using machine learning to extract
relational properties and recognize semantic ties between relations so
that this definitional information will not have to be prespecified.
In some cases these methods may not technically be categorized as
learning because they primarily involve summarization. It is also
difficult to pin down what is encompassed by semantics. However, this
paper discusses concepts of learning, and the presentation is directed
at capturing semantics.
.sp 1
Extracting definitional information or more broadly, learning
semantics of relations, provides a base for the study of interesting
databases. This could be done in a symbiotic system where the
interaction between the researcher and the system provides a means for
improving the performance of the system in general and
obtaining new insights in the scientific data. It could also be
coupled with a system for automatic theory formation. Presently,
applications using semantics of relations for making inferences have
been most successful in areas where properties and relationships are
well understood such as kinship relations.
------------------------------
End of AIList Digest
********************
∂11-Apr-86 1031 LAWS@SRI-AI.ARPA AIList Digest V4 #79
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 11 Apr 86 10:27:23 PST
Date: Thu 10 Apr 1986 23:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #79
To: AIList@SRI-AI
AIList Digest Friday, 11 Apr 1986 Volume 4 : Issue 79
Today's Topics:
Bibliography - Technical Reports #3
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #3
%A N. S. Sridharan
%T Representational Facilities of AIMDS: A Sampling
%R CBM-TM-86
%D 1/82
%I Rutgers University, Department of Computer Science
%K AI01
%X The quest for fundamental and general mechanisms of intelligence,
especially problem solving and heuristic search techniques, that
guided early research in Artificial Intelligence has given way in the
last decade to the search for equally fundamental and general methods
for structuring and representing knowledge. This is the result of the
realization that a duality exists between knowledge and search:
Knowledge of the task domain can abbreviate search and search through a
problem space can yield new knowledge. AIMDS is one of the recently
developed systems which permits experimentation with knowledge
representation in the course of building an AI program.
%A C. F. Schmidt
%T The Role of Object Knowledge in Human Planning
%R CBM-TM-87
%I Rutgers University Department of Computer Science
%K AI08 AI09
%D 1/82
%X AI research on planning provides an important reference point from
which the cognitive psychologist can build an understanding of human
planning. It is argued that the human planning context differs from
this reference point due to the incomplete knowledge that persons
typically possess about the situation within which the plan will be
executed. Various types of general functional knowledge about objects
are then defined. This knowledge serves as a source of default
assumptions for use in the planning process, and thus allows planning
to continue despite the absence of complete knowledge of the planning
situation. However, such assumption-based expectations must be
tested. From this point of view, planning must also include a process
for a kind of hypothesis testing and plan revision. The implications
of this claim are briefly discussed.
%A S. Amarel
%T Initial Thoughts on Characterization of Expert Systems
%R CBM-TM-88
%I Rutgers University Department of Computer Science
%D 1/82
%K AI01
%X Expertise in a given domain is commonly characterized by skillful,
high performance, problem solving activity in the domain. An expert
solves problems in a domain more rapidly, more accurately, and with
less conscious deliberation about his plan of attack than a novice
does. An excellent discussion of general characteristics of expert
behavior appears in a recent article in @u(Science) by Larkin et al.
[1].
.sp 1
Expert behavior is equivalent to high performance problem solving
behavior in a specific domain. It requires: knowledge of the domain,
knowledge of problem solving schemas and methods, knowledge/experience
about solution of specific problems in the domain with given methods,
knowledge about special properties and regularities in the problem
space, and highly effective ways of @u(using) all these bodies of
knowledge in approaching the solution of new problems in the domain.
Essentially, expert problem solving requires the
conceptualization/formulation of a given problem within a framework
wherein knowledge is embodied in definitions of states, moves,
constraints, evaluation functions, etc. in such a way that solutions
are attained with very little search. In other words, an expert
problem solver works within a highly 'appropriate' problem
representation: he describes situations and problem types within
'appropriate' conceptual frameworks, he specifies problem
decompositions that minimize subproblem interactions, he often uses
hierarchies of abstractions in his planning, he uses 'macromoves'
where a novice would painstakingly have to piece together elementary
moves, and he has rules for early recognition of unpromising as well
as of promising developments. An expert problem solver behaves as if
the great variety of knowledge sources needed for his
solution-construction activities are available to him in a
@u(compiled) procedural form.
.sp 1
Usually, expertise in a domain requires @u(problem solving experience)
in the domain. One can be scholar in a domain, and not an expert--if
he does not know how to effectively @u(apply) domain knowledge to a
variety of specific situations. Also, expertise implies a certain
amount of robustness in performance-- which means that it is not
sufficient to know how to handle a few 'textbook' cases; it is
important to be able to handle a broad range of variations.
%A S. Amarel
%T Review of Characteristics of Current Expert Systems
%R CBM-TM-89
%I Rutgers University Department of Computer Science
%D 3/81
%K AI01
%X This report does not cover all current work in the area of Expert
systems. It is intended to introduce a set of dimensions for
characterizing Expert systems and to describe some of the important
Expert systems that are now in existence (or are under active
development) in terms of these dimensions.
.sp 1
We have a dual purpose: (a) to illustrate via concrete examples the
dimensions that are being introduced, and (b) to show what is the
current state of the field from the perspective of this system of
dimensions.
.sp 1
We are using here ten main dimensions, and an optional eleventh called
@ux(Special Features), which provides added flexibility for the
presentation of relevant information about a system. Two of the main
dimensions, @ux(Performance) and @ux(Utility), are concerned with the
quality of the system's behavior and the impact of the system on the
domain of application and on AI. Another two dimensions are concerned
with the system's scope, its ability to handle situations that are
outside its area of major expertise, and its ability to improve: they
are called @u(Breadth, Intelligence, Robustness) and @u(Expertise
improvement ability). The remaining six dimensions are concerned with
the type of tasks performed by the system, its structure and its means
of interacting with users: they are called @u(Task type, Main Method,
Mode of Knowledge Representation, User Interface for main task,
Explanation facilities) and @u(Reasoning under Uncertainty).
.sp 1
The systems considered are DENDRAL, CASNET/GLAUCOMA, MACSYMA, MYCIN,
INTERNIST, PROSPECTOR and CRYSALIS.
.sp 1
This report covers material which was prepared for inclusion in the
Chapter 'What are Expert Systems' (co-authored with Ron Brachman, Carl
Engelman, Robert Engelmore, Edward Feigenbaum and David Wilkins) of a
book on Expert Systems which is currently under preparation; the book
is based on the Rand Workshop on Expert Systems which took place in
San Diego, California on August 25-28, 1980.
%A John Kastner
%A Sholom M. Weiss
%T A Precedence Scheme for Selection and Explanation of Therapies
%R CBM-TM-90
%I Rutgers University Department of Computer Science
%D 3/81
%K AA01 AI01
%X A general scheme to aid in the selection of therapies is described. A
topological sorting procedure within a general production rule
representation is introduced. The procedure is used to choose among
competing therapies on the basis of precedence rules. This approach
has a degree of naturalness that lends itself to automatic explanation
of the choices made. A system has been implemented using this
approach to develop an expert system for planning therapies for
patients diagnosed as having ocular herpes simplex. An abstracted
example of the system's output on an actual case is given.
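The topological sorting over precedence rules that this abstract describes can be sketched as follows. This is an illustration of the general technique (Kahn's algorithm), not code from the report; the therapy names and precedence pairs are invented.

```python
# Order competing therapies by pairwise precedence rules via topological sort.
# rules is a list of (preferred, over) pairs forming a precedence DAG.
from collections import defaultdict

def precedence_order(therapies, rules):
    indeg = {t: 0 for t in therapies}
    succ = defaultdict(list)
    for a, b in rules:
        succ[a].append(b)
        indeg[b] += 1
    order = []
    ready = [t for t in therapies if indeg[t] == 0]
    while ready:
        t = ready.pop()
        order.append(t)                 # t precedes everything it outranks
        for nxt in succ[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

print(precedence_order(['A', 'B', 'C'], [('A', 'B'), ('B', 'C')]))
# ['A', 'B', 'C']: A takes precedence over B, which takes precedence over C
```

Because each choice is justified by the explicit rule that fired, the resulting ordering lends itself to the kind of automatic explanation the abstract mentions.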
%A P. Politakis
%A S. M. Weiss
%T A System for Empirical Experimentation with Expert Knowledge
%R CBM-TM-91
%I Rutgers University, Department of Computer Science
%D 1/82
%K AI01 AA01 rheumatology
%X An approach to the acquisition of expert knowledge is presented based
on the comparison of dual sources of knowledge: expert-modeled rules
and cases with known conclusions. A system called SEEK has been
implemented to give to the expert interactive advice about rule
refinement. SEEK uses a simple frame model for expressing
expert-modeled rules. The advice takes the form of suggestions of
possible experiments in generalizing or specializing rules in the
model. This approach has proven particularly valuable in assisting
the expert in domains where two diagnoses are difficult to
distinguish. Examples are given from an expert consultation system
being developed for rheumatology.
%A G. Drastal
%A C. Kulikowski
%T Knowledge-Based Acquisition of Rules for Medical Diagnosis
%R CBM-TM-92
%I Rutgers University, Department of Computer Science
%D 10/81
%K AA01 AI01
%X Medical consultation systems in the EXPERT framework contain rules
written under the guidance of expert physicians. We present a
methodology and preliminary implementation of a system which learns
compiled rule chains from positive case examples of a diagnostic class
and negative examples of alternative diagnostic classes. Rule
acquisition is guided by the constraints of physiological process
models represented in the system. Evaluation of the system is
proceeding in the area of glaucoma diagnosis, and an example of an
experiment in this domain is included.
%A N. S. Sridharan
%T AIMDS: Applications and Performance Enhancements
%R CBM-TM-93
%I Rutgers University, Department of Computer Science
%D 1/82
%K AI01 AA24
%X AIMDS is a programming environment (language, editors, display drivers, file
system) in which several programs are being constructed for modeling
commonsense reasoning and legal argumentation. The main obstacle to realistic
applications in these and other areas is system performance when the knowledge
bases used are scaled up one or two orders of magnitude. The other obstacle
is user performance resulting from the complexity of constructing and debugging
large scale knowledge bases. This proposal argues that performance enhancement
of AIMDS as a system is needed, that the usual solutions of software tuning
have been exhausted, and that new hardware ideas fitted to the characteristics
of the task need to be experimented with. We adopt as important constraints
the requirements that existing programs should receive graded enhancement of
performance, maintaining continuity of application programs, and that user
programs should not reflect changing machine configurations or architectures.
Redesign and recoding of AIMDS should provide the necessary opacity to the user.
With these constraints in mind, we suggest interim solutions and long-term
solutions. The interim solutions include: converting to large-address-space
personal Lisp machines with bit-mapped graphics; fast coding of low-level
functionalities via microprogramming. The long-term solutions include the
building and testing of multiprocessors. The long-term solutions open up
a number of rather difficult software and hardware research problems whose
solutions depend upon having good facilities to experiment in the search for
answers.
%A B. Lantz
%T The AIMDS Interactive Command Parser
%R CBM-TM-94
%I Rutgers University, Department of Computer Science
%D 9/82
%K AA24 AI01 T03
%X Characters entered by the user are parsed immediately in order to provide
interactive services to the user while he is entering commands. Services
provided to the user include immediate verification of syntax, supplying the
user with information about the correct syntax and semantics of a command,
completion of long descriptive atom names, pretty printing the entered
command, and defining special functions for selected characters. The parser
accepts user defined grammars, thus providing a useful command parser for a
great variety of applications.
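The services described above (prefix completion of long atom names, immediate syntax feedback) can be illustrated with a minimal sketch. The command table and names here are invented for illustration; the real AIMDS parser worked from user-defined grammars.

```python
# Sketch of command-name completion in the spirit of the AIMDS parser.
# COMMANDS is a hypothetical command table, not AIMDS's actual vocabulary.

COMMANDS = ["DESCRIBE", "DEFINE", "DELETE", "DISPLAY", "EDIT"]

def complete(prefix, commands=COMMANDS):
    """Return the unique completion of `prefix`, or the list of candidates."""
    matches = [c for c in commands if c.startswith(prefix.upper())]
    if len(matches) == 1:
        return matches[0]   # unambiguous: complete the long atom name
    return matches          # ambiguous (or empty): show the candidates

print(complete("DES"))   # unambiguous -> "DESCRIBE"
print(complete("DE"))    # ambiguous -> three candidates
```

Because completion runs on every keystroke, the user learns immediately whether a prefix is ambiguous, which is the interactive verification the memo describes.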
%A B. Lantz
%T The AIMDS On-Line Documentation Facility
%R CBM-TM-95
%I Rutgers University, Department of Computer Science
%D 9/82
%K AA24 AI01 T03
%X The documentation system for the AIMDS language is designed to be suitable
for both beginning and expert users, and to be capable of serving the needs
of a changing system such as AIMDS. The documentation must be quickly and
easily updatable, and the updated information should be available to, and
easily used by, a wide variety of users.
%X This paper is a short description of the documentation system for the
AIMDS language. It includes a discussion of the considerations taken during
the design of the documentation system, a description of the implemented
system, and instructions for using the system for other documentation tasks.
%A J. Roach
%A N. S. Sridharan
%T Implementing AIMDS on a Multiprocessor Machine: Some Considerations
%R CBM-TM-96
%D 4/83
%I Rutgers University, Department of Computer Science
%K AA24 T03 H03 AI01
%X As a possible long term solution for performance enhancement of AIMDS,
a Lisp based multiprocessor system was proposed. Converting an
existing AI knowledge based system from the current uniprocessor
environment into a multiprocessor based regime is a largely unexplored
research question. This report discusses some of the issues raised by
such a proposal and attempts to evaluate some of the current models of
parallel processing with regard to implementing an AIMDS-based system.
An extensive bibliography with commentary is included.
%A George A. Drastal
%A Casimir A. Kulikowski
%T Knowledge-Based Acquisition of Rules for Medical Diagnosis
%R CBM-TM-97
%I Rutgers University, Department of Computer Science
%D 11/82
%K AI01 AA01 T03
%X Medical consultation systems in the EXPERT framework contain rules written
under the guidance of expert physicians. We present a methodology and
preliminary implementation of a system that learns compiled rule chains
from positive case examples of a diagnostic class and negative examples
of alternative diagnostic classes. Rule acquisition is guided by the
constraints of physiological process models represented in the system.
Evaluation of the system is proceeding in the area of glaucoma diagnosis,
and an example of an experiment in this domain is included.
%A S. Weiss
%A K. Kern
%A C. Kulikowski
%A M. Uschold
%T A Guide to the Use of the EXPERT Consultation System
%R CBM-TR-94
%I Rutgers University, Department of Computer Science
%D 1/82
%K T03 AI01
%X EXPERT is a system for designing and applying consultation models.
An EXPERT model consists of hypotheses (conclusions), findings
(observations), and rules for logically relating findings to
hypotheses. Three phases of model development are outlined for users
of the system. These include: the design of a decision-making model,
compilation of the model, and consultation using the model. The
facilities of the system are described, and examples of models and
consultation sessions are presented.
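The EXPERT model structure just outlined (findings, hypotheses, and rules relating them) can be sketched in miniature. The rule format, weights, and medical examples below are invented for illustration and are not EXPERT's actual syntax.

```python
# Toy sketch of an EXPERT-style consultation model: rules map sets of
# observed findings to hypotheses with confidence weights. All data here
# is hypothetical.

rules = [
    # (required findings, hypothesis, confidence weight)
    ({"fever", "cough"}, "flu", 0.7),
    ({"fever", "rash"}, "measles", 0.8),
]

def consult(findings, rules):
    """Return each hypothesis whose rule fires, with its best weight."""
    scores = {}
    for required, hypothesis, weight in rules:
        if required <= findings:  # every required finding was observed
            scores[hypothesis] = max(scores.get(hypothesis, 0.0), weight)
    return scores

print(consult({"fever", "cough"}, rules))   # only the flu rule fires
```

A consultation session then amounts to gathering findings from the user and re-running this evaluation as the findings set grows.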
%A R. Banerji
%A T. Mitchell
%T Description Languages and Learning Algorithms: A Paradigm for Comparison
%R CBM-TR-107
%D 1/82
%I Rutgers University, Department of Computer Science
%K AI04 Inductive inference, learning, generalization, description languages.
%X We propose and apply a framework for comparing various methods for
learning descriptions of classes of objects given a set of training
exemplars. Such systems may be usefully characterized in terms of
their descriptive languages, and the learning algorithms they employ.
The basis for our characterization and comparison is a
general-to-specific partial ordering over the description language,
which allows characterizing learning algorithms independent of the
description language with which they are associated. Two existing
learning systems are characterized within this framework, and
correspondences between them made clear.
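The general-to-specific partial ordering that anchors this comparison can be sketched for one simple description language: conjunctive attribute tuples with "?" as "any value". This language is an assumption for illustration; the systems the paper compares use richer languages.

```python
# Sketch of the general-to-specific partial ordering over hypotheses.
# A hypothesis is a tuple of attribute values, with "?" matching anything
# (a simplified, hypothetical description language).

def more_general_or_equal(h1, h2):
    """True if h1 covers every instance that h2 covers."""
    return all(a == "?" or a == b for a, b in zip(h1, h2))

g = ("?", "cold", "?")            # general: any sky, cold, any humidity
s = ("sunny", "cold", "high")     # specific
print(more_general_or_equal(g, s))   # g covers s
print(more_general_or_equal(s, g))   # but not vice versa
```

A learning algorithm can then be characterized by how it moves through this ordering (generalizing from positive exemplars, specializing away from negatives) independently of the particular language.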
------------------------------
End of AIList Digest
********************
∂12-Apr-86 0109 LAWS@SRI-AI.ARPA AIList Digest V4 #80
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Apr 86 01:08:45 PST
Date: Fri 11 Apr 1986 22:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #80
To: AIList@SRI-AI
AIList Digest Saturday, 12 Apr 1986 Volume 4 : Issue 80
Today's Topics:
Queries - Shape & LOOPS on a XEROX,
Application - Automatic Documentation,
Policy - Press Releases,
Journal - AI Expert,
Review - Spang Robinson Report, Volume 2 No. 4,
Philosophy - Lucas on AI & Computer Consciousness
----------------------------------------------------------------------
Date: Thu 10 Apr 86 09:52:08-PST
From: Ken Laws <LAWS@SRI-IU.ARPA>
Subject: Shape
Jerry Hobbs has asked me "What is a hook and what is a ring that we know
the ring can hang on the hook?" More specifically, what do we have to
know about hooks and rings in general (for default reasoning) and
about a particular hook-like object and ring-like object (dimensions,
radius of curvature, surface normals, clearances, tolerances, etc.)
in order to say whether a particular ring may be placed on a particular
hook and whether it is likely to stay in place once put there? Can we
reason about the functionality of shapes (in this and in other "mechanics"
problems) without resorting to full CAD/CAM descriptions, physics,
and simulation? How do people (e.g., children) reason about shape,
particularly in the intuitively obvious cases where tolerances are not
critical? Can anyone suggest a good lead?
-- Ken Laws
------------------------------
Date: Fri, 11 Apr 86 15:03 CST
From: Brick Verser <BAV%KSUVM.BITNET@WISCVM.WISC.EDU>
Subject: LOOPS running on a XEROX
Does anybody have any information pertaining to applications running
under Loops on Xerox hardware?
------------------------------
Date: Thu, 10 Apr 86 20:30:50 pst
From: saber!matt@SUN.COM (Matt Perez)
Subject: Re: towards better documentation
>
> I am interested in creating an expert system to serve as on-line
> documentation. The intent is to abrogate the above law and
> corollaries. Does anyone know of such a system or any effort(s) to
> produce one?
>
Contact Mark Miller of Computer*Thought, in Dallas,
Texas. They may have what you are looking for. Mark
is a pretty friendly guy and may also point you to the
right literature, etc.
The other place I can think of where there's something
like this in development is the work being done by
Prof. Wilensky at UC Berkeley: The Unix Consultant.
Matt Perez
------------------------------
Date: Thu 10 Apr 86 09:03:06-PST
From: GARVEY@SRI-AI.ARPA
Subject: Re: Policy - Press Releases
Eschew Policy!! Let Ken handle it; if you don't like what he lets
through, don't read it (I have ↑O on my terminal for just such
situations). Nobody wastes your time but you....
(And maybe me.)
Cheers,
Tom
[Unfortunately ↑O doesn't work for those reading the "unexploded"
digest format. Most mail programs haven't adapted to the digest
formats yet. -- KIL]
------------------------------
Date: Thu 10 Apr 86 11:17:53-PST
From: Ken Laws <LAWS@SRI-IU.ARPA>
Subject: AI Expert
The April issue of CACM has an ad (p. A-31) for AI Expert, a new journal
for AI programmers. "No hype, no star wars nonsense, no pipe dreams.
AI Expert will focus on practical AI programming methods and applications."
AI Expert, 2443 Fillmore Street, Suite 500, San Francisco, CA 94115;
$27 for 13 issues, $33 Canada, $39 worldwide, $57 airmail.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Report, Volume 2 No. 4 Summary
Summary of The Spang Robinson Report, Volume 2 No. 4
April 1986
Packaging Financial Expertise
Activities at specific companies and available products:
Applied Expert Systems (APEX) Plan Power:
Expert system for personal financial planning. (less than $50K including
the Xerox 1186 on which to run)
Arthur D. Little:
Personal Financial Planning System (in test), Equity Trader's
Assistant, Cash Trader's Assistant, insurance personnel selection
system, investment manager's work station, bond indenture advice
system. (in development) (will run on Symbolics 3670 and use databases
residing on IBM mainframe)
Cognitive Systems:
Courtier, a stock portfolio management system. One version is designed
for individual use at public terminals, with another to assist bank
portfolio managers. Runs on Apollo and DEC VAX machines.
Human Edge Software:
is supporting development of Financial Statement Analysis, an expert
business evaluation program, and a business plan expert for IBM PCs.
Palladian Software:
Financial Advisor, which is designed to help with corporate financial
decision-making and project evaluation. It is based upon
net present value.
Prophecy Development Corporation:
Profit tool, a brokerage and financial services shell. Runs on MS-DOS
computers and costs $1995.00.
Sterling Wentworth Corporation:
Planman, $4500.00
Database, $2000.00
For CPAs to produce "comprehensive financial planning reports." Runs on
MS-DOS; they have sold 400 units.
Syntelligence:
Underwriting Advisor System
Lending Advisor System.
Delivered on IBM 30 and 43 series with connections to PC/AT's.
Nikko Securities Co and Fujitsu:
(under development) a system for selecting stocks for investment.
Daiwa Securities:
Placed a system to provide investment counselling into operation last
month.
Yamaichi Securities:
developing AI based investment products in collaboration with Hitachi
and Nippon Univak.
Nomura Securities:
This is Japan's largest stock broker, and they are embarking on a broad-
based AI R&D program.
The total revenue for financial expert systems is five million
dollars. In 1985, financial applications were five percent of all
expert systems. In 1986, they are expected to be twenty percent. One in
five large financial institutions has applied expert systems.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Japan Watch:
Nomura Research Institute has developed a tool called DORSAI for assisting
in the production of expert systems in PROLOG.
Hitachi's Energy Research Institute has developed a system for
proving theorems at high speed. Hitachi has applied for a patent; the
system employs the "Connection Graph technique." Hitachi will use this
system for VLSI logic design, factory automation, real-time failure
diagnostics, chemical compound synthesis, and hardware applications.
60 percent of Japanese corporations are beginning to utilize AI or are
studying such a move. 28 percent of Japanese hardware and software
companies and heavy computer users have plans to enter AI, while
32 percent are currently involved in AI. 52 percent of the companies
with plans to enter AI expressed an interest in expert systems.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Micro Trends:
Discussion of Borland's Turbo Prolog, including reactions from Arity, Quintus
Prolog, Gold Hill, and CIGNA.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
News:
Amoco Corporation and IntelliCorp announced a joint venture to market AI
products for molecular biology. The first new product will be Strategene.
Boeing Computer Services and Carnegie Federal Systems Corporation will
be working together on a Rome Air Development Center contract to
develop a "new engineering environment."
Carnegie Federal Systems will support TRW in developing AI software for
tactical mission planning and resource allocation functions.
Quintus Computer Systems has over 270 users, with 170 of them using Quintus
on workstations. Revenues for 1985 were $2.1 million, with
an 18 percent profit margin.
Rapport, a DBMS, may soon be available for Symbolics machines. It will run
not only in single-user mode but also as a multi-user file server.
UC Berkeley is developing a RISC based LISP machine with multiple
processors (SPUR project). The Aquarius project involves using separate
processors for numeric, Prolog, and LISP processing, each optimized for
its specific role.
Frank Spitznogle who formerly was President and chief operating officer of Lisp
Machines is now President and chief operating officer of Knowledge Systems of
Houston, Texas. They will be applying AI to the oil and gas industry.
Their first product will be an exploration-potential evaluation
consultant.
Thomas Kehler is now chairman and CEO of Intellicorp.
------------------------------
Date: Thu, 10 Apr 86 11:33:07 est
From: John McLean <mclean@nrl-css.ARPA>
Subject: Lucas on AI
> From: Stanley Letovsky <letovsky@YALE.ARPA>
> At the conference on "AI and the Human Mind" held at Yale early in
> March 1986, a paper was presented by the British mathematician John
> Lucas. He claimed that AI could never succeed, that a machine was in
> principle incapable of doing all that a mind can do. His argument went
> like this. Any computing machine is essentially equivalent to a system
> of formal logic. The famous Godel incompleteness theorem shows that for
> any formal system powerful enough to be interesting, there are truths
> which cannot be proved in that system. Since a person can see and
> recognize these truths, the person can transcend the limitations of the
> formal system. Since this is true of any formal system at all, a person
> can always transcend a formal system, therefore a formal system can
> never be a model of a person.
Stanley Letovsky tries to refute this argument by showing that a
formal description that describes Lucas' beliefs may have unprovable
assertions that Lucas nevertheless believes.
> What is critical to realize, however, is that the Godel sentence
> for our model of Lucas is not a belief of Lucas' according to the model.
> The form of the Godel sentence
> G: not(provable(G))
> is syntactically distinct from the form of an assertion about Lucas'
> beliefs,
> believes(p,t)
> Nothing stops us from having
> believes(G,t)
> be provable in the system, despite the fact that G is not itself
> provable in the system.
This view of what G must look like is too restrictive. Note that the
Godel sentence for a system of first order arithmetic is an assertion
in number theory ("There is no integer such that..."). The fact that
the assertion numeralwise represents an assertion about provability
takes a great deal of showing. Similarly, the Godel sentence for our
model of Lucas may be an assertion about what Lucas believes. Since
Lucas is going to have beliefs about his beliefs and what is provable
in the system, it's not hard to believe that we can construct a
self-referential sentence G such that Lucas believes G at t but
believe(G,t) is not a theorem. This is particularly plausible since
there is a strong connection between what Lucas believes and what is
provable in the system. In particular, believes(believes(x,y),t) will
be provable iff Lucas believes that he believes x at y. But it is
plausible to assume that Lucas believes that he believes x at y iff
he believes x at y, i. e., iff believes(x,y) is provable. In other words,
the belief predicate is a provability predicate for the system restricted
to statements about beliefs.
To fill this out, note that we will probably have that if
believes("not(believes(x,y))",t) then not(believes(x,y)) since if Lucas
believes that he doesn't believe p, then he doesn't believe p. Now consider
G: believes("not(believes(G,t))",t).
If our system is consistent and such a G exists, G is not provable. If
G were provable, then not(G) would also be provable given our observation
since G is a statement about belief.
I believe that it is possible to construct such a sentence G, but this
does not imply that we can't dismiss Lucas. Lucas' argument is unconvincing
since there is no reason to believe that for any formal system, I can see and
recognize the Godel sentence for that system. Godel sentences for a particular
system are long and complicated. Hence, there is no reason to believe
that Lucas surpasses every formal system. In fact, it is clear that
there is at least one formal system that can recognize as true a sentence
that Lucas can't. Consider the system that contains one axiom:
"is a sentence that Lucas will never recognize as true when appended
to its own quotation" is a sentence that Lucas will never recognize
as true when appended to its own quotation.
The system recognizes the sentence as true since it's an axiom; Lucas
doesn't.
John McLean
mclean@nrl-css
...!decvax!nrl-css!mclean
------------------------------
Date: Thu, 10 Apr 86 11:19:13 EST
From: tes%bostonu.csnet@CSNET-RELAY.ARPA
Subject: Computer Consciousness
Informal talk on computer consciousness:
The whole family of questions like "Can computers feel emotion?"
and "Is it possible for a computer to be conscious?" define a loaded,
emotionally-charged subject. Some people (especially "artistic" folks, in
my experience) give an immediate emphatic "NO!" to these questions;
other people (many Science-is-The-Answer-to-Everything sorts)
devise computational models that parallel what we know about physical
brain structure and conclude "yes, of course"; and other folks remain
somewhere in the middle or profess "it's too complicated - I don't know."
My main beef with some physicalist or reductionist opinions is
the *assumption* that nothing except physical events exist in the universe,
and that a physical or functional description of a system describes its
essence entirely, and therefore if the human brain's neural interactions are
simulated by some machine then this machine is for all intents and purposes
equivalent to a human mind. To me, the phenomenological red that I perceive
when looking at an apple is OBVIOUSLY real, as is my consciousness. It is
ridiculous to conclude that consciousness and phenomenological experiences
do not exist simply because they cannot be easily described with mathematics
or the English language.
My main beef with immediate emphatic "NO"s is that it may reflect
an emotional fear of examining "taboo" territories, perhaps
because such inquiry threatens the Meaning of Life or the sovereignty
of the human mind. (There is no need to expound on how much suffering this
attitude has brought upon our ancestors throughout history.) To find out
that the human mind is "just this" or "just that" would significantly alter
certain worldviews.
The possibilities that are left to me are that
1) Consciousness "emerges" from the functionality of the
brain's neural interactions (if this is true, then it
would be entirely possible, in principle, for a computer
program with the same functionality to generate consciousness),
2) There is a dualism of the mental and the physical with
mysterious interactions between the two realms, and
3) Other possibilities which no one has thought of yet.
Now the first two may seem ridiculous, and I have no idea how to
prove or disprove them, but they remain *possibilities* for me because
they are not yet disproven. The physicalist proposal, on the other hand, is
proven wrong (or rather its absolute universality is proven wrong) by the
simplest introspective observation.
I am not campaigning for a ceasing of all brain research or
cognitive science; these sciences will continue to yield useful information.
But I hope that these researchers and their fans do not delude themselves
into thinking that the only aspect of the universe which exists is the
aspect that science can deal with.
Tom Schutz
CSNET: tes@bu-cs
ARPA: tes%bu-cs@csnet-relay
UUCP: ...harvard!bu-cs!tes
------------------------------
End of AIList Digest
********************
∂12-Apr-86 0312 LAWS@SRI-AI.ARPA AIList Digest V4 #81
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Apr 86 03:12:00 PST
Date: Fri 11 Apr 1986 22:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #81
To: AIList@SRI-AI
AIList Digest Saturday, 12 Apr 1986 Volume 4 : Issue 81
Today's Topics:
Bibliography - Technical Reports #4
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #4
%R CBM-TR-109
%T The Role of World Knowledge in Planning
%A N.S. Sridharan
%A C.F. Schmidt
%A J.L. Goodson
%D 1/82
%I Rutgers University, Department of Computer Science
%K AI09 common-sense
%X Common-sense planning demands a rich variety of world knowledge. We
have examined here the view that world knowledge can be structured to
form the interface between a hierarchy of action types and a hierarchy
of types of objects. World knowledge forming this interface includes
not only the traditional statements about preconditions and outcomes
of actions, but also the normal states of objects participating in the
actions and normative actions associated with the objects.
Common-sense plans are decomposed into goal-directed, preparation, and
the normative components. This has heuristic value and may serve to
simplify the planning algorithm. The algorithm invokes world
knowledge for goal customization, action specification, computation of
preconditions and outcomes, object selection, and for setting up
subgoals.
%R CBM-TR-110
%I Rutgers University, Department of Computer Science
%D 5/80
%T An Experimental Transformation of a Large Expert Knowledge Base
%A R.N. Goldberg
%A S.M. Weiss
%K internist AI01 AA01
%X An experiment is described in which a significant part of the
INTERNIST knowledge base for diagnosis in internal medicine is
translated into an EXPERT model. INTERNIST employs the largest and
broadest knowledge base of all the medical consultation systems which
have been developed in recent years. EXPERT is a general system for
designing consultation models. The translated model shows reasonable
competence in the final diagnostic classification of 431 test cases.
There are differences in the internal representation and reasoning
strategies of the two systems. However, when a knowledge base has
been encoded in a relatively uniform manner, this experiment
demonstrates the feasibility of transfer of knowledge between
large-scale expert systems.
%R CBM-TR-111
%I Rutgers University, Department of Computer Science
%D 6/80
%T A Process for Evaluating Tree-Consistency
%A J.L. Goodson
%X General knowledge about conceptual classes represented in a concept
hierarchy can provide a basis for various types of inferences about an
individual. However, the various sources of inference may not lead to
a consistent set of conclusions about the individual. This paper
provides a brief glimpse at how we represent beliefs about specific
individuals and conceptual knowledge, discusses some of the sources
of inference we have defined, and describes procedures and structures
that can be used to evaluate agreement among sources whose conclusions
can be viewed as advocating various values in a tree partition of
alternate values.
%R CBM-TR-112
%I Rutgers University, Department of Computer Science
%D 9/80
%T A Methodology for the Construction of Natural
Language Front Ends for Medical Consultation Systems
%A V. Ciesielski
%D 1/82
%K AI01 AI02 AA01
%X A methodology for constructing natural language front ends for
Associational Knowledge type (AK-type) medical consultation systems is
described. AK-type consultation systems use associational knowledge
of the form "if A and B and C then conclude D with a weight of w" to
perform diagnostic reasoning. It is shown that the knowledge needed
to "understand" patient description is not the associational knowledge
in the consultation system but rather knowledge of structural
relations and the way they are expressed in surface language. The two
main structural relations involved are: (1) ATTRIBUTE of OBJECT =
VALUE. Surface forms of this relation are variants and augmentations
of the template "The X of Y is V". (2) OBJECT have-component
COMPONENT. Surface forms of this relation are variants and
augmentations of the template "The X has/contains/includes Y". This
kind of knowledge can be represented in the
Attribute-Component/Structured Object (AC/SO) package which was
developed as part of this research. The AC/SO package is given a
definition of the @u(concept) "PATIENT" for a disease area and the
corresponding lexicon.
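The two structural relations and their surface templates described above can be sketched with simple pattern matching. The regular expressions below are simplified illustrations of the templates "The X of Y is V" and "The X has/contains/includes Y"; they are not the AC/SO package itself.

```python
# Sketch of recognizing the two structural relations via template variants.
# The patterns and the test sentences are hypothetical simplifications.
import re

ATTRIBUTE = re.compile(r"the (\w+) of (?:the )?(\w+) is (\w+)", re.I)
COMPONENT = re.compile(r"the (\w+) (?:has|contains|includes) (?:a )?(\w+)", re.I)

def parse(sentence):
    m = ATTRIBUTE.search(sentence)
    if m:  # ATTRIBUTE of OBJECT = VALUE
        return ("attribute", m.group(2), m.group(1), m.group(3))
    m = COMPONENT.search(sentence)
    if m:  # OBJECT have-component COMPONENT
        return ("component", m.group(1), m.group(2))
    return None

print(parse("The color of the eye is red"))
print(parse("The patient has a fever"))
```

The point of the thesis, reflected here, is that this structural knowledge is separate from the associational if-then knowledge used for diagnosis.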
%R DCS-TR-118
%I Rutgers University, Department of Computer Science
%D 9/82
%T Transformational Programming--Applications to Algorithms and
Systems
%A Robert Paige
%K AA08
%X Transformational programming is a nascent software development
methodology that promises to reduce programming labor, increase
program reliability, and improve program performance. Our research
centers around a prototype transformational programming system called
RAPTS (Rutgers Abstract Program Transformation System), developed
during the past several years at Laboratory for Computer Science
Research. Experiments in RAPTS with algorithm derivations are
expected to lead to pragmatic applications to algorithm design,
program development, and large system construction.
%R DCS-TR-115
%I Rutgers University, Department of Computer Science
%D 4/82
%T A Survey of Research in Strategy Acquisition
%A R. Keller
%D 7/82
%X This paper surveys literature in the area of strategy acquisition for
artificial and human problem solving systems. A unifying view of the
term "strategy" is suggested which places strategies along a continuum
from abstract to concrete. Major concerns of strategy acquisition
research are described, including (i) strategic component learning,
(ii) strategy applicability recognition, (iii) strategy customization
and (iv) strategy transformation. Various researchers' approaches to
these issues are reviewed and open problems are discussed.
%R DCS-TR-114
%I Rutgers University, Department of Computer Science
%D 3/82
%T The Control of Inferencing in Natural Language Understanding
%A Abe Lockman
%A David Klappholz
%K AI02
%X The understanding of a natural language text requires that a reader
(human or computer program) be able to resolve ambiguities at the
syntactic and lexical levels; it also requires that a reader be able
to recover that part of the meaning of a text which is over and above
the collection of meanings of its individual sentences taken in
isolation.
%X The satisfaction of this requirement involves complex inferencing from
a large database of world-knowledge. While human readers seem able to
perform this task easily, the designer of computer programs for
natural language understanding faces the serious difficulty of
algorithmically defining precisely the items of world-knowledge
required at any point in the processing, i.e., the problem of
@i[controlling inferencing]. This paper discusses the problems
involved in such control of inferencing; an approach to their solution
is presented, based on the notion of determining where each successive
sentence "fits" into the text as a whole.
%R DCS-TR-113
%I Rutgers University, Department of Computer Science
%D 4/82
%T Consistent-Labeling Problems and Their Algorithms: Part II
%A B. Nudel
%D 10/82
%K AI14 AI10 AI03 inter-variable compatibility
%X A new parameter is introduced to characterize a type of search
problem of broad relevance in Artificial Intelligence, Operations
Research and Symbolic Logic. This paramater, which we call
inter-variable @b[compatibility] is particularly important in that
complexity analyses incorporating it are able to capture the
dependence of problem complexity on search order used by an algorithm.
Thus compatibility-based theories can provide a theoretical basis for
the extraction of heuristics for choosing good search orderings - a
long-sought goal for such problems, since it can lead to significant
savings during search. We carry out expected complexity analyses for
the traditional Backtrack algorithm as well as for two more recent
algorithms that have been found empirically to be significant
improvements, Forward Checking and word-wise Forward Checking. We
extract compatibility-based ordering-heuristics from the theory for
Forward Checking. Preliminary experimental results are presented
showing the large savings that result from their use. Similar savings
can be expected for other algorithms when heuristics taking account of
inter-variable compatibilities are used. Our compatibility-based
theories also provide a more precise way of predicting which algorithm
is best for a given problem.
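Forward Checking, one of the algorithms analyzed above, can be sketched on a toy consistent-labeling problem. The instance below (coloring a triangle of mutually constrained variables) and all names are invented for illustration; the report's analysis covers the general case.

```python
# Sketch of backtracking with Forward Checking on a toy graph-coloring
# consistent-labeling problem. The instance is hypothetical.

neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
colors = ["red", "green", "blue"]

def solve(assignment, domains):
    if len(assignment) == len(neighbors):
        return dict(assignment)
    var = next(v for v in neighbors if v not in assignment)
    for value in domains[var]:
        # Forward check: remove `value` from domains of unassigned neighbors.
        pruned = {v: [x for x in domains[v]
                      if not (v in neighbors[var] and v not in assignment
                              and x == value)]
                  for v in domains}
        # If any future variable's domain is wiped out, prune this branch now.
        if all(pruned[v] for v in neighbors if v not in assignment):
            result = solve({**assignment, var: value}, pruned)
            if result:
                return result
    return None

solution = solve({}, {v: list(colors) for v in neighbors})
print(solution)
```

The early domain wipe-out test is what distinguishes Forward Checking from plain Backtrack, and it is also where search-order heuristics of the kind the report extracts can pay off.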
%A B. Nudel
%T Consistent-Labelling Problems and Their Algorithms: Part I
%R DCS-TR-112
%D (forthcoming)
%I Rutgers University, Department of Computer Science
%K AI14 AI10 AI03
%R DCS-TR-109
%I Rutgers University, Department of Computer Science
%D 12/81
%T Improved Constraint Satisfaction Algorithms
using Inter-Variable Compatibilities
%A B. Nudel
%K consistent labeling AI03
%X This report addresses the problem of improving algorithms for solving
@b(consistent-labeling) (also called @b(constraint-satisfaction))
problems. The concept of @b(compatibility) between variables in such
problems is introduced. How to obtain compatibilities analytically
and empirically is discussed, and various compatibility-based
heuristics (as well as some useful but less effective non
compatibility-based heuristics) are developed to improve a version of
the Waltz algorithm which was found best of a set of consistent-labeling
problem algorithms tested by Haralick [5]. Empirical results with
these heuristics are very encouraging, with over an order of magnitude
improvement in performance with respect to the basic algorithm on a
set of randomly generated consistent-labeling problems.
%R DCS-TR-107
%D 10/81
%I Rutgers University, Department of Computer Science
%T Note on Learning in MDS Based on Predicate Signatures
%A C.V. Srinivasan
%K AI06 AI04
%X This note illustrates a simple learning scheme in the context of two
examples. In the first example the system learns the distinguishing
features of the letters in the English alphabet, where each letter is
described in terms of a relational system of features. In the second
example the domain is a set of family relationships. In this case the
system identifies invariant properties like
"father.father=grandfather" that exist in the domain.
.sp 1
In both examples the system first creates an abstraction of the given
set of relations and uses the abstraction to identify invariant (or
distinguishing) features of the given set of relations. The
abstraction scheme is based on the concept of "predicate signatures"
that is described in the note.
.sp 1
The method is a general one. It can be used to identify large classes
of invariant (or distinguishing) features of sets of objects where
each object is described in terms of a set of relations that hold true
for the object.
%R DCS-TR-106
%I Rutgers University, Department of Computer Science
%D 10/81
%T Knowledge Representation and Problem Solving in MDS
%A C. V. Srinivasan
%K AI11
%X This work presents a new approach for using a first order theory to
generate procedures for solving goal satisfaction problems without
using general theorem proving. The core of the problem solving system
has three basic components: an inferencing mechanism based on
@u(residues), a control structure for "means-end" analysis that uses
@u(natural deduction), and a generalization scheme that is based on
the structure of statements in the domain theory itself.
.sp 1
The work represents a beginning in the development of knowledge based
systems that can generate their own problem solving programs, evolve
with experience and adapt to a changing domain theory.
%R DCS-TR-95
%D 10/80
%I Rutgers University, Department of Computer Science
%T A Mini-Max Problem
%A W.L. Steiger
%K AI03
%X Determine an algorithm, better than complete enumeration, for the
following problem: given a non-negative integer matrix, permute the
entries in each column independently so as to minimize the largest row
sum. This problem arose in determining an optimal schedule
for a factory work force.
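Complete enumeration is exponential in the number of rows per column. As
an illustrative baseline only (a hypothetical greedy heuristic, not the
exact algorithm the report seeks), each column's largest entries can be
handed to the rows whose running sums are currently smallest:

```python
def greedy_min_max_row_sum(matrix):
    """Greedy heuristic for the column-permutation problem: for each
    column, assign its largest entries to the rows with the smallest
    running sums.  An illustrative baseline, not an exact method."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    result = [[0] * n_cols for _ in range(n_rows)]
    row_sums = [0] * n_rows
    for c in range(n_cols):
        col_vals = sorted((row[c] for row in matrix), reverse=True)
        rows_by_load = sorted(range(n_rows), key=row_sums.__getitem__)
        for val, r in zip(col_vals, rows_by_load):
            result[r][c] = val       # permute this entry into row r
            row_sums[r] += val
    return result, max(row_sums)

# Permuting each column of [[3, 3], [1, 1]] reduces the largest
# row sum from 6 to 4.
_, worst = greedy_min_max_row_sum([[3, 3], [1, 1]])
```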
%R DCS-TR-92
%D 4/80
%T Average Case Behavior of the Alpha-Beta Tree Pruning Algorithm
%A George Shrier
%D 1/82
%I Rutgers University, Department of Computer Science
%K AI03
%R DCS-TM-15
%I Rutgers University, Department of Computer Science
%D 3/81
%T Some Experiments in Abstraction of Relational Characteristics
%A R.M. Keller
%A D.J. Nagel
%K AA09 AI01 AI04
%X Two experiments performed in knowledge-based inference are discussed in this
paper. The experiments are directed at abstracting
structural regularities and patterns inherent in a database of binary
relations. A novel graph representation to facilitate abstraction is
used in approaching some classical problem areas. This representation
is compact and powerful, and an efficient algorithm has been developed
to help control the exhaustive nature of certain types of inductive
problems.
.sp 1
One area of experimentation concerns the discovery of intensionally
definable relations in a family database. Another is the recognition
of alphabetic characters
using directional relations defined for points on a grid. Within a
test bed system, KBLS, a scheme for computing abstractions is briefly
summarized, and implications for future extensions are discussed in
light of experimental results.
%R DCS-TM-16
%I Rutgers University, Department of Computer Science
%D 3/83
%T Solving the Plane Geometry Problem by Learning
%A Liben Xu
%K AI01 AA13 AI14
%X The top-down technique for solving a geometry problem is described.
The top-down method uses "general rules," which are obtained by
learning. This report focuses on general heuristics for obtaining the
general rules for solving a geometry problem.
%R DCS-TR-89
%D 5/80
%I Rutgers University, Department of Computer Science
%T Parts I, II, III of KNOWLEDGE BASED LEARNING SYSTEMS DS + CVS = A
Proposal for Research CVS = An Intro. to the Meta-Theory & Logical
Foundations
%A D. Sandford
%K AI01 AI04
%X Current state of the art experience in designing domain specific,
intelligent, automated problem solving systems argues convincingly
that: Firstly, large amounts of what is known as domain dependent or
domain specific knowledge are crucial to achieving acceptable
efficiency in realistic problem solving situations; and secondly that
the task of implementing such systems "from scratch" is such a
formidable one that it has impeded experimental research into the
nature and role of domain specific knowledge in problem solving.
.sp 1
This project is directed towards attaining an understanding of the
processes and types of organizations required for an automated system
to be able to learn for itself the relevant domain dependent
knowledge from its experience with the domain. The research is based
on a meta-theory of knowledge based learning systems, systems that can
discover domain knowledge and use it to solve problems in a domain.
The research project will employ both experimentation with implemented
systems and theoretical analysis of systems. The goals are to shed
light on both the detailed mechanisms by which domain dependent
knowledge increases search efficiency, and to understand the type of
innate biases that an automated system needs, to be able to analyze a
domain and discover the appropriate domain knowledge. The research is
based on a meta-theory of systems that are both knowledge based
systems and learning systems.
.sp 1
The research focuses on two kinds of systems: Systems that can build
and use empirical theories of domains, and systems that use Axiomatic
theories and theorem proving. The nature of domain knowledge and ways
of using it in both these systems are investigated.
------------------------------
End of AIList Digest
********************
∂12-Apr-86 0536 LAWS@SRI-AI.ARPA AIList Digest V4 #82
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Apr 86 05:36:11 PST
Date: Fri 11 Apr 1986 22:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #82
To: AIList@SRI-AI
AIList Digest Saturday, 12 Apr 1986 Volume 4 : Issue 82
Today's Topics:
Bibliography - Technical Reports #5
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #5
%R DCS-TR-90
%I Rutgers University, Department of Computer Science
%D 5/80
%T Knowledge-based learning, an Example
%A C. V. Srinivasan
%D 1/82
%K AI04
%X How may a machine "learn" from examples of situations that are
presented to it? What may constitute the "knowledge" of a set of such
situations? How should the examples be presented to the machine? Are
there general principles which a machine can use to acquire the
knowledge automatically by examining the examples presented to it, and
to use the knowledge so obtained to solve problems in a domain? These
are the general concerns of my research.
%R CBM-TR-138
%D 5/84
%T Hardware Fault Diagnosis & Expert Systems
%A Allen Ginsberg
%D 5/84
%K AI01 AA04 AA21
%I Rutgers University, Department of Computer Science
%X Recent research in Expert Systems has begun to deal with problem
domains that do not fit into the "classification problem" mode, the
latter being the sort of problems that have been most amenable to
Expert System technology. Hardware Fault Diagnosis (HFD) is an example
of such a problem domain. Problems in HFD typically involve a
"localization" problem as a component, i.e., @i[where] is the location
of the fault? This paper takes a critical look at some current work
in HFD, viz. that of Genesereth and Davis, with a view towards determining the
differences between classification and localization problems that are
likely to necessitate new approaches to knowledge representation and
acquisition if Expert Systems are to be successful in such a domain.
%R CBM-TR-139
%I Rutgers University, Department of Computer Science
%D 5/84
%T Localization Problems and Expert Systems
%A Allen Ginsberg
%K AI01
%X Expert systems approaches to problem solving have recently had
enormous success and influence in the field of AI. The most
successful of these systems tend to deal with a certain kind of
problem type which has been called "classification problems." Very
recently, we have seen the emergence of a number of expert systems
that deal with a different category of problem, a category that I will
call "localization problems." The purpose of this paper is to
characterize this class of problems, contrast it with the
classification problem category, give some examples of localization
problems, and suggest some new avenues for expert system research
dealing with problems in this category.
%R CBM-TR-140
%I Rutgers University, Department of Computer Science
%D 5/84
%T Investigations in the Mathematical Theory of Problem Space
Representations and Problem Solving Methods
%A Allen Ginsberg
%K AI09
%X In this paper I address the issue of how a system that has the ability
to do problem solving and planning - in the sense of being in
possession of generalized schemas or templates for carrying out these
activities - can know whether a particular type of planning or, if you
will, problem solving strategy, is a "good" one to employ in solving
problems in a particular domain. It seems to me that, in general, in
order to make such judgements in a reasonable fashion a problem solver
must either be in possession of some general theoretical facts
concerning the nature and structure of problem types, i.e., a theory
of problem types, or at the very least, have been programmed by
someone having such a theory. This paper is a step in the direction
of constructing such a theory.
.sp 1
The structure of the paper is as follows. First I discuss the nature
of problem solving and planning in general, and give a preliminary
description of a particular planning template. Next I describe and
illustrate a mathematical framework within which one can formulate
problem representations. Finally I deal with the question of what
facts about the structure of a problem representation are relevant to
the determination of whether or not the aforementioned planning
template is applicable to the problem at hand.
%R CBM-TR-141
%D 5/84
%I Rutgers University, Department of Computer Science
%T Representation & Problem Solving: Theoretical Foundations
%A Allen Ginsberg
%X The word "representation" and its cognates are probably the most
popular words in AI today. If anything qualifies as "the fundamental
assumption of AI," it is probably the view that intelligence is
essentially the ability to construct and manipulate symbolic
@i[representations] of some "reality" in order to achieve desired
ends. Furthermore, probably every researcher in AI would agree that
the key to AI's success lies with the general area known as "knowledge
representation." This point of view has been buttressed not only by
the failures of early "general purpose" AI systems, but much more so
by the recent success of expert systems. The philosophy behind the
expert systems approach is one that has, rightfully, come to infect the
entire field of AI: intelligence essentially depends upon the ability
to @i[represent] and store a potentially vast amount of knowledge in ways
that enable it to be easily accessed and utilized in the performance
of various tasks. The key concept here is @i[representation].
%X Given the fact that AI has come to embrace these doctrines, and the
likelihood that there is a good deal of truth in them, it is incumbent
upon us to examine their foundations, for better or for worse. It
would be nice to have answers to questions such as: What is a
representation? When are two or more representations representations
of the same or different real world situations? What are the ways in
which representations can be "manipulated"? It would be even nicer if
the answers to such questions were provided by a general formal theory
of representation. In this paper I attempt to lay some of the
groundwork for such a theory, with emphasis on the role of
representation in problem solving.
%R CBM-TR-142
%D 5/84
%T A Model for Automated Theory Formation for Problem Solving Systems
%A A. Ginsberg
%X The goal of this paper is to contribute towards the understanding and
eventual mechanization of the processes whereby an @i[intelligent]
problem solver @i[learns] to improve its performance in a given task
domain by formulating and using @i[theories] regarding that domain. In
order to achieve this goal it is necessary for us, as designers of
such a system, to have a fairly good idea of a) the various sorts of
knowledge that are required for a problem solver to acquire new
knowledge that will hopefully improve performance, and of b) how each
of these types or sources of knowledge comes into play in this
process. In this paper I give an abstract description of the domains
of knowledge required for theory formation, and also illustrate the
ideas with a concrete example. The type of system contemplated in
this paper incorporates ways of structuring background knowledge that
are natural and will, I believe, prove to be useful in designing
self-improving AI programs.
%R CBM-TR-143
%D 5/84
%I Rutgers University, Department of Computer Science
%T A Knowledge Representation Framework for Expert Control of
Interactive Software Systems
%A C. Apte
%A S. Weiss
%K AI01 AA08
%X Expert problem solving strategies in many domains make use of
detailed quantitative or mathematical techniques coupled with
experiential knowledge about how these techniques can be used to solve
problems. In many such domains, these techniques are available as part
of complex software packages. In attempting to build expert systems
in these domains, we wish to make use of these existing packages, and
are therefore faced with an important problem: how to integrate the
existing software, and knowledge about its use, into a practical
expert system. We define a framework of a @i[hybrid model] for
representing problem solving knowledge in such domains. A hybrid
model consists of a @i[surface] and a @i[deep] model. The surface
model is the production rule-based expert subsystem that is driven by
domain specific control and interpretive knowledge. The deep model is
the existing software, reorganized as necessary for its interpretation
by the surface model. We present an outline of a specialized
form-based system for acquisition and representation of expert
knowledge required for this hybrid modeling.
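The surface/deep split described above can be sketched as a control
loop: surface-model production rules decide which deep-model routine
(the pre-existing software) to run next. This is a guess at the shape of
the framework, not Apte and Weiss's code; the rule and method
representations are assumptions.

```python
def run_hybrid(rules, deep_methods, state):
    """Hybrid-model sketch: the surface model is a list of
    condition/action production rules; the deep model is a table of
    callable methods (standing in for the existing software package).
    Data layout is hypothetical, for illustration only."""
    while True:
        # Surface model: first rule whose condition holds fires.
        fired = next((r for r in rules if r["when"](state)), None)
        if fired is None:
            return state  # no rule applies: consultation finished
        # Deep model: the invoked method does the real computation.
        state = deep_methods[fired["invoke"]](state)

# Toy example: the surface rule keeps invoking a deep "refine" method
# until the model error is acceptable.
rules = [{"when": lambda s: s["error"] > 0.1, "invoke": "refine"}]
deep = {"refine": lambda s: {"error": s["error"] / 2}}
final = run_hybrid(rules, deep, {"error": 1.0})
```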
%R CBM-TR-144 (THESIS)
%I Rutgers University, Department of Computer Science
%D 9/84
%T A Framework for Expert Control of Interactive Software Systems
%A C.V. Apte
%K AI01 AA08
%X Expert problem-solving strategies in many domains require the use of
detailed mathematical techniques coupled with experiential knowledge
about how and when to use the appropriate techniques. In many of
these domains, such techniques are made available to experts in large
software packages. In attempting to build expert systems for these
domains, we wish to make use of these existing packages, and are
therefore faced with an important problem: how to integrate the
existing software, and knowledge about its use, into a practical
expert system. The expert knowledge is used, in dynamic selection of
appropriate programs and parameters, to reach a successful goal in the
problem-solving. This kind of expert problem-solving is achieved
through two interacting bodies of knowledge: problem domain knowledge,
and knowledge about the programs that comprise the software package.
%X This thesis describes the framework of a @i[hybrid expert system] for
representing problem-solving knowledge in these domains. This hybrid
system may be characterized as consisting of a @i[surface] model and a
@i[deep] model. The surface model is a production-rule based expert
subsystem that consists of heuristics used by an expert. The deep
model is a collection of methods, each parameterized by a set of
controlling and observed parameters. The methods and their results are
reasoned about using their parameter sets. The existing software is
reorganized as necessary to map it into the deep model structure of a
hybrid system. This framework has evolved out of an effort to build an
expert system for performing well-log analysis (ELAS - @i[Expert Log
Analysis System]). A generalized expert-system building methodology
based upon principles drawn from ELAS is introduced. The use of
@i[method-abstractions] in assembling a hybrid system is discussed.
The notion of @i[worksheet-reasoning] is defined, and discussed.
%R CBM-TR-145 (THESIS)
%D 10/84
%T Shift of Bias for Inductive Concept Learning
%A Paul E. Utgoff
%K AI04
%X We identify and examine the fundamental role that bias plays in
inductive concept learning. Bias is the set of all influences,
procedural or declarative, that causes a concept learner to prefer one
hypothesis to another. Much of the success of concept learning
programs to date results from the program's author having provided the
learning program with appropriate bias. To date there has been no
good mechanical method for shifting from one bias to another that is
better. Instead, the author of a learning program has himself had to
search for a better bias. The program author manually generates a
bias, from scratch or by revising a previous bias, and then tests it
in his program. If the author is not satisfied with the induced
concepts, then he repeats the manual-generate and program-test cycle.
If the author is satisfied, then he deems his program successful. Too
often, he does not recognize his own role in the learning process.
.sp 1
Our thesis is that search for appropriate bias is itself a major part
of the learning task, and that we can create mechanical procedures for
conducting a well-directed search for an appropriate bias. We would
like to understand better how a program author goes about his
search for appropriate bias. What insights does he have? What does
he learn when he observes that a particular bias produces poor
performance? What domain knowledge does he apply?
.sp 1
We explore the problem of mechanizing the search for appropriate
bias. To that end, we develop a framework for a procedure that shifts
bias. We then build two instantiations of the procedure in a program
called STABB, which we then incorporate in the LEX learning program.
One, called "constraint back propagation," uses analytic deduction. We
report experiments with the implementations that both demonstrate the
usefulness of the framework, and uncover important issues for this
kind of learning.
%R CBM-TR-146
%I Rutgers University, Department of Computer Science
%D 5/85
%T A Framework for Representation of Expertise in Experimental Design
for Enzyme Kinetics
%A Von-Wun Soo
%A Casimir A. Kulikowski
%A David Garfinkel
%K AA10 AI01
%X In this paper, we present part of our current research on expert
systems in enzyme kinetics. Because of the richness and diversity of
the problem solving knowledge required in this domain, we have found
it to be an excellent vehicle for studying issues of knowledge
representation and expert reasoning in AI. Biochemical experimental
design, the focus of this paper, is a major problem solving activity
of the enzyme kineticist that has not been explored by expert systems
researchers. The kineticist's problem solving expertise can usually be
described
as the application of a sequence of methods. In designing a
complicated biochemical experiment, the experimenter has several
methods to choose from at any stage. These methods are represented as
computer programs which can be organized into a hierarchy. This paper
proposes a structure for these problem solving methods and an expert
consultation system for experimental design.
.sp 1
We have found that problem solving expertise in experimental design
can be divided into three phases. In the first phase, we deal with
problems of selecting the experimental methods that satisfy an
experimenter's goal, given certain postulated models. The
experimental conditions and optimal design points can be derived if
the model is given and the goal and the assumptions of the optimal
design criterion are satisfied. In the second phase, we deal with the
problems of preparing an enzyme assay. The interactions among
experimental conditions and other influencing factors must be
carefully controlled so that the correct concentration of a given
species can be calculated. In the third phase, we face the problem of
analyzing and interpreting the experimental data and recommending
further refinement of the experiment.
------------------------------
End of AIList Digest
********************
∂13-Apr-86 0153 LAWS@SRI-AI.ARPA AIList Digest V4 #83
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Apr 86 01:53:08 PST
Date: Sat 12 Apr 1986 23:39-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #83
To: AIList@SRI-AI
AIList Digest Sunday, 13 Apr 1986 Volume 4 : Issue 83
Today's Topics:
Bibliography - Recent Articles #1
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #1
%J ComputerWorld
%D FEB 10, 1986
%V 20
%N 6
%P 44+
%K AI01 AT10 AA15 University of Calgary GA04 Synerlogic
%X "Synerlogic, Inc. has joined with the University of Calgary (Alberta)
in a project to develop an expert system to assist in converting subject matter
knowledge into computer-based training courseware."
%T Martinizing
%J Datamation
%D FEB 15, 1986
%P 19
%N 4
%V 32
%K AT14 AT13 AT12 AA08 James Martin Knowledge Ware
%X "I wish to correct a serious error in your article
on 'Building a Better Program'
(Oct. 1, p. 42).
The TI tools do not, as you state, have 'levels of integration far in
advance of DDI.'
Exactly the opposite is true. A brief session with the DDI tools and the
TI tools would reveal immediately that the TI tools cannot compare in
richness and functionality with the DDI tools. The DDI tools use
artificial intelligence techniques and are a generation beyond TI. The
DDI tools already have extensive use in MIS organizations. The direct
implementation of my own methodology is in DDI.
On Dec. 1 DDI changed its name to KnowledgeWare to reflect the AI
knowledge base of its tools. This is, as far as I am aware, the first
practical application of AI techniques for automating the planning,
analysis and design of systems.
James Martin
Tuppeny House
Tuckerstown, Bermuda
(begin section by editors)
James Martin appears to be confused about the 'tools' to which user
sources are referring in the story. The sources compared the relative
merits of (to date) unannounced tools under development at TI and DDI,
not the software productivity tools currently used in MIS organizations.
Though Martin has consulted with TI on the use of his methodology and is
entitled to his opinion, company sources say that he isn't in a position
to effectively comment on its upcoming tools or to compare them with
those of DDI. We'll have to wait for the marketplace to do that. -- ED"
%T World Watch
%J Datamation
%P 60
%D FEB 15, 1986
%N 4
%V 32
%K GA01 India Institute for New Generation Technology
%X "India hopes to curry favor with Japan's Institute for New Generation
Computer Technology. So, a few of India's premier fifth-generation researchers
may
soon be packing their bags for a trip to Japan."
%A John R. Dixon
%T Will Mechanical Engineers Survive Artificial Intelligence?
%J Mechanical Engineering
%D FEB 1986
%P 8+
%V 108
%N 2
%K AA05 AT14
%X Raj Reddy stated, 'In the twenty-first century, much of what mechanical
engineers now do will be done by machines.' The rest of the editorial
discusses whether this will become a reality.
%A Howard K. Dicken
%T Turning Micros Into Mavens
%J High Technology
%D MAR 1986
%P 71
%V 6
%N 3
%K AI01 H01 Expertelligence Macintosh AT16 Migent Software Intellicorp
Enrich Transform Logic AA15
%X Expertelligence, which sells expert-system shells, Lisp and Prolog
for the Macintosh, had fiscal 1985 revenue of $834,000.
Losses were $411,000. Intellicorp had 1985 sales of $8.7 million
with a loss of $724,000. Migent Software
has purchased an expert system for interfaces to user software from
Transform Logic. The software is called Enrich and sells for $595.00.
%A Stanley Aronoff
%A Glyn F. Jones
%T From Data to Image to Action
%J IEEE Spectrum
%D DEC 1985
%V 22
%N 12
%P 45-52
%K AI06 Multi-Spectral Scanner Landsat AA03 forestry crop yields Cropcast AI01
%X Discusses various aspects of the hardware for image processing.
Crop forecasting now achieves 97% accuracy, with 95% accuracy three months
prior to harvest. They state that expert systems will be combined with image
processing to create a new generation of information systems.
%T Adept, Kawasaki in Japan Accord
%J Electronic News
%D FEB 10, 1986
%P 45
%V 32
%N 1588
%K AI07 GA01 GA02 AT16 AI06
%X Adept has licensed Kawasaki Heavy Industries to manufacture and sell
its robotics line in Japan. Adept estimated it will receive one million
dollars in the next three years. This also includes the AdeptVision systems.
Adept has shipped more than $500,000 worth of robotics equipment to
Kawasaki since last September.
%T Notes: Software and Services
%J ComputerWorld
%D JAN 27, 1986
%P 33
%V 20
%N 4
%K LogicWare MProlog Revelations Control Data Corp Cyber H04 T02 AT16
%X Logicware and Revelations Research have joined efforts to put a
version of MProlog on Control Data Corp.'s Cyber 205.
%A Eric Bender
%T DBMS tools: Not Natural Yet
%D JAN 20, 1986
%P 19+
%V 20
%N 3
%K Ashton-Tate H01 AA09 AI02 Clout Lotus Development Human Access Language
Brodie Associates
%X Interviews with various people about natural language and data base
management systems, particularly for micros. Of note, David Hull
of Ashton-Tate said that although they are evaluating natural language
systems, they have not seen any that deliver the benefits that they
think their clients want.
%T New Products
%D JAN 20, 1986
%P 85
%V 20
%N 3
%K Experience in Software Idea Generator H01 AI01
%X Experience in Software, Inc. announced the Idea Generator a tool
to help the user solve problems. It costs $195.00 and runs on the IBM PC.
%A Steven Burke
%T Arity/Prolog Tools Assist in Creating AI-Based Software
%J InfoWorld
%D JAN 20, 1986
%P 14
%V 8
%N 3
%K Unitek Technology Arity Dr. Vance Giboney Arthur Young and Company
Peter Gabel Darryl Rubin Kim Frazier AI01 AA06 AA08 GA04 H01 T01 T02
%X Unitek Technologies is using Arity's tools to enhance accounting
software. Knowledgeware is using it to automate writing computer
code which is being developed in conjunction with Arthur Young
and Company.
%A Alice LaPlante
%T Talking with your Computer
%J InfoWorld
%V 8
%N 2
%D JAN 13, 1986
%P 25-26
%K Digital Equipment Corporation DECtalk AI05 AI01
%X General discussion of applications of voice input and output
systems. DEC says that 90 percent of its customers use DECtalk for
telephone applications; it expects that its next generation system
will have voice recognition and voice synthesis as part of an expert system.
%A Keith Thompson
%T Q&A is Fun, Useful Business System
%J InfoWorld
%V 8
%N 2
%D JAN 13, 1986
%K AI02 AA15 AA09 H01 AT17
%X Review of Q&A, which is a database and word processor claiming
to be based on artificial intelligence. It has a natural language interface
to the database. It uses AI to find the address in a letter so it can
automatically print out the envelope. It received a rating of 9.0
out of 10 with very good in performance and excellent in documentation,
ease of learning, ease of use, error handling, support and value.
%A Barbara Robertson
%T The AI Typist: Writing Aid is Fast and Easy, But Bug Plagued
%J InfoWorld
%V 8
%N 2
%D JAN 13, 1986
%P 35
%K AT17 AT03 H01 AA15
%X AI Typist is a word processing system for IBM PC's that "uses
artificial intelligence to provide a real-time typist." The program
scans a dictionary looking for character-by-character matches while typing.
It highlights characters at the point it finds a mismatch. For example,
if a user types "appearing," highlighting appears as one types the second
"a," since "ape" matches a word in the dictionary. It doesn't correct the
spelling, nor does it allow the user to look at the dictionary. It also had
bugs in the basic word processing capability. It received a 2.4 out
of 10 with unacceptable ratings under performance and value, poor
in documentation, satisfactory in error handling and very good under
ease of learning, ease of use and support.
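The character-by-character dictionary scan the review describes can be
approximated with a sorted word list and binary search. This is a
reconstruction of the idea only, not the product's code; the function
name and exact behavior are assumptions.

```python
import bisect

def matched_prefix_length(dictionary, typed):
    """Length of the longest leading portion of `typed` that is still a
    prefix of some dictionary word; a typing aid like the one reviewed
    would highlight the characters beyond this point.  Sketch only."""
    words = sorted(dictionary)
    for i in range(1, len(typed) + 1):
        prefix = typed[:i]
        # The first word >= prefix is the only candidate that can
        # start with it, since the list is sorted.
        j = bisect.bisect_left(words, prefix)
        if j == len(words) or not words[j].startswith(prefix):
            return i - 1  # mismatch begins at character i
    return len(typed)

# "app" still matches "apple", but "appe" matches nothing here:
n = matched_prefix_length(["ape", "apple", "april"], "appearing")
```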
%T TI Introduces PC Scheme Lisp Device
%J InfoWorld
%V 8
%N 2
%D JAN 13, 1986
%P 51
%K T01 H01 AT02
%X TI has introduced PC Scheme for $95.00, which runs on IBM PCs and
Texas Instruments PCs. It has a compiler.
%T Advertisement
%J Unix/World
%V 11
%N 11
%D DEC 1985
%P 56
%K Silogic Knowledge WorkBench AT01 AI01 AI02 AA09 T03
%T For the Record
%J Unix/World
%V 11
%N 11
%D DEC 1985
%P 10
%K Flexible Computer NASA Johnson Space Center Unix
%X "NASA's Johnson Space Center, Dallas, has purchased a massively
parallel Flex/32 Computer from Flexible Computer Corp. for its
Artificial Intelligence section, which is responsible for evaluating
fifth generation computing systems for AI development and applications."
%T Review of Introduction to Robotics by Arthur J. Critchlow
%J BYTE
%V 11
%N 1
%D JAN 1986
%P 57-60
%K AI07 AT07
%T Software Notes
%J ComputerWorld
%D JAN 13, 1986
%V 20
%N 2
%P 25+
%K Inference Corp NASA Symbolics AT16 AI01 AA08
%X "Inference Corp. and the National Aeronautics and Space Administration have
agreed to develop jointly a software development workstation using Inference's
expert system technology." It will use the Symbolics 3600 and assist both in
reuse of code and generation of new code.
%A Edward Warner
%T Gold Hill, Intel Developing LISP for Multimicroprocessors
%J ComputerWorld
%D JAN 13, 1986
%V 20
%N 2
%P 26
%K T01 H03
%X Intel announced an agreement with Gold Hill to develop and market jointly
a Common Lisp for Intel's HyperCube iPSC computer.
%T Executive Report/Expert Systems
%J ComputerWorld
%D JAN 13, 1986
%V 20
%N 2
%P 43-62
%K T01 T02 T03 H02 AI01 AA04 AA08 AA21 AT08 O02 Sterling Wentworth PlanMan
Fountain Hills Software semiconductor Travellers Insurance Teknowledge
%X Half of all Fortune 500 companies actively pursue expert system
development. Fountain Hills Software sells Fair Cost, a cost
modelling program for semiconductor components. Sterling Wentworth
Corp. offers Planman, a financial planning expert system targeted at
tax advisors. Ion Technology Services markets Diagnostic
Troubleshooter, an expert system for the maintenance of specialized
semiconductor equipment. The expert system market is worth $75 million,
with government and research efforts accounting for as much as two thirds
of this. Fortune 500 companies' efforts make up most of the rest of
the market. Custom Development Life Span for Expert Systems, compiled
by Arthur D. Little in developing 30 "large-scale strategic
knowledge-based systems, typically for Fortune 500 companies:"
.TS
tab(~);
c c c c
l l l l.
Phase~Duration~Level of Effort~Cost
Proof of Concept~4 to 6 months~1 to 2 man years~$150,000 to $400,000
Demonstration~4 to 6 months~1 to 2 man years~$150,000 to $400,000
Prototype~12 to 18 months~8 to 12 man years~$1.2 to 2.4 million
Total Resource Commitment~20 to 30 months~10 to 16 man years~$1.5 to 3.2 million
.TE
Travelers Insurance developed a successful expert system to help diagnose
failures on IBM 8100 controllers. It had 70 rules and was built with
Teknowledge's M.1 system.
%T SuperShorts
%J ComputerWorld
%D JAN 13, 1986
%V 20
%N 2
%P 108
%K AI01 AA20 T03 H02 AT16
%X Lisp Machine, Inc. and the Process Management Division of Honeywell announced
that they will work to bring artificial intelligence to the process control
market. As part of that effort, they will work together to interface
PICON with the Honeywell control system TDC3000.
[In Applied Artificial Intelligence, it was reported that this interfacing
was already accomplished and is running at one site. LEFF]
%T GM Delco to Fund Cognex Vision Systems
%J Electronic News
%D JAN 6, 1986
%V 32
%N 1583
%P 58
%K AI06 AA04 AT16
%X Delco Electronics has agreed to give COGNEX $500,000 to develop
an engineering prototype of a machine vision system for automatically
inspecting the placement of surface-mounted devices on printed
circuit boards. Cognex's Checkpoint 1100 system was reported
to have achieved measurements accurate to within 2 mils within 99.8
percent of the test cases.
%A James Fallon
%T Racal Electronics, Norsk Data to End 2-Year AI Joint Venture
%J Electronic News
%D JAN 13, 1986
%V 32
%N 1584
%P 29
%K AT16 GA03
%X A $1.44 million joint investment to develop an artificial intelligence
system was terminated because the project was delayed and the market
for that particular product no longer existed.
%T Plessey to Develop Speech-Input CPU
%J Electronic News
%V 31
%N 1582
%D DEC 30, 1985
%P 8
%K Alvey Edinburgh University Imperial College University of Loughborough
AI05 GA03 H03
%X The Alvey Directorate has selected Plessey as a prime
contractor in a 19.88 million dollar project to develop
a system that receives human speech and displays the words on the screen.
No vocabulary size or response time was given for the proposed system.
It will use parallel processing.
%A Peggy Watt
%T Scanner Puts Text On-Line
%J ComputerWorld
%D Dec 30, 1985/JAN 6, 1986
%V 19
%N 52
%P 1+
%K Dest Corporation AI06 AT02
%X Dest Corporation announced PC Scan, a system that recognizes
the typefaces in most business documents. It costs $3,000.
The optical reader equipment supports 300-dpi printers.
The optical reader alone is $1,995. The software inserts
appropriate formatting codes for such things as tabs, paragraphs,
and page breaks.
%A J. Mostow
%T Foreword: What is AI? And What Does It Have to Do with Software Engineering?
%J IEEE Transactions on Software Engineering
%V SE-11
%N 11
%D NOV 1985
%P 1253-1256
%K AA08
%A R. Balzer
%T A Fifteen Year Perspective on Automatic Programming
%J IEEE Transactions on Software Engineering
%V SE-11
%N 11
%D NOV 1985
%P 1257-1267
%K AI08 SAFE AA08 Information Sciences Institute GIST RSL TRW
symbolic evaluation software maintenance POPART PADDLE
%X SAFE was a system that took up to a dozen informal sentences
that specified a piece of software and produced a formal specification.
GIST is a formal specification language that
attempted to minimize the translation from the way people think
about processes to the way they write about them.
They developed a prototype of a system to convert GIST to
natural language and they have a joint effort underway
with TRW to design a system to convert RSL specifications to
natural language. They also developed a system to
symbolically evaluate GIST specifications. They also have a natural
language behavior explainer.
%T New Products/Microcomputers
%J ComputerWorld
%D FEB 24, 1986
%V 20
%N 8
%P 89
%K T01 H02 Practical Artificial Intelligence VAX DS-32 AP-10
%X Practical Artificial Intelligence has announced the DS-32 and AP/10,
attached processors for the IBM Personal Computer and Digital
Equipment VAX designed to support artificial intelligence.
The DS-32 costs $2,700 and the AP/10 costs $6,000.
%T Ben Rosen's Ansa: Will it Ever be Another Lotus?
%J Business Week
%D MAR 3, 1986
%P 92-95
%V 2935
%K Paradox SRI AA09 H01
%X Discusses the founding of Ansa and the prospects for Paradox, a database
system with artificial intelligence features.
%A Mary Petrosky
%T Expert Software Aids Large Systems Design
%J Infoworld
%V 8
%N 7
%P 1+
%K AI01 AA08 H01 AT02 AT03
%X KnowledgeWare announced the Information Engineering Workstation, which
provides tools for data flow diagrams and action diagrams. It runs
on the IBM PC/AT and costs $7,500. I could not find an explanation of
where AI was used, in spite of the title of the article.
%T Expert System Moves Into Military Cockpit
%J Electronics
%V 58
%N 51
%P 15
%D DEC 23, 1985
%K AA18 Air Force Wright Aeronautical Laboratory Threat Expert Analysis System
AI01
%X The Air Force's Wright Aeronautical Laboratory, Wright-Patterson Air Force
Base had set a deadline of January 10 for the Threat Expert Analysis System,
a system that would warn pilots of enemy threats and recommend
possible responses.
%T Device Mixes Images from Eight Cameras
%J Electronics
%V 58
%N 51
%P 76
%D DEC 23, 1985
%K Pattern Processing Technologies Framesplitter AI06
%X Framesplitter is a system that combines the input from several
solid-state video cameras into a single composite image, allowing
a vision system to gain a 360-degree view while processing only one image.
%A Clifford Barney
%T Language Boils Down to Boolean Expressions
%J Electronics
%V 58
%N 51
%P 25-26
%D DEC 23, 1985
%K G. Spencer-Brown Wittgenstein Bertrand Russell Laws of Form
Advanced Decision Systems Air Force pictorial logic canonical forms Losp
Symbolics AI10 AI14 AA18 H02 T01 T02
%X Losp is a system based on the "Laws of Form" developed
by G. Spencer-Brown, a British mathematician who studied with
Bertrand Russell and Ludwig Wittgenstein. The system was developed
by Advanced Decision Systems
and will be put to use in an Air Force project on pictorial logic.
The language is being microcoded to run on a Symbolics workstation.
Lisp and Prolog will be translated to Losp.
------------------------------
End of AIList Digest
********************
∂13-Apr-86 0350 LAWS@SRI-AI.ARPA AIList Digest V4 #84
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Apr 86 03:50:24 PST
Date: Sat 12 Apr 1986 23:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #84
To: AIList@SRI-AI
AIList Digest Sunday, 13 Apr 1986 Volume 4 : Issue 84
Today's Topics:
Bibliography - Recent Articles #2
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #2
%A Clifford Barney
%T Expert Systems Makes it Easy to Fix Instruments
%J Electronics
%V 58
%N 51
%D DEC 23, 1985
%P 26
%K AI01 AA04 AA21 Ada Lockheed Missiles and Space Lockheed Expert
System
%X Lockheed Missiles and Space has developed a generic expert system
to assist in repairing and calibrating 55,000 instruments.
This system has been used successfully on a Hewlett-Packard 6130C
digital voltage source. The expert system was written in Ada. The
system is being applied to 20 different systems, including signal switching
and computer-aided design.
%A Robert T. Gallagher
%T French Maker Retools to Fight the Japanese
%J Electronics
%V 58
%N 51
%P 26-28
%D DEC 23, 1985
%K AA05 GA03 RTC La Radiotechnique Compelec cathode-ray tube AI07
%X RTC La Radiotechnique Compelec has converted a cathode-ray tube
plant to robotics. Robots are being used to place the
luminescent materials on the tube screens, to test the tubes, and to
place them in packing materials. The areas still requiring manual work are
fitting the shadow masks onto the CRTs' frames and the final test,
where tuning is done.
%A H. Berghel
%T Spelling Verification in Prolog
%J SIGPLAN Notices
%V 21
%N 1
%D JAN 1986
%P 19-27
%K T02
%X Describes a system to check words against a table and, if a word is
misspelled, to suggest possible correct spellings.
%A D. Brand
%T On Typing in Prolog
%J SIGPLAN Notices
%V 21
%N 1
%D JAN 1986
%P 28-30
%K T02
%T Advertisement
%J BYTE
%D JAN 1986
%V 11
%N 1
%P 348
%K T01 T02 T03 H01 AT01
%X Price list of AI-related products from The Programmer's Shop,
800-421-8006, 128B Rockland Street, Hanover, MA 02339
.TS
tab(~);
l l l.
EXSYS~PCDOS~$359
INSIGHT 1~PCDOS~$95
INSIGHT 2~PCDOS~$449
APES~~$359
ADVISOR~~$949
ES Construction~~$100
ESP~~ $845
Expert CHOICE~~$449
GC LISP (Large Model)~~$649
Compiler and LM Interpreter~~$1045
TLC LISP~CPM-86~$235
~MSDOS
Waltz LISP~CPM~$149
~MSDOS
ExperLisp~~$439
IQ LISP~~$155
TRANSLISP-PC~~$75
BYSO~~$125
MuLISP-86~~$199
ARITY PROLOG
~Compiler~$1950
~MSDOS~$495
MPROLOG~PCDOS~$725
PROLOG-1~~$359
PROLOG-2~~$1849
MicroProlog~~$229
Prof. Micro Prolog3~~$359
.TE
%A Hugh Aldersey-Williams
%T Computer Eyes Turn to Food
%J High Technology
%D JAN 1986
%P 66-67
%V 6
%N 1
%K Vision Systems International Nello Zuech Roger Brook strawberry
citrus juice mixed vegetables AI06
%X At the University of Florida, Gainesville, they are working on a
vision system to pick citrus fruit when it is ripe. Arthur D. Little
is working on a system that would determine whether the mixture in a package
of mixed vegetables contains the correct proportion of different ingredients.
%T International Robomation Gets Two Million in Orders
%J Electronic News
%D MAR 3, 1986
%P 46
%V 32
%N 1591
%K AI07 AA04 Chrysler AT&T Hewlett-Packard Zenith SMD printed-circuit
board solder paste
%X Orders included $750,000 from Chrysler for a surface-mounted device
inspection system, $400,000 from HP, $270,000 for inspection of through-hole
components, $400,000 from AT&T, and $305,000 from Zenith for high-throughput
SMD inspection.
%T Asahi to Market Lincoln Inspector
%J Electronic News
%D MAR 3, 1986
%P 46
%V 32
%N 1591
%K AA04 AI06 Lincoln Laser GA01 GA02
%X Asahi Optical Company has agreed to market Lincoln Laser Co's line
of automatic optical inspection systems in Japan. Lincoln Laser plans
to manufacture the equipment in Japan. First year sales are projected
to be 40 systems valued at approximately $16.7 million.
%A Peggy Watt
%T Expert System: Boeing AI Academy Schools In-House Talent
%J ComputerWorld
%D MAR 3, 1986
%V 20
%N 9
%P 1+
%K AI01 AT18 AA18 Janusz S. Kowalik connectors AA04 space station.
%X The U.S. Department of Defense announced in 1981 that artificial intelligence
would be a requisite in defense contract bids by the late 1980s. Boeing
Computer Services established an Artificial Intelligence Support
Center which graduates associates after a year of training including
developing a project of relevance to Boeing. The system accommodates
20 people in two classes scheduled each year and receives inquiries from
40 people a year out of 106,000 total employees of Boeing. They are
developing an expert system for process specs for connector assemblies.
It recommends actions in about 60% of the situations it encounters. It
runs in Prolog on a DEC VAX. They are developing an expert system
to monitor space-station cabin environment changes. Also developed
are systems for airplane part design, maintenance, and diagnosis.
Another, which helps determine air resistance, assists the aerodynamicist in
defining and evaluating parameters.
%T Top of the News
%J ComputerWorld
%D MAR 3, 1986
%V 20
%N 9
%P 1+
%K Kurzweil Applied Intelligence Voice Writer AI05
%X "Kurzweil Applied Intelligence, Inc.'s Voice Writer, a voice-recognition
word processing device that will handle discrete, noncontinuous speech
at up to 60 words per minute, is on track for a third-quarter introduction,
inventor Raymond Kurzweil disclosed last week." It will support between
5000 and 10000 words and will cost under $20,000.
%T Borland Enters AI Arena with Turbo Prolog Development Tool
%J ComputerWorld
%D MAR 3, 1986
%V 20
%N 9
%P 14
%K Turbo-Prolog Borland International T02 H01
%X Turbo-Prolog costs $99.95. It has an incremental compiler that
generates native code and linkable object modules compatible with the
IBM MS-DOS linker. It includes a full-screen editor,
pull-down menus, and graphical and text-based windows. It will be available
April 25. The next version of Turbo Pascal will be able to exchange information
with Turbo Prolog. The system runs at 100,000 LIPS.
%T New Products/Microcomputers
%J ComputerWorld
%D MAR 3, 1986
%V 20
%N 9
%K OPS-83 Production Systems Technologies T03 H01
%X Production Systems Technologies, Inc. has announced that OPS83 is
now available for use on the IBM PC. It costs $1,950.
%T New Products/Systems and Peripherals
%J ComputerWorld
%D MAR 3, 1986
%V 20
%N 9
%K Maxvideo Minvideo Datacube Multibus Addgen-1 frame store DSP Systems
AI06
%X Datacube Inc. has introduced MiniVideo, a real-time image
processing subsystem, and has added three modules to its MaxVideo product
line. The MiniVideo-10 and MiniVideo-7 are 8-bit 512-by-512 and 384-by-512
boards for Intel Multibus or IIbx-based computers.
DSP Systems has announced a frame store that can capture a snapshot
of 50-MHz data. It can store up to 32K 16-bit words.
%A Gadi Kaplan
%T Industrial Electronics
%J IEEE Spectrum
%V 23
%N 1
%D JAN 1986
%P 61-64
%K Fujitsu General Electric process control Foxboro Farot-M6 AI06 AA05 AI07
counterweights
%X GE has developed a system that can weld using inert gas at 40 mm per second
or about twice the rate of any other system. It uses a vision system.
General Electric has developed an expert system tool called GEN-X.
Foxboro announced controllers with 200 rules. Japanese manufacturers last year
made 50,000 industrial robots valued at $1.2 billion. Fujitsu expects
Japanese industrial robot sales of $2.1 billion, rising to $4 billion by the
year 2000.
Japanese auto manufacturers buy 40 percent of the robots produced. Farot
M6 robots made by Fujitsu have two arms which can be worked in coordination.
Fujitsu has eliminated the need for counterweights and can place components
with 30 micrometer accuracy at speeds up to 2 meters per second.
%A Mark A. Fischetti
%A Glenn Zorpette
%T Power and Energy
%J IEEE Spectrum
%V 23
%N 1
%D JAN 1986
%K AA04 AI01 Westinghouse Electric Corporation nuclear power Babcock and Wilcox
EG&G Idaho reactor
%X "Westinghouse Electric Corporation of Pittsburgh, PA offers
the Genaid diagnostic software package to monitor changing conditions
in power plant generators, analyze them, and warn plant operators of
potential trouble." EG&G Idaho of Idaho Falls has a Reactor Safety Assessment
system which "processes large amounts of data from a nuclear power
plant during an emergency, makes diagnoses, and outlines the consequences of
subsequent actions. After final refinements, this expert system program
is to go on line this year at the Nuclear Regulatory Commission's
Operations Center in Washington. The system was
developed for use with Babcock and Wilcox pressurized-water reactors and
will be adapted for use with other reactors." [In the Spang-Robinson
report, they indicated that the Japanese are putting major amounts
of money into expert systems for nuclear reactor operations. See my
summary for more info. LEFF]
%A Richard Brandt
%T Micromechanics: The Eyes and Ears of Tomorrow's Computers
%J BusinessWeek
%D MAR 17, 1986
%P 88-89
%N 2937
%K AI07 AI06 signature verification Novasensor Schlumberger
diabetes insulin Clini-Therm Corporation NEC Solartron Transducer
Hiroshi Tanigawa
%X Micromechanics, the making of mechanical sensors completely out of
semiconductors, is a $250,000,000 business. Europe is increasing
its market share. The most widely used devices are pressure sensors
with a silicon chip with a hole etched nearly through it leaving
a thin membrane. Hitachi sells about a million such sensors
per year at ten dollars apiece. Millar
Instruments puts such sensors at the end of a blood pressure monitor to
take readings inside a blood vessel. Researchers
at MIT are working on a system that will translate nerve impulses
into controls for prosthetics. The MIT team anticipates the first
tests on humans within three years. There are devices with a set of
diving-board-like cantilevers for measuring accelerations. IBM is using
such a device in a pen to detect the hand motions in writing a signature.
This data is analyzed to determine whether the signer is a forger.
Texas Instruments is perfecting a silicon chip
with one million mirrors for use in optical computing.
%T AI to Dominate Optics Symposium
%J Electronics
%D MAR 3, 1986
%P 70
%V 59
%N 9
%K George Gilmore AI06 evidencing AA18
%X Discussion of the Society of Photo-Optical Instrumentation
Engineers' Symposium on Optics and its symposium on Applications
of Artificial Intelligence III.
%A Alice LaPlante
%T Stock Market Finds AI Attractive Buy
%J InfoWorld
%V 8
%N 9
%D MAR 3, 1986
%K Teknowledge Harvey Newquist Intellicorp AT16
%X Discussion of Teknowledge's new public offering.
%A Ivars Peterson
%T Computing Art
%J Science News
%V 129
%P 138-140
%N 9
%D MAR 1, 1986
%K Richard Diebenkorn Frank Lloyd Wright Architecture grammar art
architecture Russell Kirsch Joan Marvin Minsky AA25
%X Scientists have developed
grammars for Frank Lloyd Wright's architecture and Richard Diebenkorn's
"Ocean Park" canvases. These have been used to generate works
that appear to be by the original artist. Diebenkorn, when shown the works,
said, "I looked and felt immediate recognition."
%A Scott Mace
%T Microrim Team To Study Data Management
%J Infoworld
%D March 10, 1986
%V 8
%N 10
%K AA09
%X Microrim is setting up an R&D group to exploit what it calls
a 'potentially revolutionary' technology for making database
management easier. [Microrim makes the R:base database products
for microcomputers and CLOUT, a natural language interface.]
%A Karen Sorensen
%T Scientific Application for Expert System in Works
%J Infoworld
%D March 10, 1986
%V 8
%N 10
%K gas chromatography Award Software AI01 AA02 Award Software C H01
%X Award Software is developing an expert system
for identifying chemical substances. It is designed for
use with gas chromatography. They are using C to develop the
software.
%T Infomarket
%J Infoworld
%D March 10, 1986
%V 8
%N 10
%K H01 T03 Intelligent Machine Co. Knowledge Oriented Language
Knowol Rock Mountain Medical Software HouseCall AI01 AA01
%X Intelligent Machine Co. is advertising The Knowledge Oriented
Language (Knowol) for $39.95. HouseCall is a home medical system
which can make over 400 diagnoses. It costs $49.95 and runs
on IBM PCs and Apples.
%A Daniel R. Pfau
%A Barry A. Zack
%T Understanding Expert System Shells
%J Computerworld Focus
%D February 19, 1986
%V 20
%N 7A
%K T03
%P 23-24
%A Girish Parikh
%T Restructuring Your Cobol Programs
%J Computerworld Focus
%D February 19, 1986
%V 20
%N 7A
%P 39-42
%K AI01 AA08 Cobol-SF
%A Elisabeth Horwitt
%T LISP Systems Tied to SNA
%J ComputerWorld
%D MAR 10, 1986
%V 20
%N 10
%P 1
%K Symbolics H02 AA06 CICS IBM
%X Symbolics introduced a product to allow its Symbolics 3600s
to communicate via SNA. It also provides an interface
to use CICS to access VSAM files. The hardware plus software
costs $17,900 for the first Symbolics and $4,900 for each additional
Symbolics or IBM machine.
%A Eric Bender
%T The Concerted Kurzweil Effort
%J ComputerWorld
%D MAR 10, 1986
%V 20
%N 10
%P 33+
%K Voice Writer AI05
%X Describes a demonstration of Kurzweil's add-on for the IBM PC to
do speech recognition. Kurzweil will be selling a Voice
Writer which will handle 5,000 words and allow eight users. It uses
parallel processing to accept dictation at 60 words per minute.
%T New Products/Microcomputers
%J ComputerWorld
%D MAR 10, 1986
%V 20
%N 10
%P 81
%K AA08 H01
%X P-Cube Corp. has announced Mansys/IRM, a "knowledge-based"
system to help assess the quality of the procedures and processes
within an information systems department. It costs $1,800 and runs
on IBM PCs.
%T Spin-Offs
%J IEEE Spectrum
%D MAR 1986
%V 23
%N 3
%P 17
%K Color Systems Technology colorization AA25
%X describes the system used to color old movies.
%A Ernest W. Kent
%A Michael O. Shneier
%T Eyes for Automatons
%J IEEE Spectrum
%D MAR 1986
%V 23
%N 3
%P 37
%K Honeywell Navy AI06 AI07 cleaning agriculture printed circuit-board
propeller CAD/CAM Control Automation Interscan Odetics range images
Environmental REsearch Institute AA18 AA19 Automatix Advanced Vision
Systems ITMI Marketing Corp Analog Devices Automation Intelligence
%X At Honeywell, they are using a vision system to identify missing
leads in printed-circuit boards. It uses four video cameras 90 degrees
apart to capture light reflections. There are 200 companies offering
a product or service related to machine vision; 50 of these
offer complete systems. At a Navy shipbuilding facility, a vision
system inspects propellers and compares the results against the
CAD/CAM database to see whether each was made correctly. Autonomous
mobile robots are under commercial development for
materials transport, commercial cleaning, and construction.
Total sales for machine vision systems have doubled in each of the last
two years.
%A Mark A. Fischetti
%T A Review of Progress at MCC
%J IEEE Spectrum
%D MAR 1986
%V 23
%N 3
%P 76-82
%K AA04 VLSI-CAD reconvergent fanout problem H02 LDL AI10 T02 AA09
%X In the VLSI-CAD area, they are using 81 Lisp machines.
They developed a module editor which lays out circuitry
graphically, and they have developed an algorithm for solving
the reconvergent fanout problem.
Discusses the knowledge base that is supposed to contain
"common sense"; they have developed a test application that will
help IC chip designers. The database group is developing
a system to compile large logic systems on disk.
%A Glenn Zorpette
%T Robots for Fun and Profit
%J IEEE Spectrum
%D MAR 1986
%V 23
%N 3
%P 71-75
%K AA25 AI07 Survival Research Labs
%X discusses various robots that are part of art shows or used
for entertainment. Survival Research Labs puts on
demonstrations where large mobile robots destroy props, animal
carcasses or one another.
%T Gould Acquires Vision Systems Unit
%J Electronic News
%V 32
%N 1592
%D MAR 10, 1986
%P 14
%K Gould Automated Intelligence Opti-Vision AI06 AT16
%X Gould has acquired the Vision Systems division of
Automated Intelligence. This division makes the Opti-Vision system.
------------------------------
End of AIList Digest
********************
∂13-Apr-86 0519 LAWS@SRI-AI.ARPA AIList Digest V4 #85
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Apr 86 05:19:07 PST
Date: Sat 12 Apr 1986 23:45-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #85
To: AIList@SRI-AI
AIList Digest Sunday, 13 Apr 1986 Volume 4 : Issue 85
Today's Topics:
Bibliography - Recent Articles #3
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #3
Definitions
D BOOK24 The World Yearbook of Robotics Research and Development, 1985\
%I Gale Research Corporation\
%D 1985
D MAG14 Computer-Aided Design\
%V 17\
%N 9\
%D NOV 1985
D MAG15 Theoretical Computer Science\
%V 39\
%N 2-3\
%D AUG 1985
D BOOK25 Analysis of Concurrent Systems\
%E B. T. Denvir\
%E W. T. Harwood\
%E M. I. Jackson\
%E M. J. Wray\
%S Lecture Notes in Computer Science\
%V 207\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG16 Soviet Journal of Computer and Systems Sciences\
%V 23\
%N 4\
%D JUL-AUG 1985
D MAG17 International Journal of Man-Machine Studies\
%V 23\
%N 5\
%D NOV 1985
D MAG18 Cybernetics and Systems\
%V 16\
%N 1\
%D 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
%T Intelligent Robots and Computer Vision
%I SPIE -- The International Society for Optical Engineering
%D September 16-20, 1985
%N 579
%C Cambridge, MA
%E David P. Casasent
%K AT15 AI07 AI06
%A David Nitzan
%T Development of Intelligent Robots: Achievements and Issues
%B BOOK24
%K AI07
%A Ray Basey
%T Training for the Introduction of Robots New Technology and Control Systems,
Operation and Maintenance
%B BOOK24
%K AI07 AT18
%A H. H. Rosenbrock
%T Social and Engineering Design of a Flexible Manufacturing System
%B BOOK24
%K AI07 AA05 O05
%A Igor Aleksander
%T Extension of Robot Capabilities Through Artificial Vision: A Look into
the Future
%B BOOK24
%K AI07 AI06
%T The World Directory of Robotics Research and Development Activities
%B BOOK24
%K AI07 AT19
%X info on robotics research in 26 countries, list of groups
%T A Guide to Grant Awarding Bodies
%B BOOK24
%K AI07 AT19
%A Phillippe Coiffet
%T Robot Technology: Modeling and Control
%V 1
%I Prentice-Hall
%D 1982
%K AI07 AT15
%A Philippe Coiffet
%T Robot Technology: Interaction with the Environment
%V 2
%I Prentice-Hall
%D 1983
%K AI07 AT15
%A Jean Vertut
%A Philippe Coiffet
%T Robot Technology: Teleoperation and Robotics: Evolution and Development
%V 3A
%I Prentice-Hall
%D 1986
%K AI07 AT15 AT20
%A Jean Vertut
%A Philippe Coiffet
%T Teleoperations and Robotics: Applications and Technology
%V 3B
%I Prentice-Hall
%D 1985
%K AI07 AT15
%A F. Lhote
%T Robot Components
%V 4
%I Prentice-Hall
%D 1983
%K AI07 AT15
%A Michel Parent
%A Claude Laureau
%T Robot Technology: Logic and Programming
%I Prentice-Hall
%D 1985
%V 5
%K AI07 AT15
%T Robot Technology: Decision and Intelligence
%I Prentice-Hall
%D (not yet published)
%K AI07 AT15
%V 6
%A Alain Liegeois
%T Robot Technology: Performance and Computer-Aided Design
%I Prentice-Hall
%D 1985
%K AI07 AT15 AA05
%V 7
%A John Haugeland
%T Artificial Intelligence: The Very Idea, 1985
%I MIT Press
%D 1985
%K AT15
%A H. J. De Man
%A I. Bolsens
%A E. vanden Meersch
%A J. van Cleynenbreugel
%T DIALOG: An Expert Debugging System for MOS VLSI Design
%J IEEE Transactions on Computer-Aided Design
%D JULY 1985
%V CAD-4
%N 3
%P 303-311
%K AI01 AA04
%A Michael A. Rosenman
%A John S. Gero
%T Design Codes as Expert Systems
%J MAG14
%P 399-409
%K AA05 AI01
%A Hitoshi Furuta
%A King-Sun Fu
%A James T. P. Yao
%T Structural Engineering Applications of Expert Systems
%J MAG14
%P 410-19
%K AA05 AI01
%A Mary Lou Maher
%T HI-RISE and Beyond: Directions for Expert Systems in Design
%J MAG14
%P 420-427
%K AA05 AI01
%A A. D. Radford
%A J. S. Gero
%T Towards Generative Expert Systems for Architectural Detailing
%J MAG14
%P 428-435
%K AA05 AI01
%A David C. Brown
%T Failure Handling in A Design Expert System
%J MAG14
%P 436-442
%K AA05 AI01
%A Daniel R. Rehak
%A H. Craig Howard
%T Interfacing Expert Systems with Design Databases in Integrated
CAD Systems
%J MAG14
%P 443-454
%K AA05 AI01
%A Anna Hart
%T Knowledge Elicitation: Issues and Methods
%J MAG14
%P 455-462
%A John S. Gero
%T Bibliography of Books on Artificial Intelligence with
Particular Reference to Expert Systems and Knowledge Engineering
%J MAG14
%P 463-464
%K AI01 AT09
%A D. Kapur
%A P. Narendran
%A M. S. Krishnamoorthy
%A R. McNaughton
%T The Church-Rosser Property and Special Thue Systems
%J MAG15
%P 123-134
%K AI14
%A C. Bohm
%A A. Berarducci
%T Automatic Synthesis of Typed Lambda-Programs on Term Algebras
%J MAG15
%P 135-154
%K AI14 AA08
%A M. W. Bunder
%T An Extension of Klop's Counterexample to the Church-Rosser Property
to Lambda-Calculus with Other Ordered Pair Combinators
%J MAG15
%P 337
%K AI14
%A M. Rodriguez-Artalejo
%T Some Questions About Expressiveness and Relative Completeness in Hoare's
Logic
%J MAG15
%P 189-206
%K AA08
%T The Functions of T and Nil in Lisp
%J Software Practice and Experience
%V 16
%N 1
%D JAN 1986
%P 1-4
%K T01
%A R. Milner
%T Using Algebra for Concurrency-Some Approaches
%B BOOK25
%P 7-25
%K AA08
%A H. Barringer
%A R. Kuiper
%T Towards the Hierarchical, Temporal Logic, Specification of
Concurrent Systems
%B BOOK25
%P 157-183
%K AA08
%A R. Koymans
%A W. P. de Roever
%T Examples of a Real-Time Temporal Logic Specification
%B BOOK25
%P 231-251
%K AA08
%A V. S. Medovyy
%T Translation from a Natural Language into a Formalized Language as a
Heuristic Search Problem
%J MAG16
%P 1-9
%K AI02 AI03
%A M. K. Valiyev
%T On Temporal Dependencies in Databases
%J MAG16
%P 10-17
%K AA09
%A Z. M. Kanevskiy
%A V. P. Litvinenko
%T Minimization of the Average Duration of a Discrete Search Procedure
%J MAG16
%P 126-129
%K AI03
%A A. S. Yuschenko
%T The Problem of Dynamic Control of Manipulators
%J MAG16
%P 139
%K AI07
%A I. Vessey
%T Expertise in Debugging Computer Programs - A Process Analysis
%J MAG17
%P 459-494
%K AA08 AI08
%A J. H. Boose
%T A Knowledge Acquisition Program for Expert Systems Based on Personal
Construct Psychology
%J MAG17
%P 495-526
%K AI01
%A E. J. Weiner
%T Solving the Containment Problem for Figurative Language
%J MAG17
%P 527-538
%K AI02
%A R. R. Yager
%T Explanatory Models in Expert Systems
%J MAG17
%P 539-550
%K AI01
%A T. Munakata
%T Knowledge-Based Systems for Genetics
%J MAG17
%P 551-562
%K AI01 AA10
%A Ronald R. Yager
%T On the Relationship of Methods of Aggregating Evidence in Expert Systems
%J MAG18
%P 1-22
%K AI01
%A Ronald R. Yager
%T Strong Truth and Rules of Inference in Fuzzy Logic and
Approximate Reasoning
%J MAG18
%P 23-64
%K AI01 O04
%A Witold Pedrycz
%T Structured Fuzzy Models
%J MAG18
%P 103
%K O04
------------------------------
End of AIList Digest
********************
∂13-Apr-86 2304 LAWS@SRI-AI.ARPA AIList Digest V4 #86
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 13 Apr 86 23:04:07 PST
Date: Sun 13 Apr 1986 20:05-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #86
To: AIList@SRI-AI
AIList Digest Monday, 14 Apr 1986 Volume 4 : Issue 86
Today's Topics:
Queries - String Reduction & Imagen Support,
Logic & Linguistics - Michael Moss Collection,
Speech - Expert Conversationalist,
Brain Theory - Comments on Kort's Article
----------------------------------------------------------------------
Date: 10 Apr 86 22:04:26 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!sjl@ucbvax.berkeley.edu
(S.J.Leviseur)
Subject: String reduction
Does anybody have any references to articles on string reduction
as a reduction technique for applicative languages (or anything
else)? They seem to be almost impossible to find! Anything welcome.
Thanks
sean
sjl@ukc.ac.uk
sjl@ukc.uucp
sjl%ukc@ucl-cs.edu
------------------------------
Date: Fri, 11 Apr 86 17:28:25 EST
From: "Srinivasan Krishnamurthy" <1438@NJIT-EIES.MAILNET>
Subject: Vendor Support: SYMBOLICS
I need vendor support information for IMAGEN Laser Printer on
SYMBOLICS 3640. Any pointers will be greatly appreciated.
Please send mail to the address given below:
Net: Srini%NJIT-EIES.MAILNET@MIT-MULTICS.ARPA
Thanks.
Srini.
------------------------------
Date: 10 April 1986 2355-EST
From: Es Library@A.CS.CMU.EDU
Subject: E&S Library news
[Forwarded from the CMU bboard by Laws@SRI-AI.]
** The CMU Library system has purchased the science library of the late
Michael Moss in England. The collection consists of over 11 thousand
volumes, mostly in Logic, Linguistics and Philosophy of Science and of
Language. After many months of efforts on the part of a number of
people in the administration and in the CS and Philosophy departments,
the collection will be shipped from England later this week.
The Moss Collection will substantially strengthen the library services
to the newly created Philosophy Department and Program in Computational
Linguistics, and will complete, and go beyond, the rebuilding of the
Logic collection, of which much was vandalized a few years ago.
[...]
-- Daniel Leivant
------------------------------
Date: 11 Apr 86 00:06:36 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!michaelm@ucbvax.berkeley.edu
(michael maxwell)
Subject: Expert conversationalist :-)
One of the problems in AI, specifically in the field of natural language, has
been the problem of endowing an artificially intelligent program with the
ability to converse in an intelligent manner, observing such principles of
conversation as turn taking, empathy with the conversational partner, etc.
The problem has been solved! I recently saw in action an expert
conversationalist at a local store. The AI program was cleverly disguised as
a stuffed bird. However, when you talk to it, it talks back (in a bird
language; doubtless an English language program will soon be out, as the
chances for making large profits would seem to be much greater.)
Before you dismiss this as a simple case of an electronic box that beeps
when it detects a sound, let me tell you about some of its capabilities.
First, it demonstrates true turn-taking abilities. It does *not* simply
listen for sounds and beep back; rather, it waits until you are done talking,
and then responds.
Second, it does *not* simply beep back; rather, it tailors its response to you.
If you talk in an excited voice, it responds in an excited voice; if you
talk calmly, it uses a much more subdued response. It tailors both its pitch
and speed of speech to your mood as well. Genuine empathy!
Think of the possibilities; you could hook it up in place of your phone
answering machine to respond to all the carpet cleaning, chimney sweeping,
and donate-to-charity-X calls that you get! More relevant to this net, you
could hook up your favorite implementation of Eliza + a speech generation
device, and have a true Rogerian psychologist at your beck and call. I think
I'll buy some stock in this company...
--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 9 Apr 86 03:08:09 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.berkeley.edu (Michael Sellers)
Subject: Re: Computer Dialogue and *bigtime misinformation* (kind of long)
> Joseph Mankoski writes a thought provoking article on whether
> survival logic in NASA computers has any connection to human
> survival instincts wired into to our brains from birth.
I would have to agree there is *some* connection, but I would put it
on the level of the survival instincts of planaria or hydra, not (even
infant) humans. We are much, much more complex than that.
> I have been pondering this question myself. [...]
>
> Joseph asks for a theory of feelings. As it happens, I just wrote
> a brief article on the subject, which may or may not be suitable
> for publication after editorial comment and revision. Just for
> the hell of it, let me append the article and solicit comments
> from netters interested in this topic.
Okay, here goes. It's really difficult for me to keep this from becoming a
big flame-out (this is the second time I've tried to respond; this time I
won't kill it before it posts). While I'm sure Barry meant well (unless this
is just badly written satire -- which I would find hard to believe), the
following article is basically a big pile of misinformation and what seems
to be idle conjecture. The "acknowledgements" at the end only serve to give
this article a legitimacy it does not deserve by claiming nonspecific sources
and thanking PhD types (what are these folks doctors *of*, Barry?). This isn't
meant as a personal flame; it's just that I've seen so much hype/misinformation/
crapola about the brain/mind/AI recently that when I saw this I couldn't keep
quiet. It is possible, I suppose, that a large part of neuroscience has
completely turned around in the last six months or so...but I doubt it. If
this is true, please excuse my comments as the ravings of an old-worlder.
Of course, I'd have to see your sources before believing you. I'd be *glad*
to refer you to mine.
For brevity (ha!), I haven't re-posted Barry's entire article; nor have I
noted/flamed all the things I found objectionable. Some of the assertions
in this article, however, could not be ignored.
> ==================== Article on Feelings ========================
>
> A Simplified Model of the Effects of Perceived Aggression
> in the Work Environment
>
> Barry Kort
>
> Copyright 1986
>
> Introduction
>
> [...]
>
> The effects that I wish to investigate are not the
> behavioral responses, but the more fundamental internal body
> sensations or somatic reactions which lie behind the
> subsequent behavioral response. [...]
>
> A Model of Nature of Aggressive Behavior
>
> It has been said that civilization is a thin veneer.
> Underneath our legacy of some 5000 years of civilization
> lies our evolutionary past. Deep within the human brain one
> can find the vestiges of our animal nature-the old mammalian
> brain, the old reptilian brain. Of principal interest here
> are two groups of structures responsible for much of our
> "wired-in" instincts.
Not quite instincts. The basal ganglia, which make up most of what is
sometimes called the old mammalian brain (itself enclosing what some call
the R-complex, or reptilian brain), mainly govern biological drives and needs
such as hunger, thirst, sex, etc. The main governor of these is the
hypothalamus, working in tandem with the pituitary. The thalamus, amygdala,
and several other nuclei also contribute to these drives, and have some part
in our emotional responses including fear, anger, happiness, nervousness, etc.
But these are not instincts, nor are they instinctive.
> The cerebellum is responsible for much of our risk-taking,
> self-gratifying drives, including the aggressive sex drives.
> It is the cerebellum that says, "Go for it! This could be
> exciting! Damn the torpedoes, full speed ahead."
I couldn't believe this when I first read it. I would still like to believe
that Barry mistyped or misread this. The cerebellum (the wrinkled-looking
thing that hangs under the back of the cerebrum) plays *ABSOLUTELY NO PART* in
our rational, cognitive, or emotive behavior!!! What it does do is play a
major role in coordinating complex motor actions, such as tying your shoes or
dancing the foxtrot (especially, it seems, learned and often repeated actions
such as these, as opposed to one-time actions like climbing a tree). I can't
imagine where you got this piece of information, Barry. It sounds like it came
out of Nat'l Enquirer University. The aforementioned hypothalamus does play
a large part in assertive or aggressive action, though this is mediated by the
frontal and parietal portions of the cortex and the amygdala and caudate nuclei
(in case you wanted to know :-).
> The limbic system, on the other hand, is responsible for
> self-protective behaviors. The limbic system perceives the
> threats to one's safety or well-being, and initiates
> protective or counter measures. The limbic system says,
> "Hold it! This could be dangerous! We'd better go slow and
> avoid those torpedoes."
This rates most of my paragraph above. I've never seen anything about
cautious behaviors arising in the limbic system, though I know of no reason
why some components of such behavior couldn't begin there. The level of
behavior suggested here is way too complex for this stage of the processing.
Cognitive overlays of our internal biochemical states make up the majority of
what we perceive as emotional states/responses.
> Rising above it all resides the neocortex or cerebrum. This
> is the "new brain" of homo sapiens which is the seat of
> learning and intelligence. It is the part that gains
> knowledge of cause and effect patterns, and overrules the
> myopic attitude of the cerebellum and limbic system.
> -> Occasionally, the cerebral cortex is faced with a novel
> | situation, where past experience and learning fail to
> | provide adequate instruction in how to proceed. In that
> | case, the usual patterns of regulation are ineffective,
> | and the behavioral response may revert back to the more
> | primitive instincts.
This is an interesting piece of conjecture, and one I've not seen recently.
It doesn't seem too likely, however, since we have (evolutionarily) paid dearly
for our enlarged cortices. Why would we throw out all our observational &
computational power just because a situation doesn't match any previously
encountered? This would seem to be a marvelous waste of a very valuable
resource. It is likely that when the perceived danger or novelty of a
situation is *too* great that all our finely-tuned observational and learning
powers are thrown out the window in favor of old tried-and-true methods, but
this is not as general as is stated here.
> [...]
>
> Somatic Reactions to Stress
>
> When an individual is presented with an unusual situation,
> the lack of an immediately obvious method of dealing with it
> may lead to an accumulation of stress which manifests itself
> somatically. For instance, first-time jitters may show up
> as a knotting of the stomach (butterflies), signaling fear
> (of failure). A perceived threat may cause increased heart
> rate, sweating, or a tightening of the skin on the back of
> the neck. (This latter phenomenon is commonly known as
> "raising of one's hackles," which in birds, causes the
> feathers to stand up in display mode, warning off the
> threatening invader.) Teeth clenching, which comes from
> repressing the urge to express anger, leads to a common
> affliction among adult males-temporal mandibular joint
> (TMJ). Leg shaking and pacing indicate a subliminal urge to
> flee, while cold feet corresponds to frozen terror (playing
> 'possum). All of these are variations on the
> fight/flight/freeze instincts mediated by the limbic system.
> They often occur without our conscious awareness.
These are also manifestations of the activation of the sympathetic nervous
system, probably by the release of epinephrine (adrenalin) into the blood-
stream. This can occur with a variety of different emotions, and is much
less specific than we are led to believe here. (The use of analogies from
biology and the use of an acronym also bug me in this context, since they
also seem to lend legitimacy to what is a not very well thought out
supposition.) All of these are the result of bloodflow being directed away
from non-vital areas (digestive tract, extremities -- butterflies and cold
feet) and toward more vital areas (head and muscles -- facial flush, leg
shaking, etc) in addition to other secondary effects of the adrenaline
(increased heart/respiration rate, sweating, skin tightening).
> [...] A person's awareness of and
> sensitivity to such somatic feelings may affect his mode of
> expression. The somasthetic cortex is the portion of the
> brain where the body stresses are registered, and this
> sensation may be the primary indication that a stressor is
> present in the environment. A challenge for every
> individual is to accurately identify which environmental
> stimulus is linked to which somatic response.
The somasthetic [portion of the] cortex does more than register body
stresses. This is the area where *all* sensory input for the body surfaces
is perceived. While stressors in the environment can have somatic effects,
these do not have a one-to-one (or even a few-to-a few) correspondence with
the area of the body or the type of response given. *ALL* stress, if it is
bad enough, will affect your body (I sometimes get the "runs" when things get
REAL bad), but this effect is not likely to be consistently manifested in one
part of your body or with one single reaction.
> Somatic responses such as those outlined above are
> intimately connected with our expressed feelings, which
> usually are translated into some behavioral response along
> the axis from aggressive to assertive to politic to
> nonassertive to nonaggressive.
This is incomplete at best. It is unrealistic to limit the translation of
somatic effects into one spectrum of behavioral states/effects, and doing so
vastly oversimplifies the situation as well (some oversimplification is
inevitable, but not to the extent that you lose all informational content of
the thought!).
> The challenge is to find and
> effectuate the middle ground between too much communication
> and too little. The goal of the communication is to
> identify the cause and effect link between the environmental
> stressor and the somatic reaction, and from the somatic
> reaction to the behavioral response. The challenge is all
> the more difficult because the most effective mode and
> intensity of the communication depends on the maturity of
> the other party.
This sounds to me for all the world like a paragraph off of the back of a
badly researched pop-psych book. I'm somewhat of a theoretical conservative; I
don't like to see new and wild theories/models thrown around without proper
thought and research behind them. While the sentiment here seems to be good,
the assumptions and assertions are a mishmash of misinformation, hopeful
conjecture, and psych 101.
> Acknowledgements
>
> The original sources for the ideas assembled in this paper
> are too diffuse to pinpoint with completeness or precision.
> However, I would like to acknowledge the influence of so
> many of my colleagues who took the time to contribute their
> ideas and experiences on the subject matter. I especially
> would like to thank Dr. John Karlin, Dr. R. Isaac Evan, and
> Dr. Laura Rogers who helped me shape and test the models
> presented here.
Like I said, who are these folks, and what sort of feedback did they
give you? While I'm at it, is this article being published? If so, where,
and what editor let it pass by?!
> Comments are invited.
>
> --Barry Kort ...ihnp4!houxm!hounx!kort
Well, you asked. I'd be more than happy to hear any comments to my comments,
and/or to view any sources anyone has. I have them in abundance myself.
None of this has been intended as a personal flame. I am just speaking out
against what is a glaring example of some of the half-baked theories being
slung around today. If you want to attack *my* assertions, go ahead (I'm
sure there's room for everybody :-). All personal flames will be sent directly
to /dev/uranus without comment.
My address is ...ihnp4(etc)!tektronix!tekecs!mikes
Mike Sellers
"The strength and weakness of youth is that
it cannot see its own strength and weakness."
------------------------------
End of AIList Digest
********************
∂14-Apr-86 0117 LAWS@SRI-AI.ARPA AIList Digest V4 #87
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Apr 86 01:17:09 PST
Date: Sun 13 Apr 1986 20:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #87
To: AIList@SRI-AI
AIList Digest Monday, 14 Apr 1986 Volume 4 : Issue 87
Today's Topics:
Philosophy - Wittgenstein & Computer Consciousness
----------------------------------------------------------------------
Date: 9 Apr 86 00:18:00 GMT
From: pur-ee!uiucdcs!uiucdcsp!bsmith@ucbvax.berkeley.edu
Subject: Re: Natural Language processing
You are probably correct in your belief that Wittgenstein is closer to
the truth than most current natural language programming. I also believe
it is impossible to go through Wittgenstein with a fine enough toothed
comb. However, there are a couple of things to say. First, it is
patently easier to implement a computer model based on 2-valued logic.
The Investigations have not yet found a universally acceptable
interpretation (or anything close, for that matter). To try to implement
the theories contained within would be a monumental task. Second, in
general it seems that much AI programming starts as an attempt to
codify a cognitive model. However, considering such things as grant
money and egos, when the system runs into trouble, an engineering-type
solution (ie, make it work) is usually chosen. The fact that progress
in AI is slow, and that the great philosophical theories have not yet
found their way into the "state of the art," is not surprising. But
give it time--philosophers have been working hard at it for 2500 years!
Barry Smith
------------------------------
Date: 8 Apr 86 00:32:00 GMT
From: ihnp4!inuxc!iubugs!iuvax!marek@ucbvax.berkeley.edu
Subject: Re: Natural Language processing
Interestingly enough, sentiments similar to your endorsement of L.W. are
strongly voiced with respect to Charles Sanders Peirce by semioticians.
From what I can surmise about Peircean thought, their thrust (or, trust)
appears questionable. I am not implying that this necessarily casts a pall
on the Vienna School, but my present inclination is to read the Dead Greats
for inspiration, not vindication or ready-made answers.
-- Marek Lugowski
Indiana U. CS Dept.
Bloomington, Indiana 47405
marek@indiana.csnet
--------
``I mistrust all systematizers and avoid them. The will to a system is
a lack of integrity'' -- Friedrich Nietzsche (``Twilight of the Idols, or
How One Philosophizes with a Hammer'')
``Onwards, hammerheads, bright and dangerous, we're big and strong and
we're sure of something'' -- Shriekback (``Oil and Gold'')
------------------------------
Date: Sat, 12 Apr 86 23:06:42 est
From: Nigel Goddard <goddard@rochester.arpa>
Reply-to: goddard@rochester.UUCP (Nigel Goddard)
Subject: Re: computer consciousness
In article <8604110647.AA25206@ucbvax.berkeley.edu> "CUGINI, JOHN"
<cugini@nbs-vms.ARPA> writes:
>
>Thought I'd jump in here with a few points.
>
...
>
>3. Taking up the epistemological problem for the moment, it
>isn't as obvious as many assume that even the most sophisticated
>computer performance would constitute *decisive* evidence for
>consciousness. Briefly, we believe other people are conscious
>for TWO reasons: 1) they are capable of certain clever activities,
>like holding English conversations in real-time, and 2) they
>have brains, just like us, and each of us knows darn well that
>he/she is conscious. Clearly the brain causes/supports
>consciousness and external performance in ways we don't
>understand. A conversational computer does *not* have a brain;
>and so one of the two reasons we have for attributing
>consciousness to others does not hold.
>
It is not just having a brain (for which most of us have no direct evidence
anyway), but having a head, body, mouth, eyes, voice, emotional sensitivity
and many other supporting factors (no one of which is *necessary*, but the
more of them there are, the better the evidence). I guess a brain is necessary,
but were one to come across a brain with no body, eyes, voice, ears or other
means for verifying its activity, would one consider it to be conscious?
Personally I think that the only practical criteria (i.e. the ones we use
when judging whether this particular human or robot is "conscious") are
performance ones. Is a monkey conscious? If not, why not? There are
people I meet who I consider to be very "unconscious", i.e. their stated
explanations of their motives and actions seem to me to
completely misunderstand what I consider to be the
*real* explanations. Nevertheless, I still think they are conscious
entities, and the only way I can rationalize this paradox is that I think
they have the ability to learn to understand the *real* reasons for their
actions. This requires an ability to abstract and to make an internal model
of the self, which may be the main factors underlying what we call
consciousness.
Nigel Goddard
------------------------------
Date: 8 Apr 86 09:57:10 GMT
From: hplabs!qantel!lll-lcc!lll-crg!styx!lognet2!seismo!ll-xn!topaz!harvard
!h-sc1!pking@ucbvax.berkeley.edu
Subject: Re: Computer Dialogue
In all this discussion of "feelings," "survival instinct," and
"consciousness," one point is being overlooked. That is, can you
really say that a behavioral reaction (survival instinct) is a
feeling if the animal or computer has no consciousness?
Joseph Mankoski asked whether or not one could say that the
shuttle's computers were displaying a form of "programmed
survival instinct." I think that the answer is yes. This does
not mean that shuttle missions were aborted because the computer
wanted to save itself. Biologists, however, are quick to point
out that cats run away from dogs not because they want to save
themselves, but because the sight of a dog triggers a cat's
flight (abort) mechanism. The net effect of the cat's behavior
is to increase its chances of survival, but the cat (and the
shuttle's computer) has no "desire to survive."
But we, as humans, DO have a desire to survive, don't we? When
faced with danger, we do everything in our power to avoid it. The
difference is that we are conscious of our attempts to avoid
danger, even if we do not understand them. "Why did you run away
from that snake," someone might ask. "To escape possible
injury," we rationalize. The more truthful answer, however, is
"It just happened -- it was the first thing that came to mind."
But what of the sensation of fear that comes over us in such
situations? "Fear" is just a name we have given to the sensation
of anxiety coupled with avoidance-behavior. For the most part,
we are observers of our own behavior (and our own thoughts, for
that matter: introspection). Sure, we have control over our
instinctual tendencies, but not as much as we would like to
think. Witness the acrophobic "unable" to climb a fire-escape.
Why would courage be such an envied quality if it weren't so hard
to defeat one's instinctual (intuitive) reactions?
Unfortunately, gut-feeling tendencies can backfire, as in the
case of drug addiction. In this case, the emotional mind sets
the goal ("get drugs") and the rational mind does what it can to
satiate the emotional mind despite knowledge of the damage
being done. Phobias aren't so desirable either.
What I'm getting at is that "desires" and "feelings" are how we
experience the state of our mind, just as colors are the way we
experience light frequency and pain is the way we experience
tissue damage. To say a computer has feelings is incorrect
unless the computer is AWARE of its behavior. You could possibly
say that the shuttle's computer aborted the mission to prevent
its own death (i.e. it felt fear) if one of the sensory inputs
to the computer was the fact that it was entering the abort-
state.
The same argument could be made for consciousness. That to be
conscious is to be aware of one's own thought process and state
of mind (a sixth sense?). Computers (and Barry Kort's gigantic
telephone switching system) are not conscious. While they receive
input from the various "senses" (telephone exchanges, disk-
drives, users), they receive no information about themselves. One
could say that a time-sharing system that monitors its own status
is "conscious", but this is a very limited consciousness, since
the system cannot construct an abstract world-model that would
include itself, a requirement for personal identity.
If a computer could compile sensory information about itself and
the world around it into an abstract model of the "world," and
then use this model to interact with the world, then it would be
conscious. Further, if it could associate pieces of its model to
words, and words to a grammar, then it could communicate with
people and let us know "what it's like to be a computer."
-------
I would appreciate any reactions.
Paul King
UUCP: {seismo,harpo,ihnp4,linus,allegra,ut-sally}!harvard!h-sc4!pking
ARPA: pking@h-sc4.harvard.EDU
BITNET: pking@harvsc4.BITNET
------------------------------
Date: 9 Apr 86 23:18:21 GMT
From: decvax!linus!philabs!cmcl2!seismo!ll-xn!cit-vax!trent@ucbvax.berkeley
.edu (Ray Trent)
Subject: Re: Computer Dialogue
In article <1039@h-sc1.UUCP> pking@h-sc1.UUCP (paul king) writes:
>"consciousness," one point is being overlooked. That is, can you
>really say that a behavioral reaction (survival instinct) is a
>feeling if the animal or computer has no consciousness?
Please define this concept of "consciousness" before using it.
Please do so in a fashion that does not resort to saying that
human beings are mystically different from other animals or
machines. Please also avoid self-important definitions. (e.g.
consciousness is what humans have)
>is to increase its chances of survival, but the cat (and the
>shuttle's computer) has no "desire to survive."
The above request also applies to the term "desire".
>difference is that we are conscious of our attempts to avoid
...
>"It just happened -- it was the first thing that came to mind."
Huh? This pair of sentences seems to say that your definition of
"consciousness" is that consciousness is "the first thing that
[comes] to mind." I don't think that split second decisions are a
good measure of what most people call consciousness.
> [two paragraphs that seem to reinforce the idea that
> consciousness has much to do with "gut-level reactions" and
> "instincts"]
>What I'm getting at is that "desires" and "feelings" are how we
My definition of these concepts would say that they "are" the
actions that a life process takes in response to certain stimuli.
>tissue damage. To say a computer has feelings is incorrect
>unless the computer is AWARE of its behavior. You could possibly
No, to say that a computer has self-awareness is to say that it
is AWARE of its feelings. Unless, of course, this is yet another
self-defined concept.
>say that the shuttle's computer aborted the mission to prevent
>its own death (i.e. it felt fear) if one of the sensory inputs
>to the computer was the fact that it was entering the abort-
>state.
[reductio ad absurdum(sp?)] You could possibly say that a human
entered abort mode (felt fear) if one of its sensory inputs was
the fact that it was entering abort mode (feeling fear).
>telephone switching system) are not conscious. While they receive
>input from the various "senses" (telephone exchanges, disk-
>drives, users), they receive no information about themselves. One
Telephone systems receive no inputs about themselves? What about
routing information derived from information the system has about
its own damaged components?
>the system cannot construct an abstract world-model that would
>include itself, a requirement for personal identity.
Here is a simple program to construct an abstract world-model
that includes the machine:
main()
{
printf("I think, therefore I am.\n");
}
Try to convince me that humans do something fundamentally
different here. (seriously)
>If a computer could compile sensory information about itself and
>the world around it into an abstract model of the "world," and
>then use this model to interact with the world, then it would be
>conscious. Further, if it could associate pieces of its model to
>words, and words to a grammar, then it could communicate with
>people and let us know "what it's like to be a computer."
I give as example the relational database program. It collects
sensory information about the world into an abstract model of the
"world" and then uses this model to interact with the world. Is
it therefore conscious? I don't think so. (how self-referential
of me) In fact, I will go further...such a program associates
pieces of its model to words and words into a grammar, and with
the appropriate database, could indeed let us know "what it's
like to be a computer," but I don't think that most people would
call it conscious.
>I would appreciate any reactions.
Ask, and you shall receive.
--
../ray\..
(trent@csvax.caltech.edu)
"The above is someone else's opinion only at great coincidence"
------------------------------
Date: 13 Apr 86 17:25:09 GMT
From: dali.berkeley.edu!regier@ucbvax.berkeley.edu (Terrance P. Regier)
Subject: Re: Computer Dialogue
trent@csvax.caltech.edu writes:
> Here is a simple program to construct an abstract world-model
> that includes the machine:
>
> main()
> {
> printf("I think, therefore I am.\n");
> }
>
> Try to convince me that humans do something fundamentally
> different here. (seriously)
↑↑↑↑↑↑↑↑↑
Descartes' famous assertion was the result of a period of admirably
honest introspection: After allowing himself to doubt the veracity
of his beliefs, senses, etc., he found that some things (well, at
least one thing) CANNOT be doubted. I think, therefore I am. Your
admittedly concise and elegant program fails to capture the integrity
and awareness of self implicit in the statement. It is closer in
spirit to an involuntary burp.
-- Terry
------------------------------
End of AIList Digest
********************
∂14-Apr-86 0330 LAWS@SRI-AI.ARPA AIList Digest V4 #88
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Apr 86 03:29:54 PST
Date: Sun 13 Apr 1986 20:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #88
To: AIList@SRI-AI
AIList Digest Monday, 14 Apr 1986 Volume 4 : Issue 88
Today's Topics:
Seminars - DADO/TREAT: Parallel Execution of Expert Systems (UTexas) &
Inverse Method of Establishing Deducibility (SRI) &
Perspectives, Prototyping, and Procedural Reasoning (CMU) &
Improving Planning Efficiency (Rutgers) &
Anaphora: Events and Actions (UPenn),
Conference - AI Impacts at FAA, Date Change &
Discourse Analysis &
AI and Automatic Control
----------------------------------------------------------------------
Date: Wed, 9 Apr 86 10:06:20 CST
From: Rose M. Herring <roseh@ratliff.CS.UTEXAS.EDU>
Subject: Seminar - DADO/TREAT: Parallel Execution of Expert Systems (UTexas)
University of Texas
Computer Sciences Department
COLLOQUIUM
SPEAKER: Daniel Miranker
Columbia University
TITLE: DADO & TREAT: A System for the Parallel Execution of
Expert Systems
DATE: Thursday, April 10, 1986
PLACE: TAY 3.144
TIME: 11:00-12:00 noon
The development of expert computer programs has moved out
of the research lab and into a quickly developing commercial
field. The development of computer architectures that are better
suited for executing these programs has recently come into the
forefront of computer architecture research. Indeed, a new term,
fifth generation computers, has been coined to describe these
architectures.
This talk will describe the architecture and software
systems of a recently completed parallel computer, the DADO
machine, designed to accelerate expert systems written in
production system form. The talk will also describe a new
production system matching algorithm that, although motivated by
the algorithmic requirements of parallel computing, has been
shown to be better than the RETE match (the currently accepted
best production system algorithm), even in a sequential environment.
COFFEE AT 10:30 in TAY 3.128
------------------------------
Date: Thu 10 Apr 86 11:23:55-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Inverse Method of Establishing Deducibility (SRI)
WHAT IS THE INVERSE METHOD?
Vladimir Lifschitz (VAL@SAIL)
Stanford University
11:00 AM, MONDAY, April 14
SRI International, Building E, Room EJ228 (new conference room)
In 1964, the same year when J. A. Robinson introduced the resolution rule,
a Russian logician and philosopher, Sergey Maslov, published his four-page
paper, "An Inverse Method of Establishing Deducibility in Classical
Predicate Calculus". Maslov's method is based on a major discovery in
proof theory which has remained largely unnoticed by logicians. The method
does not require that the goal formula be written in clausal or even
prenex form, and there may exist a possibility of applying it to
non-classical systems (e.g., modal). Computer programs based on the
inverse method are reported to be comparable, in terms of efficiency, to
those using resolution. The inverse method has also been applied to solving
new special cases of the decision problem for predicate logic, and it can
serve as a uniform approach to solving almost all known solvable cases.
In this talk I explain the idea of the inverse method on a simple example.
Note to visitors: SRI now has stricter security rules and won't allow
people to just walk up to the AIC. If you have any problems being admitted,
please call either me (Amy Lansky -- x4376) or Margaret Olender (x5923).
------------------------------
Date: 10 April 1986 1536-EST
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Perspectives, Prototyping, and Procedural Reasoning (CMU)
Speaker: David A. Evans, Dept. of Philosophy, CMU
Date: Wednesday, April 23
Time: 11:30 - 1:00
Place: 5409 Wean Hall
Title: Perspectives, prototyping, and procedural reasoning
In the special task of developing a consultation and tutoring
facility for the CADUCEUS expert system, it is necessary to
identify several perspectives over detailed diagnostic information,
which can be organized into meta-level knowledge structures that
reflect explicit procedures, contexts, and pragmatics associated
with the task of explaining and justifying diagnostic inferences.
Such structures offer concrete interpretations of notions such as
prototypes (taken from cognitive science) and suggest constraints
that can be exploited in controlling discourses and procedural
reasoning.
------------------------------
Date: 10 Apr 86 13:18:38 EST
From: PRASAD@RED.RUTGERS.EDU
Subject: Seminar - Improving Planning Efficiency (Rutgers)
Machine Learning Colloquium
REAPPR:
Improving planning efficiency via local expertise and reformulation
Bresina, J.L., Marsella, S.C., and Schmidt, C.F.
Rutgers University
11 AM, April 22, Tuesday
#423, Hill Center
Abstract
We discuss planning within the problem reduction paradigm. Within this
paradigm, a key issue is handling subproblem interactions. We point out the
advantages of problem reduction over goal reduction (which characterizes most
previous planning systems). We introduce an implemented planning system -
REAPPR - which extends the problem reduction paradigm to capture and
efficiently utilize expert planning knowledge. The features of REAPPR
include: (i) potential parallelism, (ii) local control information, (iii)
flexible problem reduction, and (iv) reformulations.
------------------------------
Date: Fri, 11 Apr 86 12:19 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Anaphora: Events and Actions (UPenn)
Forwarded From: Ethel Schuster <Ethel@UPenn> on Thu 10 Apr 1986 at 20:22
TOWARDS A COMPUTATIONAL MODEL OF ANAPHORA IN DISCOURSE:
REFERENCE TO EVENTS AND ACTIONS
Ethel Schuster
Abstract
When people talk or write, they refer to things, objects, events, actions,
facts and/or states that have been mentioned before. Such context-dependent
reference is called anaphora. In general, linguists and researchers working in
artificial intelligence have looked at the problem of anaphora interpretation
as that of finding the correct antecedent for an anaphor--that is, the
previous words or phrases to which the anaphor is linked. Lately, people
working in the area of anaphora have suggested that in order for anaphors to be
interpreted correctly, they must be interpreted by reference to entities evoked
by the previous discourse rather than in terms of their antecedents.
This work describes the process of dealing with anaphoric language when the
reference is to events and actions. It involves four issues: (i) what aspects
of the discourse give evidence of the events and actions the speaker is talking
about, (ii) how actions and events are represented in the listener's discourse
model, (iii) how to identify the set of events and actions as possible choices,
and (iv) how to obtain the speaker's intended referent to an action or event
from a set of possible choices. Anaphoric forms that are used to refer to
actions and events include sentential-it, sentential-that pronominalizations as
well as do it, do that, and do this forms. Their interpretations can be many,
and because of that they cannot be understood on linguistic grounds alone but
require models of the discourse. So, I will concentrate on developing the four
previously mentioned issues along with other mechanisms that will provide us
with better tools for the successful interpretation of anaphoric referents to
actions and events in discourse.
April 16, 1986
11 am
Moore 129 (Faculty Lounge)
Advisor: Bonnie Webber
Committee: Tim Finin, Chair
Aravind Joshi
Ellen Prince (Linguistics Dpt.)
Tony Kroch (Linguistics Dpt.)
Candy Sidner (BBN)
------------------------------
Date: 11 Apr 86 23:55:32 GMT
From: hplabs!sdcrdcf!burdvax!blenko@ucbvax.berkeley.edu (Tom Blenko)
Subject: Conference - AI Impacts at FAA, Date Change
ARTIFICIAL INTELLIGENCE IMPACTS WORKSHOP
presented by
AMERICAN COMPUTER TECHNOLOGIES, INC.
>>> June 11-13, 1986 <<< -- NOTE change of date
FAA Technical Center
Atlantic City Airport, New Jersey
This is a mildly-technical workshop for marketing, planning and
manufacturing professionals who are interested in artificial
intelligence. Workshop emphasizes marketing data, competitive
analyses, planning information, financials, opportunities and
constraints, etc., from a world-wide survey of businesses and
governments involved in AI.
Information can be obtained from:
American Computer Technologies, Inc.
237 Lancaster Avenue, Suite 255
Devon, PA 19333
Attn: Ms. Carol Ward
(215) 687-4148,
and/or Ms. Pat Watts of the Federal Aviation Administration Technical
Center:
(609) 484-6646.
(This information is being posted for a friend: please respond to the
address given above).
------------------------------
Date: 10 Apr 86 16:55:09 GMT
From: decvax!mcnc!akgua!ganehd!anv@ucbvax.berkeley.edu (Andre Vellino)
Subject: Conference - Discourse Analysis
First Ad Hoc Conference
on Discourse Analysis
April 24-25, 1986
138 Tate Hall
University of Georgia
Athens, Georgia
Thursday, April 24
9 a.m. Rainer Bauerle (University of Tubingen)
"Nominalizations, Event Anaphora,
and Order of Events in a DRT-framework"
10.30 a.m. Coffee Break
11 a.m. Nirit Kadmon (University of Massachusetts, Amherst)
"Maximal Collections, Specificity,
and Discourse Anaphora"
12.30 p.m. Lunch Break
2 p.m. Hans Kamp (University of Texas, Austin)
"Plural Anaphora and Plural Determiners"
Friday, April 25
9 a.m. Craige Roberts (University of Massachusetts, Amherst)
"Modal Subordination and Pronominal Anaphora
in Discourse"
10.30 a.m. Coffee Break
11 a.m. Michael Covington (University of Georgia, Athens)
"Modelling Implicature with Defeasible Logic"
12.30 p.m. Lunch Break
2 p.m. Barbara Partee (University of Massachusetts, Amherst)
"Nominal and Temporal Anaphora"
Advanced Computational Methods Center
University of Georgia
Athens, Georgia 30602
For further information contact Marvin Belzer (404) 542-5110
------------------------------
Date: 11 Apr 1986 19:22:05 EST
From: ALSPACH@USC-ISI.ARPA
Subject: Conference - AI and Automatic Control
Dr. Andrews
National Aeronautics & Space Administration
Ames Research Center
San Jose, CA
Dear Dr. Andrews:
Per your note to AI-LIST on April 1, regarding the synergism between
the fields of artificial intelligence and automatic control, I would
like to bring your attention to the American Control Conference to be
held in Seattle from June 18-20 this year. The American Control
Conference is sponsored by the American Automatic Control Council,
which is a council consisting of member organizations which include
the AIAA, AICHE, ASME, IEEE, ISA, and SCS. The ACC is the U.S.
representative to IFAC (the International Federation of Automatic
Control). In addition, other engineering societies, such as
Automation Engineers, participate. This is the largest conference on
control held in the United States, and is multidisciplinary. It has
been held for a number of years.
Looking at this year's program, it is clear that your idea of exploring
the common ground between control and artificial intelligence is
already seriously in progress. Out of 68 sessions, there are seven
sessions whose major themes are artificial intelligence and control,
or robotics and control.
First, on Wednesday A.M., there is a session on Artificial
Intelligence in Process Control. The Chairman is R. Moore of LISP
Machines, Inc., and a number of national and international experts are
talking about this very interesting topic. In parallel with this
session on Wednesday A.M., there is a session entitled Robotics that
explores many aspects of robotics control. The Chairman of this
session is Jason Speyer from the University of Texas at Austin, and it
will be co-chaired by M. Railey from the University of Akron.
On Wednesday P.M., there is a session entitled Artificial Intelligence
Applications in Sensor Fusion and Command and Control. This session
is chaired by Dr. S. Brodsky, Sperry Corporation, and addresses some
very interesting work in the area of artificial intelligence
applications to sensor fusion and command and control. Typical papers
from this session include J. Flynn of DARPA on "Carrier Based Threat
Assessment", J. Delaney of Stanford on "Multisensor Report
Integration Using Blackboards", and M. Grover and M. Stachnick of
Advanced Decision Systems discussing "Overlooked and Unconventional AI
Techniques for Command and Control". A number of other very
interesting papers are in this session.
On Thursday A.M., there is a session on 4D Aircraft Guidance and
Expert Traffic Management, which is chaired by A. Chakravarty of
Boeing Commercial Airplane Company and co-chaired by R. Schwab, also
of Boeing. An exemplar paper in this session is "Time-Based Air
Traffic Management Using Expert Systems" by L. Tobias and J. Scoggins
of NASA Ames Research Center. Running in parallel on Thursday A.M.,
is a specialist session on Direct Drive Robot Arms. This is chaired
by J. Slotine of Massachusetts Institute of Technology and co-chaired
by H. Asada of Kyoto University, Japan.
Another general session on Artificial Intelligence is to be held on
Thursday P.M., chaired by J. Birdwell from the University of Tennessee
and co-chaired by G. Allgood, Oak Ridge National Laboratory. A number
of excellent papers include: "Domains of Artificial Intelligence
Relevant to Systems", by J. Birdwell and J. Crockett, University of
Tennessee, and J. Gabriel of Argonne National Laboratory; "Knowledge
Representation by Scripts in an Expert Interface" by J. Larsson and P.
Persson of Lund Institute of Technology; and "An Expert System to
Control a Fusion Energy Experiment" by R. Johnson, et al., from
Lawrence Livermore Laboratories.
On Friday A.M., there is a session on Aerospace and Robotics
Applications of Nonlinear Control, chaired by F. Fadali, University of
Nevada-Reno and co-chaired by T. Dwyer, University of Illinois. In
parallel on Friday A.M., there is a session on Robot Tracking Control
chaired by George Saridis of Rensselaer Polytechnic Institute.
On Friday P.M., there is a session on Multitarget Tracking and Data
Association chaired by C. Chong, Advanced Information & Decision
Systems, and co-chaired by M. Shensa, Naval Ocean Systems Center.
This discusses an area that is ripe for artificial intelligence
applications and, for example, includes a paper entitled "An Expert
System for Surveillance Automation" by R. Mucci of BBN Laboratories.
Also in parallel with this Friday P.M. session is one on Robot Control
chaired by J. Garbini, University of Washington, and co-chaired by C.
Nachtigal of Kistler Morse Company.
In addition to these sessions, there are a number of papers on
artificial intelligence, expert systems and robotics applications
scattered throughout a number of other sessions in the program.
Also, of interest to people who are interested in the AI List
information, there is a one-day tutorial workshop on Monday, June 16,
preceding the conference, entitled "Intelligent Control System Design
and Analysis". The purpose of this workshop is to introduce control
systems engineers and engineering managers to the possibility of using
intelligent systems during the design and analysis of control systems.
Participants will learn the techniques for building expert systems and
will see examples of their use in control system design. This
tutorial workshop will be taught by Guy Beale of Vanderbilt University
and Charles Buenzli of Gilbarco-Exxon. On Tuesday, June 17, another
tutorial workshop will be taught by Roger Brockett of Harvard
University and Robert M. Goor of General Motors Research Laboratory.
The topic of this workshop will be "Modeling and Control of Robotic
Manipulators".
The General Chairman for the conference is Dr. Ed Stear, who is
Associate Dean of Electrical Engineering at the University of
Washington and Head of the Washington Technology Center. It may also
be of interest to this community that one of the plenary speakers is
Dr. Robert Rankine, Brigadier General, U.S. Air Force and Head of Air
Force SDI activities. He will discuss some of the control challenges
associated with the SDI Program and with the proposed new hypersonic
trans-atmospheric vehicles.
All in all, for someone interested in the merging of the fields of
artificial intelligence, expert systems and automatic control, this is
an excellent conference to attend. There is also a great social
program planned for the evenings to allow informal discussions among
the attendees. Also, Expo '86 is only a few miles up the road in
Vancouver, British Columbia, for those interested in attending this
activity before or after the conference.
To obtain information regarding registration, please contact the
office of Dagfinn Gangsaas, BMAC, P.O. Box 3707, MS 33-12, Seattle, WA
98124, (206) 241-4348. Preliminary programs may be obtained by
sending a request to me via Arpanet, c/o ALSPACH (at) USC-ISI or
mailing a request to D. L. Alspach, ORINCON Corporation, 3366 N.
Torrey Pines Ct., Suite 320, La Jolla, CA 92037.
Sincerely,
Daniel L. Alspach
Program Chairman
1986 American Control Conference
BBN Laboratories.
------------------------------
End of AIList Digest
********************
∂14-Apr-86 2331 LAWS@SRI-AI.ARPA AIList Digest V4 #89
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 Apr 86 23:31:35 PST
Date: Mon 14 Apr 1986 20:51-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #89
To: AIList@SRI-AI
AIList Digest Tuesday, 15 Apr 1986 Volume 4 : Issue 89
Today's Topics:
Bibliography - Recent Articles #4
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #4
%A Richard L. Wexelblat
%T Editorial
%J SIGPLAN Notices
%V 21
%N 3
%D MAR 1986
%P 1
%K AI03 AA17 Queens Problem H03 ADA
%X SIGPLAN is having a contest to determine the
best solution for the N<=8 Queens problem using
concurrency in ADA in a substantive manner.
Deadline for submissions is June, 1986.
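[For readers unfamiliar with the contest's problem: a plain sequential backtracking solver for N queens is sketched below, in Python for illustration only; the contest itself asks for a substantively concurrent Ada solution. LEFF]

```python
# Sequential backtracking solver for the N-queens problem:
# place one queen per row so that no two queens share a
# column or a diagonal.

def n_queens(n):
    """Return all solutions; each is a tuple of column indices by row."""
    solutions = []

    def place(queens):
        row = len(queens)
        if row == n:
            solutions.append(tuple(queens))
            return
        for col in range(n):
            # A new queen conflicts if it shares a column or a diagonal
            # with any queen already placed in an earlier row.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(queens)):
                place(queens + [col])

    place([])
    return solutions

print(len(n_queens(8)))  # the classic 8-queens board has 92 solutions
```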
%A Scott Mace
%T Ansa Upgrades Paradox, Drops Copy Protection
%J InfoWorld
%V 8
%N 11
%D MAR 17, 1986
%P 3
%K Paradox AA09 H01
%X Ansa Software has announced upgrades to its system and has dropped
copy protection.
%T Resources
%J InfoWorld
%V 8
%N 11
%D MAR 17, 1986
%P 17
%K AI01 H01 Cahners Publishing Company Users Group Expert Systems Strategies
%X The New York IBM PC Users Group has announced a special interest
group for Expert Systems. The first meeting is March 25. For more info
contact NYPC, Suite 614, 30 Wall Street, 10005 (212) 533 NYPC. Cahners
Publishing Company produces a newsletter called Expert Systems Strategies.
Charter rate is $207, regular rate is $247. Address is Cahners Publishing
Co. P. O. Box 59, New Town Branch Boston, MA 02258 (617) 964 3030
%A Cornelius Willis
%T The Problems with AI
%J InfoWorld
%V 8
%N 11
%D MAR 17, 1986
%P 20
%K Insight 1 Level Five Research AI01 T03 H02 AT14 AT12
%X The director of marketing for Level Five Research writes that the
reason that artificial intelligence has not been embraced by corporate
MIS directors is that the charges for Lisp machines and knowledge engineers
are way too high. He also claims that his product, Insight 1, is
"the most widely used knowledge engineering tool in the world."
%A Peggy Watt
%T Ansa Move Woos Corporate Users
%J ComputerWorld
%D MAR 17, 1986
%V 20
%N 11
%P 1+
%K H01 AA09 Paradox Ashton-Tate AT04
%X Ansa Software's Paradox has been approved by only about two dozen
large-account evaluators. In November and December, they sold 1,449
copies against 13,156 copies of Ashton Tate's DBASE products. In January,
it was 813 copies of Paradox against 6,154 copies of DBASE. Ansa
is investigating the micro-to-mainframe data base file exchange.
%A Douglas Barney
%T AI-Based Financial Systems Allocates Assets Based on Goals
%J ComputerWorld
%D MAR 17, 1986
%V 20
%N 11
%P 6
%K H02 AI01 AA06 First Financial Planner Services Plan Power Xerox personal
planning
%X Discussion of First Financial Planner Services' Plan Power, which
is an expert system that performs personal
financial planning. It has 6000 rules.
%A Elisabeth Horwitt
%T AI Integration Gets a Shot in the Arm as Vendors Link Products
%J ComputerWorld
%D MAR 17, 1986
%V 20
%N 11
%P 47+
%K Harvey Newquist Symbolics H03 SNA Texas Instruments Explorers Gould
LMI SUN Apollo AT16
%X Symbolics has announced a link to IBM mainframes via SNA.
Flavors Technology brought out a high-speed bus-to-bus link between
Lisp Machine, Inc. machines or Texas Instruments Explorers
and Gould, Inc. superminicomputers. It costs $36,000.
Texas Instruments plans to integrate their Explorer with SUN and
Apollo.
%A Eric L. Schwartz
%A Bjorn Merker
%T Computer-Aided Neuroanatomy: Differential Geometry of Cortical
Surfaces and an Optimal Flattening Algorithm
%J IEEE Computer Graphics and Applications
%D MAR 1986
%V 6
%N 3
%P 36-44
%K AA10 AI08
%X describes the mapping of the visual field on the visual cortex
of the monkey
%T Apollo, TI to TIE Network, Workstation
%J Electronic News
%V 32
%N 1593
%D MAR 17, 1986
%P 18+
%K SUN LMI Flavor Common LISP Compact Lisp Machine H02 AT02 AT16
%X [Much of the material in this article was reported
recently elsewhere in AILIST; only new stuff is in this abstract]
Apollo will be selling a $3,500 Common Lisp. The link between
Apollo and TI will be made using Apollo's Open Systems Toolkit
and TI's Flavor package. Apollo also hopes to use TI's single
chip LISP machine. LMI's marketing director said that it
has always been the position of his company that Lisp machines
and LISP cannot survive alone. He predicted that alignment of
TI, SUN and Apollo will not affect LMI. Furthermore, he predicts
that the single chip LISP machine development effort at TI
will take at least 12 months.
%A Richard H. McSwain
%A Robert W. Gould
%T Taking the Fatigue Out of Fracture Surface Analysis
%J Metal Progress
%D MAR 1986
%V 129
%N 4
%K AA05 Metallurgy Failure Analysis AI06 striation Fourier
Transform
%X Describes use of the Fourier transform method to analyze
the fracture surface of a material failing from fatigue.
[When a material is repeatedly subjected to changes in
stress, it may fail from fatigue. This is even true when
the maximum load is well below the limit which would cause
failure if it was applied in a steady state condition. When
this happens, a characteristic striation appears on the
fracture surface. This can be viewed with Scanning
Electron Microscopy or even with the naked eye or magnifying
glass. LEFF]
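[As a toy illustration of the Fourier-transform idea (not the method of the article), a periodic striation profile shows up as a sharp spectral peak from which the striation spacing can be read off; the surface trace here is synthetic. LEFF]

```python
# Synthetic fatigue-fracture surface profile: regular striations
# plus random roughness. The FFT exposes the striation spacing
# as the dominant peak in the magnitude spectrum.
import numpy as np

n = 1024                                   # samples along the profile
x = np.arange(n)
profile = np.sin(2 * np.pi * x / 32)       # striations every 32 samples
profile += 0.3 * np.random.default_rng(0).normal(size=n)  # roughness

spectrum = np.abs(np.fft.rfft(profile))
freqs = np.fft.rfftfreq(n)                 # cycles per sample
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
print(1 / peak)                            # recovered spacing, about 32
```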
%A Barry Meier
%T Robot Subs Begin to Surface as Versatile Exploration
Tools
%J Wall Street Journal
%D MAR 7, 1986
%V 78
%N 46
%P 19
%K Deep Ocean Engineering Company
International Submarine Engineering Ltd.
AA03 AI07 AA18 AA19 GA04
%X Describes some uses and research therein for robot
submarines:
Canadian Oceanographers will use one to hunt for oil
below the icecap. It will be dropped through
a hole in the ice 1000 miles from the North Pole.
It will then navigate in a grid like pattern in
a ten square mile area mapping the bottoms. Sonar
will help the sub avoid icebergs. The sub is being
built by International Submarine Engineering Ltd.
.sp 1
R&D exists in applying
submarines to prospecting, repairing oil installations,
performing rescue and recovery missions, and
spying.
Work is under way on developing sensors based on sound,
AI systems to help robots react to currents, and fiber
optics for exchange of data with mother ships.
Deep Ocean Engineering Company has developed a system
of sensors to detect the weight and composition of
objects under water. They are using tones to inform
the operator of what the robot has in its arms.
They are also perfecting AI to the point that submarines
can be free-swimming. NASA is funding some of this work
since they hope to apply the results to space travel.
%T Advertisement
%J BYTE
%D APR 1986
%P 284
%V 11
%N 4
%K Solution Systems TransLisp T01 H01 AT01
%X Lisp for the IBM PC for only $75.00. It is a 230+ function
subset of Common Lisp and has an MSDOS interface and graphics.
(Solution Systems also sells the BRIEF editor.)
%T Star Wars Divides A Campus
%J BusinessWeek
%D March 10, 1986
%P 82-86
%N 2936
%K Carl Hewitt MIT
%X Discusses reactions of MIT people to SDI funding.
Carl Hewitt has decided to apply for SDI funding.
The AI Lab at MIT received 55 percent of its 8 million
dollar budget from the defense department.
%A Michael Lesk
%T Writing to be Searched: A Workshop on Document Generation Principles
%J SIGIR Forum
%V 19
%N 1-4
%D WINTER 1986
%P 9-14
%K Cucumber Information Knowledge Systems AI02 A08 AA14
%X "It is now possible to design full-text retrieval systems that
accept conventional documents and questions in natural English, and then
retrieve documents or passages from documents that probably answer the
questions." Cucumber Information
Systems and Knowledge Systems, Inc. sell such systems.
"A high degree of grammatical variation does not seem important to produce
natural effects in short paragraphs (as evidenced by Karen Kukich's
stock market report generator)." "Syntax is much less important
for retrieval than semantics; you need to know what the words mean more
than you need to know their relationship." "Editing manuals to make
them suitable for machine translation, requiring simple translation, has
turned out to make them better in the original language as well."
%A Susanne M. Humphrey
%T Automated Classification and Retrieval Program: Indexing Aid Project
%J SIGIR Forum
%V 19
%N 1-4
%D WINTER 1986
%P 16-17
%K AA14 AI02 AA01
%X Lister Hill Center of the National Library of Medicine is
developing this system to generate indices consistent with
those normally used by MEDLINE. They are using a frame based system.
%A Frank Tansey
%T Guru's Power Cuts Out the Competition
%J Infoworld
%V 8
%N 12
%D MAR 24, 1986
%P 14
%K AI01 AA09 AA06 university administration residency T03 H01 AT17
%X This is a review of GURU, an expert system tool that interfaces
with MDBS's Knowledge Man. It supports up to 3000 rules, forward
and backward chaining, inexact reasoning. The system also includes
a text processor, graphics, spreadsheet, graphics and telecommunications.
The system received a rating of 5.8 out of 10 with very good
for performance and ease of use, satisfactory for documentation and value,
It takes 1700 pages of documentation to describe the system.
.sp 1
As of much interest as the review itself are the two systems that were
two expert systems developed using GURU described in this review.
The first was a system to assist in determining residency status
of students
for the California Universities for the purpose of determining
tuition. The final expert system was judging cases with the
experience of a person with six months to one year in
evalulating such matters. The system was already able to
impress people in the field with only fifty rules.
They also wrote an expert system to do personal financial planning.
This took 300 rules and embodied the entire expertise of the
person writing the software.
.sp 1
[I read elsewhere that MDBS has sold $6,000,000 of these packages
since they came out. They cost $3,000 each. MDBS is known
for Knowledge Man, probably the most powerful relational
data base for micros. Keep in mind that InfoWorld tends
to downgrade systems if they weren't written so as to be used
by people lacking knowledge or aptitude for computers and
thus most readers of AILIST would have a higher opinion of the
package than 5.8 out of 10. LEFF ]
%T Chairman Resigns From Automatix
%J Electronic News
%D Mar 24, 1986
%V 32
%N 1594
%K AT16 AT11 AI07
%X Philippe Villers resigned as chairman of robotics maker
Automatix. Automatix has yet to turn a profit and lost $5,594,000
in 1985 and $14,193,000 in 1984.
%A Michael Bucken
%T Symbolics Starts VAR Program for 36-BIT Processing Systems
%J Electronic News
%D Mar 24, 1986
%V 32
%N 1594
%K AA04 H02 AT16
%X Symbolics has signed its first VAR contract with ICAD which
is developing an engineering design software package. 40 percent
of Symbolics customers are using the system for applications other than
artificial intelligence. Symbolics has sold about 2000 processors.
%A Craig Stedman
%T Management Seeking GCA Robotics Group
%J Electronic News
%D Mar 24, 1986
%V 32
%N 1594
%K Industrial Systems Group AT16 AI07
%X The management of the robotics division of GCA Corporation
is trying to arrange a leveraged buyout. The division has lost
10 to 15 million dollars on sales of about $35 million.
%A Tony Baer
%T Finding the Titanic
%J Mechanical Engineering
%V 108
%N 3
%D MAR 1986
%K Jason Angus control chattering ARGO submersible salvage
underwater AI06 AI07
%X One of the problems in underwater vision is backscattering
from the light source of suspended particles. A good way of
fighting this problem is to mount the light source away from
the camera. The new lighting system on the Angus has yielded readable
images of areas about as large as a city block. A system called
Jason is being developed that will mount in the ARGO submersible.
This system will be self-propelled and have its own manipulator arm.
However, it is NOT going to need artificial intelligence. [Emphasis mine,
Leff]
%A J. Houseley
%T Getting a Grip on Sensors
%J IEEE Spectrum
%V 23
%N 4
%D APR 1986
%P 8
%K tactile sensors AI07 AT12 AT13
%X This is a comment on an article by Paolo Dario and Danilo De Rossi
of August 1985 on the subject of using tactile sensors in gripping
objects in robotics. A human being picking up an egg would
apply just enough force to prevent the weight of
the egg from deflecting it. In gripping a hammer, friction between
the hammer and the hand is used. There is a comment on the role of learning in
applying the right amount of force to adjust for the change when
the hammer impacts the nail. (There is also a response by
the authors.)
%T Advertisement
%J Byte
%D MAR 1986
%V 11
%N 3
%K AT03 H01 T03 AA18 AT01 Thunderstone Corporation Clarity Software
Comprehension Logic-Line
%X Ad for Thunderstone Corporation's Logic-Line 1 ($250), Logic-Line
2 ($400.00), and Comprehension ($75.00) for the IBM PC. It is not clear
from the advertisement what LOGIC-LINE1 and LOGIC-LINE2 actually do.
Comprehension is supposed to enable a person to diagnose their
weakness in a given discipline. Some quotes from this advertisement:
"Our success has effectively stompted the mortal spit out of the brain
damaged geeks whose rancid cells have been polluting the gene pool of
legitimate AI professionals." "LOGIC-LINE1, a major breakthrough in
sub-cognitive mathematics, distills the DNA/RNA like analog to any
writer's thought processes. It allows you to search any textbase for
actual concepts and inference patterns unique to that writer. In
other words, even though Einstein may never have had a single thought
about ecology, you can apply his thinking patterns to solving
ecological problems!" "And at its highest level? You just might use
Thunderstone tools to save the free world, again. That's right:
Again! LOGIC-LINE 2 began with the mathematics of possibilistic
analysis and recursion (developed by men like Alan Turing and Norbert
Weiner) that directly led the Wellington College team to breaking the
German naval codes in World War II."
%A Melissa Calvo
%T Japanese Firms Granted License by Compuserve
%J InfoWorld
%P 14
%V 8
%N 8
%D FEB 24, 1986
%K Network Information Forum Nissho Iwai Corporation machine translation AI02
%X Fujitsu announced an English to Japanese translator which works
at 60,000 words per hour.
Compuserve and Network Information Forum plan a database exchange which
might use this translation software.
------------------------------
End of AIList Digest
********************
∂15-Apr-86 0257 LAWS@SRI-AI.ARPA AIList Digest V4 #90
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Apr 86 02:56:52 PST
Date: Mon 14 Apr 1986 20:55-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #90
To: AIList@SRI-AI
AIList Digest Tuesday, 15 Apr 1986 Volume 4 : Issue 90
Today's Topics:
Bibliography - Recent Articles #5
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #5
%A M. Celenk
%A S. H. Smith
%T A New Systematic Method for Color Image Analysis
%R Tech. Rep EE 8509
%D DEC 1985
%I Stevens Institute of Technology Electrical Engineering and Computer
Science Departments
%K AI06
%A S. A. Friedberg
%T Symmetry Evaluators
%R TR134 (revised)
%D JAN 1986
%I The University of Rochester Computer Science Department
%K AI06 Hough transform
%X $1.25 24 pages
%A D. H. Ballard
%A P. J. Hayes
%T Parallel Logical Inference and Energy Minimization
%R TR142
%D DEC 1985
%I The University of Rochester Computer Science Department
%K connectionist H03 AI08
%X $1.50 34 pages
%A J. A. Feldman
%T Parallelism in High Level Vision
%R TR146
%D JAN 1985
%I The University of Rochester Computer Science Department
%K H03 AI08 AI06
%X 33 pages $1.50
%A J. Tenenberg
%T Reasoning Using Exclusion: an Extension of Clausal Form
%R TR147
%D JAN 1986
%I The University of Rochester Computer Science Department
%K common sense reasoning AI10 AI11
%X 25 pages $1.25
%A D. H. Ballard
%T Form Perception as Transformation
%R TR148
%D JAN 1986
%I The University of Rochester Computer Science Department
%K AI06 AI07
%X 34 pages $1.50
%A A. Basu
%A C. M. Brown
%T Algorithms and Hardware for Efficient Image Smoothing
%R TR149
%D DEC 1984
%I The University of Rochester Computer Science Department
%K AI06 H03 median mean filters
%X 20 pages $1.00
%A B. Sarachan
%T Experiments in Rotational Egomotion Calculation
%R TR152
%D FEB 1985
%I The University of Rochester Computer Science Department
%K AI06
%X 26 pages $1.25 [Seems to be a paper for a robot to determine if it got
rotated. LEFF]
%A G. W. Cottrell
%T A Connectionist Approach to Word Sense Disambiguation
%R TR154
%D MAY 1985 (PHD Thesis)
%I The University of Rochester Computer Science Department
%K AI02 AI08
%X 242 pages $7.25
%A J. A. Feldman
%T Energy and the Behavior of Connection Models
%R TR155
%D NOV 1985
%I The University of Rochester Computer Science Department
%K H03 AI12
%X 41 pages, $1.75
%A D. Sher
%T Template Matching on Parallel Architectures
%R TR156
%D JUL 1985
%I The University of Rochester Computer Science Department
%K H03 AI06 Fourier Transform WARP Butterfly
%X 28 pages, $1.25
%A A. Bandopadhay
%A J. Aloimonos
%T Perception of Rigid Motion from Spatio-Temporal Derivatives of Optical Flow
%R TR157
%D MAR 1985
%I The University of Rochester Computer Science Department
%K AI06
%X 18 pages $1.00 [Seems to be another paper on getting a robot to tell
whether somebody rotated it or not LEFF]
%A J. Aloimonos
%A A. Bandopadhay
%T Perception of Structures from Motion: Lower Bound Results
%R TR158
%D MAR 1985
%I The University of Rochester Computer Science Department
%K AI06
%X 16 pages $1.00
%A J. Aloimonos
%T One Eye Suffices: a Computational Model of Monocular Robot Depth Perception
%R TR160
%D DEC 1984
%I The University of Rochester Computer Science Department
%K AI06 optical flow depth perception orthographic perspective projection
%X 16 pages $1.00
%A J. Aloimonos
%A P. B. Chou
%T Detection of Surface Orientation and Motion from Texture: 1. The
Case of Planes
%R TR161
%I The University of Rochester Computer Science Department
%K AI06 Gibson
%X 21 pages $1.25
%A Henry A. Kautz
%T Toward a Theory of Plan Recognition
%R TR162
%I The University of Rochester Computer Science Department
%K AI09
%D JUL 1985
%X 15 pages $1.00
%A L. Shastri
%T Evidential Reasoning in Semantic Networks: A Formal Theory and its
Parallel Implementation
%R TR166
%I The University of Rochester Computer Science Department
%K H03 O04
%D SEP 1985
%X 256 pages $7.50
%A D. H. Ballard
%A P. Gardner
%A M. Srinivas
%T Graph Problems and Connection Architectures
%I The University of Rochester Computer Science Department
%R TR167
%K H03 AI12
%D DEC 1985
%X 24 pages $1.25
%A A. Bandopadhay
%T Constraints on the Computation of Rigid Motion Parameters from
Retinal Displacements
%I The University of Rochester Computer Science Department
%R TR168
%K AI07 AI06
%D OCT 1985
%X 77 pages, $2.75 [Seems to be another paper on getting a robot to tell
whether somebody rotated it or not LEFF]
%A A. Bandopadhay
%A J. Aloimonos
%T Perception of Structure and Motion of Rigid Objects
%D DEC 1985
%I The University of Rochester Computer Science Department
%R TR169
%K AI07 AI06
%X 55 pages $2.00 [Seems to be another paper on getting a robot to tell
whether somebody rotated it or not LEFF]
%A D. J. Litman
%T Plan Recognition and Discourse Analysis: An Integrated Approach for
Understanding Dialogues
%D 1985
%R TR170
%I The University of Rochester Computer Science Department
%K AI02 AI09
%X 197 pages $6.00
%A J. A. Feldman
%A D. H. Ballard
%A C. M. Brown
%A G. S. Drell
%T Rochester Connectionist Papers 1979-85
%D DEC 1985
%R TR172
%I The University of Rochester Computer Science Department
%K AI12 AT21
%X no charge
%A N. Murray
%A E. Rosenthal
%T On Deleting Links in Semantic Graphs
%R TR 85-4
%I State University of New York at Albany, Computer Science Department
%K predicate calculus path resolution AI11
%A S. Chaiken
%A N. Murray
%A E. Rosenthal
%T An Application of $P sub 4$-Free Graphs in Theorem Proving
%R TR85-8
%I State University of New York at Albany, Computer Science Department
%K AI11
%X We describe the application of graphs that have no induced $P sub 4$
(4 vertex path) subgraphs to automatic theorem proving. The semantics of
a propositional formula are expressed in terms of the maximal cliques in
a $P sub 4$ free graph rather than by truth assignments. Arc sets of s-t
paths in a series parallel network provide an equivalent formulation.
We provide combinatorial foundations for Murray and Rosenthal's work
on path resolution (e.g. TR84-1, TR84-12 and TR85-4). For
any graph G, a c-block (resp. d-block) is an induced subgraph H in G such
that for all maximal cliques (resp. maximal stable sets) C in G, C $int$
H is $PHI$ or is a maximal clique (resp. maximal stable set) in H. A
full block is both a c-block and a d-block. Blocks are generalizations of
the substitution subgraphs which occur in Lovasz's work on perfect graphs.
Theorem: if a full block H is $P sub 4$-free then H must arise by
substitution. Other properties of these blocks in arbitrary graphs and
in $P sub 4$-free graphs are given. These constructs are instrumental
in the development of several closely related inference rules collectively
referred to as path resolution. Finally, we show how the semantics of
$P sub 4$-free graphs are generalized to blocking systems via Minty's
painting lemma, which suggests a possible generalization of path
resolution to other combinatorial structures.
%A M. Balaban
%T Western Tonal Music - A New Domain for AI Research
%R TR 85-10
%I State University of New York at Albany, Computer Science Department
%K AI02 AA25
%A M. Balaban
%T Knowledge Representation and Inferencing in a Musical Database
%R TR 85-11
%I State University of New York at Albany, Computer Science Department
%K frames AA25 AA14 T02
%A M. Balaban
%T The Generalized Concept Formalism - A Frame and Logic Based
Representation Model
%R TR 85-20
%I State University of New York at Albany, Computer Science Department
%K AA25 T02
%A Mira Balaban
%T Foundations for Artificial Intelligence Research of Western Tonal Music
%R TR 85-22
%I State University of New York at Albany, Computer Science Department
%K AA25
%A M. Balaban
%T CSM: An AI Approach to the Study of Western Tonal Music
%R TR 85-24
%I State University of New York at Albany, Computer Science Department
%K AA25
%A H. B. Hunt
%A R. E. Stearns
%T Distributive Lattices and the Complexity of Logics and Probability
%R TR 85-28
%I State University of New York at Albany, Computer Science Department
%K AI11 O04
%X Relationships between the number of repetitions of variables in formulas
and the complexity of decision problems for those formulas.
Applications to logic and probability:
1) Any reasonable propositional calculus with a reasonable implication
operator has a coNP-hard logical validity problem. This is true even for very
simple formulas involving or, and, and a single occurrence of the implication
operator.
2) The set of theorems of the propositional calculus of classical
implicative logic is coNP-complete.
3) Computing the probabilities of a joint event and of a conditional event
becomes "hard" almost immediately when the events E1 and E2 are not
statistically independent.
%A H. B. Hunt
%A R. E. Stearns
%T Monotone Boolean Formulas, Distributive Lattices, and the Complexities
of Logics, Algebraic Structures, and Computation Structures (Preliminary Report)
%R TR85-29
%I State University of New York at Albany, Computer Science Department
%K AI11 O04
%A Andrew Laine
%A Seymour V. Pollack
%T The Enhanced Wudma Image Processing
%R WUCS-85-1
%I Department of Computer Science, Washington University
%C St. Louis, Missouri
%K AI06
%A S. E. Elnahas
%A R. G. Jost
%A J. R. Cox
%A R. L. Hill
%T Progressive Transmission of Digital Diagnostic Images
%R WUCS-85-8
%I Department of Computer Science, Washington University
%C St. Louis, Missouri
%K AI06 AA01
%X Progressive transmission of digital pictures permits the receiver
to construct an approximate picture first, then gradually improve the quality
of reconstruction.
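[The coarse-then-refine idea the abstract describes can be shown with a toy
one-dimensional sketch. This is our illustration only, not the encoding the
report uses; all function names are ours. The sender first transmits block
means (the approximate picture), then residuals that restore the exact
pixels. LEFF]

```python
# Toy sketch of progressive transmission: coarse pass first, refinement after.
# Illustrative only -- not the report's encoding.

def coarse_pass(row, block=2):
    """First pass: one integer mean per block of pixels."""
    return [sum(row[i:i + block]) // block for i in range(0, len(row), block)]

def refine_pass(row, means, block=2):
    """Second pass: residuals the receiver adds to the upsampled means."""
    return [p - means[i // block] for i, p in enumerate(row)]

def reconstruct(means, residuals, block=2):
    """Receiver side: upsample the means and apply the residuals."""
    return [means[i // block] + r for i, r in enumerate(residuals)]

row = [10, 12, 200, 202]
means = coarse_pass(row)                               # approximate picture
exact = reconstruct(means, refine_pass(row, means))    # exact picture
print(means, exact)  # [11, 201] [10, 12, 200, 202]
```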
%A James R. Slagle
%A John M. Long
%A Michael R. Wick
%A John P. Matts
%A Arthur S. Leon
%T Expert Systems in Medical Studies - A New Twist
%R TR 86-3
%I University of Minnesota, Department of Computer Science
%D 1986
%K AA01 AI01
%A Robert M. Herndon, Jr.
%A Valdis A. Berzins
%T An Interpretive Technique for Evaluating Functional Attribute
Grammars
%R TR 86-5
%I University of Minnesota, Department of Computer Science
%D 1986
%A Robert M. Herndon, Jr.
%A Valdis A. Berzins
%T A Method for the Construction of Dynamic, Lazy Evaluators for
Functional Attribute Grammars
%R 86-6
%I University of Minnesota, Department of Computer Science
%D 1986
%A J. Schwartz
%A M. Sharir
%T Efficient Motion Planning Algorithms in Environments of
Bounded Local Complexity
%R 164
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%K AI07
%D JUN 1985
%A J. Schwartz
%A M. Sharir
%T Identification of Partially Obscured Objects in Two Dimensions
by Matching of Noisy 'Characteristic Curves'
%R 165
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D JUN 1985
%K AI06
%A G. Landau
%A U. Vishkin
%T Efficient String Matching with k Mismatches
%R 167
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D JUN 1985
%X Given a text of length n, a pattern of length m, and an integer k,
we present an algorithm for finding all occurrences of the pattern in
the text with at most k mismatches, running in time O(k(m log m + n)).
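[For readers unfamiliar with the problem: the k-mismatches question can be
stated with a brute-force sketch. This O(nm) version is ours, for
illustration only; the report's algorithm is far faster. LEFF]

```python
# Brute-force statement of the k-mismatches string matching problem.
# The Landau-Vishkin report achieves O(k(m log m + n)); this naive
# version is O(nm) and serves only to define the problem.

def k_mismatch_occurrences(text: str, pattern: str, k: int) -> list[int]:
    """Start positions where pattern matches text with at most k mismatches."""
    n, m = len(text), len(pattern)
    positions = []
    for i in range(n - m + 1):
        mismatches = 0
        for a, b in zip(text[i:i + m], pattern):
            if a != b:
                mismatches += 1
                if mismatches > k:
                    break  # this window already exceeds the budget
        if mismatches <= k:
            positions.append(i)
    return positions

print(k_mismatch_occurrences("abcabcabc", "abd", 1))  # [0, 3, 6]
```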
%A G. Landau
%A U. Vishkin
%T An Efficient String Matching Algorithm with k Differences for
Nucleotide and Amino Acid Sequences
%R 168
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D JUN 1985
%X An algorithm for the optimal alignment of one sequence, the
pattern of length m, with another, longer sequence, the text, of
length n. The algorithm allows mismatches, deletions,
and insertions. If k is the maximum number of differences,
the running time is O(k sup 2 n).
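[The k-differences problem above also has a plain dynamic-programming
formulation, sketched here for illustration. This O(mn) version is ours,
not the report's O(k sup 2 n) algorithm; all names are ours. LEFF]

```python
# Plain DP formulation of k-differences (approximate) string matching.
# Illustrative only; the report's algorithm runs in O(k sup 2 n).

def k_difference_end_positions(text: str, pattern: str, k: int) -> list[int]:
    """End positions j (1-based) in text where the pattern matches some
    substring ending at j with at most k differences (mismatches,
    insertions, deletions)."""
    m, n = len(pattern), len(text)
    # prev[j]: fewest differences aligning pattern[:i] with a substring of
    # text ending at j; row i=0 is all zeros since a match may start anywhere.
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            curr[j] = min(prev[j - 1] + cost,  # match or mismatch
                          prev[j] + 1,         # pattern char unmatched (deletion)
                          curr[j - 1] + 1)     # text char unmatched (insertion)
        prev = curr
    return [j for j in range(1, n + 1) if prev[j] <= k]

print(k_difference_end_positions("abcdef", "cde", 1))  # [4, 5, 6]
```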
%A R. Hummel
%A A. Rojer
%T Connected Component Labeling in Image Processing with
MIMD Architectures
%R 173
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D SEP 1985
%K AI06 H03
%A S. Zucker
%A R. Hummel
%T Receptive Fields and the Representation of Visual Information
%R 176
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D SEP 1985
%K AI06 AI08 Gaussian retina
%X Hypothesis that the receptive fields of the retina provide
a suitable method for transmitting the image over the optic nerve
which is a limited bandwidth channel.
%A M. Landy
%A R. Hummel
%T A Brief Survey of Knowledge Aggregation Methods
%R 177
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D SEP 1985
%K AI04
%A G. Landau
%A U. Vishkin
%T Efficient String Matching with k Differences
%R 186
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D OCT 1985
%X If the differences considered are a single-character mismatch and a
superfluous character in the text or pattern, there exists an
algorithm that runs in time O(m + k sup 2 n) when the
alphabet size is fixed and O(m log m + k sup 2 n) otherwise,
where m is the length of the pattern, k is the number of differences,
and n is the length of the text.
%A D. Leven
%A M. Sharir
%T On the Number of Critical Free Contacts of a Convex Polygonal
Object Moving in 2-D Polygonal Space
%R 187
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D OCT 1985
%K AI07
%A J. Burdea
%A H. Wolfson
%T Automated Assembly of a Jigsaw Puzzle Using the IBM 7565 Robot
%R 188
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D NOV 1985
%K AI07
%A E. Davis
%T Constraint Propagation on Real-Valued Quantities
%R 189
%I New York University, Courant Institute of Mathematical Sciences,
Department of Computer Science
%D NOV 1985
%K AI03
%A N. S. Sridharan
%T Representing Knowledge in Introduction using TAXMAN Examples
%R LRP-TR-12
%D 11/81
%I Rutgers University, Department of Computer Science
%R LRP-TR-13
%D 1/82
%T "A Computational Theory of Legal Argument"
%A L. T. McCarty
%A N. S. Sridharan
%I Rutgers University Department of Computer Science
%K AA24 tax
%X The TAXMAN project is an experiment in the application of artificial
intelligence to the study of legal reasoning and legal argumentation,
using corporate tax law as an experimental problem domain. Legal
concepts possess what is often termed "open-texture", that is, their
definitions are subject to a continual process of construction and
modification during the analysis of a contested case. We have
developed a "prototype-plus-deformation" representation for the
structure of such concepts, a representation which facilitates the
formulation of several systematic methods of conceptual modification.
We propose now to construct a cognitive model of the process of legal
argument, using this representation. The research is aimed at
developing explanations for the persuasiveness of certain strategies
of legal argument, and at developing further the criteria of
conceptual coherence, both task-specific and task-independent, which
seem to constrain the space of plausible arguments. We emphasize not
only the contributions of this research to Artificial Intelligence,
but also the insights that may result for some of the fundamental
issues in jurisprudence.
%R LRP-TR-14
%D 9/82
%T "A Flexible Structure for Knowledge"
%A N.S. Sridharan
%K AA24 tax AI04
%I Rutgers University Department of Computer Science
%X Concepts often dealt with in legal reasoning and argumentation are
amorphous. For TAXMAN II, we have proposed in the past a Prototype
and Deformation model for these amorphous concepts. In this model, a
concept is represented as a structured space of exemplars, that is as
a set of exemplars, structured by transformations and relationships
among them. In this paper, the idea of representing a concept as a
structured space of exemplars is extended, suggesting that all
knowledge represented in a computer be organized as structured spaces
and subspaces. Concepts are represented as spaces; concepts are also
members of spaces. This duality is exploited to gain flexibility in
the representation, that is, changes to the structure can be effected
through computation.
%R LRP-TR-15
%D 6/83
%T "Concept Learning by Building and Applying Transformations Between
Object Descriptions"
%A Donna Nagel
%K AI04 analogy matching
%I Rutgers University Department of Computer Science
%X The concept learning approach presented here emphasizes the building of a
transformation between an instance of a concept and another instance
which is distinguished as a prototype of the concept. A recursive
partial matcher is used to pinpoint components of structural object
descriptions of the training instances for matching. Three procedures
are described for inducing matches: building simple analogies,
applying primitive transformations, and finding projections of the
instances into domains of knowledge relevant to the concept being
learned. This research is experimental in nature and directed at
discovering flexible ways to define and represent concepts which are
amorphous and open-textured.
%R LRP-TR-16
%D 3/84
%T "EVOLVING SYSTEMS OF KNOWLEDGE"
%A N.S. Sridharan
%I Rutgers University Department of Computer Science
%K AI01
%X The enterprise of developing knowledge-based systems is currently
witnessing great growth in popularity. The central, unifying feature of such
programs is that they interpret knowledge that is explicitly encoded
as @i[rules]. This paper is a statement of personal perspective by a
researcher interested in fundamental issues in the symbolic
representation and organization of knowledge. The discussion covers
the nature of rules (Sec. 3), and methods of rule-handling (Sec. 4).
The paper concludes with a discussion of how most concepts we use are
open-textured and how they continually evolve with use (Sections
5,6,7). While rule-based programming comes with certain clear
pay-offs, further fundamental advances in research are needed to extend
the scope of tasks that can be adequately represented in this fashion.
%R LRP-TR-17
%D 6/84
%T "Analogy with Purpose in Legal Reasoning from Precedents"
%A S. Kedar-Cabelli
%D 10/84
%I Rutgers University Department of Computer Science
%K AA24 taxman tax AA04
%X One open problem in current artificial intelligence (AI) models of
learning and reasoning by analogy is: which aspects of the analogous
situations are relevant to the analogy, and which are irrelevant? It
is currently recognized that analogy involves mapping some underlying
causal network of relations between situations [Winston 82], [Gentner
83], [Burstein 83a], [Carbonell 83]. However, most current models of
analogy provide the system with exactly the relevant relations,
tailor-made to each analogy to be performed. As AI systems become more
complex, we will have to provide them with the capability of
automatically focusing on the relevant aspects of situations when
reasoning analogically. These will have to be sifted from the large
amount of information used to represent complex, real-world
situations.
.sp 1
In order to study these general issues, we are examining a particular
case study of learning and reasoning by analogy: forming legal
concepts by legal reasoning from precedents. This is studied within
the TAXMAN II project, which is investigating legal reasoning using AI
techniques [McCarty & Sridharan 82], [Nagel 83].
.sp 1
In this dissertation proposal, we will discuss the problem and a
proposed solution. We examine legal reasoning from precedents within
the context of current AI models of analogy. We then add a focusing
capability. Current work on goal-directed learning [Mitchell 83a],
[Mitchell & Keller 83], and explanation-based learning [Dejong 83]
applies here: the explanation of how the precedent satisfies the
intent of the law (i.e. its goals, or purposes) helps to automatically
focus the reasoning on what is relevant.
------------------------------
End of AIList Digest
********************
∂15-Apr-86 0907 LAWS@SRI-AI.ARPA AIList Digest V4 #91
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Apr 86 09:07:17 PST
Date: Mon 14 Apr 1986 20:59-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #91
To: AIList@SRI-AI
AIList Digest Tuesday, 15 Apr 1986 Volume 4 : Issue 91
Today's Topics:
Bibliography - Recent Articles #6
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #6
%R CTA-TR-2
%D 8/80
%T Computability on Binary Trees - An Extended Abstract
%A A. Yasuhara
%A F. Hawrusik
%A K.N. Venkataraman
%D 1/82
%I Rutgers University
%X We propose an effective method of computation on finite binary trees
that is analogous to the effective computation on the natural numbers
determined by the partial recursive functions. Not surprisingly, the
method is LISP-like. A finitely axiomatizable theory is given that is
shown to be just strong enough to represent the class of functions
computable by this method. Several natural subclasses of this class
of functions are delineated, and they are shown to be different from
one another.
%R CTA-TR-3
%D 3/81
%T Sub Classes of Programs for Computing on Binary Trees
%A K.N. Venkataraman
%D 1/82
%I Rutgers University
%K T01
%X Several sub-classes of the deterministic regular programs that compute
on binary trees are defined, and relations of inclusion and inequality
among these classes, in terms of the functions they compute, are
established. Certain properties of these classes of
programs are studied. In particular the sets recognized by these
programs are characterized in terms of the domain and range of these
programs. Most of the results that appear in this paper can easily be
extended to programs computing on other recursively defined data
structures.
%R CTA-TR-4
%D 10/81
%T Decidability of the Purely Existential Fragment of the Theory of
Term Algebras
%A K.N. Venkataraman
%X This thesis is concerned with the question of the decidability and
the complexity of the decision problem for certain fragments of the
theory of free term algebras.
.sp 1
The existential fragment of the theory of term algebras is shown to be
decidable by presenting a non-deterministic algorithm which given a
quantifier free formula P, constructs a solution for P if it has one
and indicates failure if there are no solutions. A detailed proof of
the correctness of the algorithm is given. It is shown that the
decision problem is in NP by proving that if a quantifier-free formula
P has a solution then there is one that can be represented as a dag in
space at most cubic in the length of P. The decision problem is
shown to be complete for NP by reducing 3-SAT to that problem. It is
also shown that the @ @ @-[o] hierarchy over a term algebra
corresponds to the polynomial time hierarchy.
.sp 1
The proof of the fact that the introduction of the selector functions
into the first order language does not increase the complexity of the
existential fragment of the theory is indicated. Thus it is
established that the existential fragment of the theory of list
structures in the language of NIL, CONS, CAR, CDR, = , @u[<] is
NP-complete.
.sp 1
It is shown that the decidability of the equivalence of PB[;@u{<}]
straight-line programs follows easily from the decidability of the
existential fragment of the theory of list structures.
.sp 1
It is also shown that for any quantifier free formula P (in
the language of a term algebra) there is an algorithm which given a
recursive set S of cardinal numbers @u{<} @ @ @-[o], can decide
whether or not the number of solutions of P is in S.
%R ML-TR-1
%D 7/85
%T Purpose-Directed Analogy
%A Smadar Kedar-Cabelli
%I Rutgers University
%X Recent artificial intelligence models of analogical reasoning are
based on mapping some underlying causal network of relations between
analogous situations. However, causal relations relevant for the
purpose of one analogy may be irrelevant for another. We describe
here a technique which uses an explicit representation of the purpose
of the analogy to automatically create the relevant causal network.
We illustrate the technique with two case studies in which concepts of
everyday artifacts are learned by analogy.
%R ML-TR-2
%D 8/85
%T Explanation-Based Generalization: A Unifying View
%A T.M. Mitchell
%A R.M. Keller
%A S.T. Kedar-Cabelli
%X The problem of formulating general concepts from specific training
examples has long been a major focus of machine learning research.
While most previous research has focused on empirical methods for
generalizing from a large number of training examples using no
domain-specific knowledge, in the past few years new methods have been
developed for applying domain-specific knowledge to formulate valid
generalizations from single training examples. The characteristic
common to these methods is that their ability to generalize from a
single example follows from their ability to explain why the training
example is a member of the concept being learned. This paper
proposes a general, domain-independent mechanism, called EBG, that
unifies previous approaches to explanation-based generalization. The
EBG method is illustrated in the context of several example problems,
and used to contrast several existing systems for explanation-based
generalization. The perspective on explanation-based generalization
afforded by this general method is also used to identify open research
problems in this area.
%R RC-5882
%D February 1976
%A John Thomas
%T A Method of Studying Natural Language Dialogue
%I IBM Watson Research Center, User Interface Institute
%K AI02
%R RC-10823
%D November 1984
%A John Thomas
%T Artificial Intelligence and Human Factors
%I IBM Watson Research Center, User Interface Institute
%K AI08
%A Carbonell, Jaime
%T Derivational analogy: a theory of reconstructive problem solving
and expertise acquisition
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-115
%D 1985
%K Case-based reasoning
%A Kahn, Gary
%A McDermott, John
%T MUD: a drilling fluids consultant
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-116
%D 1985
%K Diagnostic systems, Knowledge acquisition AI01 AA03 AA21
%A Doyle, Jon
%T Reasoned assumptions and Pareto optimality
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-121
%D 1985
%K Economic theory Group decision making Inference rules
Non-monotonic reasoning AA11
%A David M. McKeown, Jr
%A Pane, John F
%T Alignment and connection of fragmented linear features in aerial
imagery
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-122
%D 1985
%K Cultural features Feature extraction Image segmentation
Region interpolation Spline approximation AI06
%A Dill, David
%A Clarke, Edmund
%T Automatic verification of asynchronous circuits using temporal
logic
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-125
%D 1985
%K Circuit design
Timing constraints AA04 AI11
%A Lehr, Theodore
%T The implementation of a production system machine
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-126
%D 1985
%K Computer architecture OPS5 Performance improvement Production systems
RISCF Rete algorithm AI01
%A Minton, Steven
%T A game-playing program that learns by analyzing examples
%I Carnegie-Mellon University. Department of Computer Science
%R CMU-CS-85-130
%D 1985
%K Concept acquisition
Constraint based generalization
Forcing configurations
Learning from examples
Machine learning
Tactical combinations
Winning combinations
AI04 AA17
%A Fox, Mark
%A Wright, J. Mark
%A Adam, David
%T Experiences with SRL: an analysis of a frame-based knowledge
representation
%I Carnegie-Mellon University. Robotics Institute
%R CMU-RI-TR-85-10
%D 1985
%K Knowledge representation languages
%A Smith, Stephen
%A Ow, Peng Si
%T The use of multiple problem decompositions in time constrained
planning tasks
%I Carnegie-Mellon University. Robotics Institute
%R CMU-RI-TR-85-11
%D 1985
%K Job shop scheduling
%K Multi-agent planning systems
%K Resource allocation AI10
%A Brost, Randy
%T Planning robot grasping motions in the presence of uncertainty
%I Carnegie-Mellon University. Robotics Institute
%R CMU-RI-TR-85-12
%D 1985
%K Manipulators AI07 O04 AI09
%A Darlington,
%A Field, A.
%A Pull, H.
%T The unification of functional and logic languages
%I Imperial College of Science and Technology. Department of
Computing
%R Research report DOC 85/3
%D 1985
%K Functional programming
Reduction
Resolution AI10
%A Gregory, Steve
%A Neely, Rob
%A Ringwood, Graem
%T Parlog for specification, verification and simulation
%I Imperial College of Science and Technology. Department of
Computing
%R Research report DOC 85/7
%D 1985
%K PARLOG AI10 H03
%A Saint-Dizier, Patrick
%T On syntax and semantics of adjective phrases in logic
programming
%I Institut National de Recherche en Informatique et en Automatique
(INRIA)
%R Rapport de recherche 381
%D 1985
%K AI10
%A Deransart, Pierre
%A Maluszynski, Jan
%T Relating logic programs and attribute grammars
%I Institut National de Recherche en Informatique et en Automatique
(INRIA)
%R Rapport de recherche 393
%D 1985
%K Attribute dependency scheme Data flow analysis Logic programming AI10
%A Gazdar, Gerald
%A Pullum, Geoffrey K
%T Computationally relevant properties of natural languages and their
grammars
%I Stanford University. Center for the Study of Language and
Information
%R CSLI-85-24
%D 1985
%P 45
%K AI02
%A Fagin, Ronald
%A Vardi, Moshe
%T An internal semantics for modal logic: preliminary report
%I Stanford University. Center for the Study of Language and
Information
%R CSLI-85-25
%D 1985
%P 24p
%K AI10
%A Barwise, Jon
%T The situation in logic - III: simulation, sets and the axiom of
foundation
%I Stanford University. Center for the Study of Language and
Information
%R CSLI-85-26
%D 1985
%A van Benthem, Johan
%T Semantic automata
%I Stanford University. Center for the Study of Language and
Information
%R CSLI-85-27
%D 1985
%A Sells, Peter
%T Restrictive and non-restrictive modification
%I Stanford University. Center for the Study of Language and
Information
%R CSLI-85-28
%D 1985
%A Abadi, Martin
%A Manna, Zohar
%T Nonclausal temporal deduction
%I Stanford University. Department of Computer Science
%R STAN-CS-85-1056
%D 1985
%P 17p
%K Nonclausal resolution Propositional temporal logic
AI10 AI11
%A Mason, Ian A
%A Talcott, Carolyn L
%T Memories of S-expressions: proving properties of Lisp-like
programs that destructively alter memory
%I Stanford University. Department of Computer Science
%R STAN-CS-85-1057
%D 1985
%K Computations over memory structures Correctness proofs
Robson copying algorithm AI11 AA08
%A Taubenfeld, G
%A Francez, N
%T Proof rules for communication abstractions
%I TECHNION - Israel Institute of Technology. Department of Computer
Science
%R Technical report 332
%D 1984
%K Concurrent programming Deadlock Invariants Program verification
%K Scripts AA08
%A Shmueli, O
%A Tsur, S
%A Zfira, H
%T Rule supporting in PROLOG
%I TECHNION - Israel Institute of Technology. Department of Computer
Science
%R Technical report 337
%D 1984
%K T02
%A Shapiro, Ehud
%T A subset of Concurrent Prolog and its interpreter
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-06
%D 1983
%K T02 H03
%X This is a revised version of technical report TR-003, ICOT (Institute
for New Generation Computing Technology).
%A Shapiro, Ehud
%A Takeuchi, Akikazu
%T Object oriented programming in Concurrent Prolog
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-08
%D 1983
%K H03 T02
%A Harel, David
%A Peleg, David
%T Process logic with regular formulas
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-11
%D 1983
%A Hellerstein, L
%A Shapiro, Ehud Y
%T Implementing parallel algorithms in Concurrent Prolog
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-12
%D 1983
%K T02 H03
%X Summary/draft, August 1983
%A Manna, Zohar
%A Pnueli, Amir
%T How to cook a temporal proof system for your pet language
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-13
%D 1983
%K AA08 AI11
%A Harel, David
%A Peleg, David
%T On static logics, dynamic logics and complexity classes
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-15
%D 1983
%K AI11
%A Feldman, Yishai A
%T A decidable propositional probabilistic dynamic logic
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS83-18
%D 1983
%K AI11
%A Barringer, Howard
%A Kuiper, Ruurd
%A Pnueli, Amir
%T Now you may compose temporal logic specifications
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-09
%D 1984
%K AI11
%A Shapiro, Ehud Y
%T The Bagel: a systolic Concurrent Prolog machine (lecture notes)
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-10
%D 1984
%K H03 T02
%A Peleg, David
%T Concurrent dynamic logic
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-14
%D 1984
%K T02 H03
%A Mierowsky, Colin
%T Design and implementation of flat Concurrent Prolog
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-21
%D 1984
%K H03 T02
%X Thesis (M.S.)
%A Bloch, Charlene
%T Source-to-source transformations of logic programs
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-22
%D 1984
%K AI10
%X Thesis (M.S.)
%A Viner, Omri
%T Distributed constraint propagation
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-24
%D 1984
%K H03
%X Thesis
%A Peleg, David
%T Concurrent program schemes and their logics
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-25
%D 1984
%K H03 T02
%A Lichtenstein, Orna
%A Pnueli, Amir
%T Checking that finite state concurrent programs satisfy their
linear specification
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-26
%D 1984
%K AA08
%A Nygate, Yossi
%T Python: a bridge expert on squeezes
%I Weizmann Institute of Science. Department of Applied Mathematics
%R CS84-27
%D 1984
%A Nixon, I. M.
%T I.F.: an Idiomatic Floorplanner
%I University of Edinburgh. Department of Computer Science
%R CSR-170-84
%D 1984
%K VLSI AA04
%A Sannella, Donald
%A Tarlecki, Andrzej
%T On observational equivalence and algebraic specification
%I University of Edinburgh. Department of Computer Science
%R CSR-172-84
%D 1984
%A Prasad, K. V. S
%T Specification and proof of a simple fault tolerant system in CCS
%I University of Edinburgh. Department of Computer Science
%R CSR-178-84
%D 1984
%K AA08 AI11
%A Blake, Andrew
%T Inferring surface shape by specular stereo
%I University of Edinburgh. Department of Computer Science
%R CSR-179-84
%D 1984
%K AI06
%A Dolan, Charles
%T Memory based processing for cross contextual reasoning: reminding
and analogy using thematic structures
%I University of California, Los Angeles. Computer Science
Department
%R CSD-850010
%D 1985
%X Thesis (M.S.)
%A Hooper, Richard
%T An application of knowledge-based systems to electronic computer-aided
engineering, design, and manufacturing data base transport
%I University of California, Los Angeles. Computer Science
Department
%R CSD-850011
%D 1985
%K AA05 AA04
%X Thesis (Ph.D.)
%A Rendell, Larry
%T Induction, of and by probability
%I University of Illinois, Urbana-Champaign. Department of Computer
Science
%R UIUCDCS-R-85-1209
%D 1985
%K Conceptual clustering Inductive inference AI04
Noise management Probabilistic learning systems
%A Rendell, Larry
%T Genetic plans and the probabilistic learning system: synthesis and
results
%I University of Illinois, Urbana-Champaign. Department of Computer
Science
%R UIUCDCS-R-85-1217
%D 1985
%K Conceptual clustering AI12 AI04
%A Anderson, James W.
%T Portable Standard LISP on the Cray
%I Los Alamos National Laboratory
%R LA-UR-84-4049
%D 1984
%K T01 H04 PSL
%A Arnon, Dennis S.
%T Supercomputers and symbolic computation
%I Purdue University. Department of Computer Sciences
%R CSD-TR-481
%D 1984
%K AI14 H04
%A J. Schwartz
%T A Survey of Program Proof Technology
%I New York University, Courant Institute, Department of Computer
Sciences
%D SEP 1978
%R 001
%K AA08 AI11
%A S. Stolfo
%A M. Harrison
%T Automatic Discovery of Heuristics for Non-Deterministic Programs
%D JAN 1979
%I New York University, Courant Institute, Department of Computer
Sciences
%R 007
%K AI04 AI03
%A M. Sharir
%T Algorithm Derivation by Transformations
%D OCT 1979
%I New York University, Courant Institute, Department of Computer
Sciences
%R 021
%K AA08
%A A. Walker
%T Syllog: A Knowledge Based Data Management System
%D JUN 1981
%I New York University, Courant Institute, Department of Computer
Sciences
%R 034
%K AA09
%A J. Schwartz
%A M. Sharir
%T On the Piano-Movers Problem, I. Case of A Two Dimensional Rigid
Polygonal Body Moving Amidst Polygonal Barriers
%D OCT 1981
%I New York University, Courant Institute, Department of Computer
Sciences
%R 039 R1
%K AI07
%A J. T. Schwartz
%A M. Sharir
%T On the Piano Movers Problem, II General Techniques for Computing
Topological Properties of Real Algebraic Manifolds
%D FEB 1982
%R 041 R2
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI07
%A J. Schwartz
%A M. Sharir
%T On the Piano Movers Problem III Coordinating the Motion of Several
Independent Bodies: The Special Case of Circular Bodies Moving Amidst
Polygonal Barriers
%D SEP 1982
%R 052 R3
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI07
%A C. O'Dunlaing
%A C. Yap
%T The Voronoi Diagram Method of Motion-Planning: I. The Case of a Disc
%D MAR 1982
%R 053 R4
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI07
%A M. Sharir
%A E. Azriel-Sheffi
%T On the Piano Movers Problem IV Various Decomposable Two-Dimensional
Motion Planning Problems
%D FEB 1983
%I New York University, Courant Institute, Department of Computer
Sciences
%R 058 R6
%K AI07
%A J. Schwartz
%T Structured Light Sensors for 3-D Robot Vision
%D MAR 1983
%R 065 R8
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI06 AI07
%A C. Yap
%T Complexity of Motion Coordination
%R R12
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI07
%A J. Schwartz
%A M. Sharir
%T On the Piano Movers Problem: V. The Case of a Rod Moving in
Three Dimensional Space Amidst Polyhedral Obstacles
%R 083 R13
%I New York University, Courant Institute, Department of Computer
Sciences
%D JUL 1983
%K AI07
%A R. Cole
%A C. Yap
%T Shape from Probing
%R 104 R15
%I New York University, Courant Institute, Department of Computer
Sciences
%D OCT 1983
%K AI07 AI06
%A J. Schwartz
%A M. Sharir
%T Some Remarks on Robot Vision
%R 119 R25
%I New York University, Courant Institute, Department of Computer
Sciences
%D APR 1984
%K AI07 AI06
%A C. Bastuscheck
%A J. Schwartz
%T Preliminary Implementation of a Ratio Depth Sensor
%R 124 R28
%I New York University, Courant Institute, Department of Computer
Sciences
%D JUN 1984
------------------------------
End of AIList Digest
********************
∂15-Apr-86 2313 LAWS@SRI-AI.ARPA AIList Digest V4 #92
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 Apr 86 23:12:52 PST
Date: Tue 15 Apr 1986 20:33-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #92
To: AIList@SRI-AI
AIList Digest Wednesday, 16 Apr 1986 Volume 4 : Issue 92
Today's Topics:
Query - KEE Experiences,
Seminars - Cortical Activity for Conscious Sensory Experience (UCB) &
Inexact Reasoning Using Graphs (UTexas) &
Prolog: Application to Design Verification (SU) &
An Application of Machine Self-Reflection (SUNY-Buffalo),
Conferences - ACL Annual Meeting &
II Finnish AI Symposium (STeP 86)
----------------------------------------------------------------------
Date: Mon, 14 Apr 86 10:04:34 est
From: jcm@ORNL-MSR.ARPA (James A. Mullens)
Subject: KEE experiences
(Posted for a friend)
As part of a class in expert systems at the University of Tennessee
I am preparing a report on KEE. I thought it would be interesting to
include the reactions/experience of the users of KEE. Any comments
would be greatly appreciated.
I also have a more practical interest in any response I might get
because the Dept. of Nuclear Eng. recently purchased KEE.
You can respond privately to jcm@ornl-msr.arpa if you wish. The
report will be available to the network. Contributors will be
identified unless they request otherwise.
Thanks in advance,
Ray Brittain
------------------------------
Date: Mon, 14 Apr 86 14:58:30 PST
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Subject: Seminar - Cortical Activity for Conscious Sensory Experience (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, April 22, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Cortical activity required for a conscious sensory experience,
with cognitive implications''
Benjamin Libet
Physiology, UC San Francisco
Abstract
Experiments involving direct electrical stimulation and
recordings in the cerebral somatosensory system of awake human
patients have indicated that a substantial period of activity
(up to 500 msec+/-) is required to elicit a sensory experience.
More indirect evidence supports this requirement for brief
peripheral inputs as well. However, subjective timing of the
experience is "antedated" back to the time of the initial
fast-arriving signal. This hypothesis of "neuronal delay plus
subjective antedating" for a conscious sensory experience has
important implications for the processing of conscious and
unconscious sensory functions.
------------------------------
Date: Mon, 14 Apr 86 19:04:08 cst
From: kumar@SALLY.UTEXAS.EDU (Vipin Kumar)
Subject: Seminar - Inexact Reasoning Using Graphs (UTexas)
University of Texas
Computer Sciences Department
COLLOQUIUM
SPEAKER: Judea Pearl
University of California, Los Angeles
TITLE: Inexact Reasoning Using Graphs
DATE: Thursday, April 17, 1986
PLACE: WEL 3.502
TIME: 4:00-5:00 p.m.
In order to meet requirements of modularity, transparency
and flexibility, the designers of 1st-generation expert systems
have abandoned traditional probability theory and have ventured
to devise new formalisms for managing uncertainties. The talk
will describe a message-passing scheme in propositional networks
which, using traditional probability theory, fulfills these
objectives of expert systems technology.
The first part of the talk will stress the relationship
between TRANSPARENCY and reasoning with GRAPHS. We will examine
what kind of inferential dependencies are representable by
graphs, and will compare the properties of two such representa-
tions: Markov Networks and Bayes Networks.
The second part will describe a distributed scheme for
coherently propagating beliefs in Bayes Networks. It facilitates
flexible control strategies and sound explanations, it supports
both predictive and diagnostic inferences, and it is guaranteed
(in sparse graphs) to converge in time proportional to the
network's diameter.
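The predictive/diagnostic distinction in the abstract can be illustrated
with ordinary Bayes rule on a two-node network. The sketch below is not
from the talk; the network, variable names, and all probabilities are
invented for illustration.

```python
# Two-node Bayes network A -> B.  "Predictive" inference reasons from
# cause to effect; "diagnostic" inference reasons from observed effect
# back to cause.  All numbers here are made up.

p_a = 0.1                                # prior P(A=true)
p_b_given_a = {True: 0.9, False: 0.2}    # P(B=true | A)

# Predictive: P(B=true) = sum over a of P(B=true|a) * P(a)
p_b = p_b_given_a[True] * p_a + p_b_given_a[False] * (1 - p_a)

# Diagnostic: observe B=true, compute P(A=true | B=true) by Bayes rule
p_a_given_b = p_b_given_a[True] * p_a / p_b

print(p_b, p_a_given_b)
```

Pearl's propagation scheme generalizes exactly this computation to
message passing over larger networks.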
COFFEE AT 3:30 in TAY 3.128
------------------------------
Date: Mon 14 Apr 86 19:31:12-PST
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Prolog: Application to Design Verification (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Prolog: Application to Design Verification
Speaker: Harry G. Barrow
From: Schlumberger Palo Alto Research
Date: Wednesday, April 16, 1986
Time: 4:00 - 5:30
Place: Terman 556
PROLOG is a programming language based upon predicate logic. It was
developed in Europe, where it is widely used, and subsequently adopted
in Japan as a basis for much of the "Fifth Generation" research and
development.
At SPAR, we have been developing a program called VERIFY, written in
PROLOG, that attempts to prove correctness of digital hardware designs.
VERIFY first derives a description of the behavior of the whole design
from the behavior of its components and the way they are
interconnected. The derived behavior description is then shown to be
equivalent (or not) to the intended behavior given in a specification.
VERIFY has successfully verified large designs involving many thousands
of transistors in just ten minutes.
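As a rough illustration of the derive-then-compare approach described
(this is not SPAR's VERIFY, which is written in PROLOG and handles far
larger designs), one can compose component behaviors into a behavior for
the whole design and then check it against the specification by
exhaustive enumeration. The gates and specification below are invented.

```python
# Toy design verification: derive the behavior of a composite circuit
# from its components, then check equivalence against the intended
# specification over all input combinations.

def nand(a, b):
    return not (a and b)

def derived_xor(a, b):
    # behavior derived from a gate-level design: XOR from four NANDs
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

def spec_xor(a, b):
    # the intended behavior given in the specification
    return a != b

# verification: the two behaviors must agree on every input
assert all(derived_xor(a, b) == spec_xor(a, b)
           for a in (False, True) for b in (False, True))
```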
Visitors welcome!
------------------------------
Date: Mon, 14 Apr 86 16:32:25 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - An Application of Machine Self-Reflection (SUNY-Buffalo)
UNIVERSITY AT BUFFALO
STATE UNIVERSITY OF NEW YORK
DEPARTMENT OF
COMPUTER SCIENCE
COLLOQUIUM
JOHN CASE
Department of Computer Science
University at Buffalo
ANSWERING THE MATHEMATICAL OBJECTION TO MACHINE INTELLIGENCE:
AN APPLICATION OF MACHINE SELF-REFLECTION
We briefly consider the standard paradox in the notion of a
machine ``having'' a complete model of itself and show how
to circumvent it. Then we pictorially present a simple
theoretical application of machine self-reflection and use
this application as a vehicle to illustrate what Turing
called the mathematical objection to machine intelligence.
Lastly, we employ machine self-reflection to completely
answer this objection.
Thursday, April 17, 1986
4:00 P.M.
Bell 338, Amherst Campus
Coffee and doughnuts will be served at 3:30 P.M., 224 Bell Hall
For further information, call (716) 636-3181.
------------------------------
Date: Tue, 15 Apr 86 21:24:04 est
From: walker@mouton.bellcore.com (Don Walker at mouton.bellcore.com)
Subject: Conference - ACL Annual Meeting
ACL Annual Meeting, 10-13 June, Columbia University, New York City
The Program and Registration Information brochure is just being mailed
to all ACL members and to selected members of AAAI and LSA. If you are
not sure you will be receiving it and would like a net copy, send a
message to one of the addresses below; and I will (try to--even
electronic mail is not always reliable) send you one. Please include
the phrase "ACL net info" in the subject line. And include the
full net address in the body of the message; the complexity of
network connections coupled with the poverty of our mail system
sometimes makes "replies" unsendable.
The file has about 20,000 characters; it contains the full program
(33 papers; an invited presentation by Gary Hendrix; two forums, one
on Connectionism with Terry Sejnowski and Dave Waltz, the other
on Machine Translation with Martin Kay and Maghi King);
descriptions of the 6 tutorials (Intro to Computational Linguistics,
Natural Language Generation, Structuring the Lexicon, Recent
Developments in Syntactic Theory and Their Computational Import,
Current Approaches to Natural Language Semantics, and Machine
Translation--all held on 10 June); registration information and
directions; and an Application Form that can be printed out,
filled in (or filled in, printed out), and mailed in. Inexpensive
air-conditioned dormitory accommodations are available, and some
good rates for hotels have been secured. We are still encouraging
people who would like to exhibit or demonstrate programs to contact
Ralph Grishman (Computer Science, New York University, 251 Mercer
Street, New York, NY 10012; 212:460-7492; grishman@nyu.arpa).
Don Walker
walker@mouton.arpa
walker%mouton@csnet-relay
{ucbvax, ihnp4, ...}!bellcore!walker
address mail to:
Donald E. Walker (ACL)
Bell Communications Research
445 South Street, MRE 2A379
Morristown, NJ 07960, USA
201:829-4312
ACL Annual Meeting, 10-13 June 1986, Columbia University, New York City
------------------------------
Date: Tue, 15 Apr 86 17:16:37 -0200
From: mit%hut.UUCP%fingate.bitnet@WISCVM.WISC.EDU (Markku Tamminen)
Subject: Conference - II Finnish AI Symposium (STeP 86)
CALL FOR PAPERS
Deadline May 30
STeP 86 - II Finnish Artificial Intelligence Symposium
Helsinki University of Technology, Otaniemi, Espoo, Finland
August 20-22, 1986
The Second Finnish Artificial Intelligence Symposium will be or-
ganized by the Finnish Society of Information Processing Science
and the Helsinki University of Technology.
The Symposium is to provide an overview of the research and
development that has taken place since STeP 84. Papers are re-
quested on all aspects of artificial intelligence. The contribu-
tions will be published as the STeP 86 Proceedings and distribut-
ed to participants of the symposium.
Please send an abstract (no longer than one page) by May 30. The
program committee will inform you about its decisions by June 15.
Final camera-ready copy of papers corresponding to 30 minute
talks will be required by July 31. The formatting conventions
will be sent separately to authors.
Tutorials will be held at the start of the symposium, and propo-
sals for them are also solicited.
Signed
Markku Syrjaenen Jouko Seppaenen
Head of Program Committee Head of Organizing Committee
Please use one of the following addresses for submitting the
abstract, and for any queries:
BITNET, EARNET: mit%hut.uucp@fingate
ARPANET: mit%hut.uucp%fingate.bitnet%cernvax.bitnet@wiscvm.wisc.edu
Please use uucp only if the above nets are not available:
{seismo!mcvax, enea!tut}!penet!hut
Non-electronic mail:
STeP 86
c/o Jouko Seppaenen
Computing Centre
Helsinki University of Technology
SF-02150 Espoo 15
Finland
APPENDIX
Examples of topics suited for papers:
- Theoretical foundations - Expert systems
- Knowledge representation - Tools of knowledge engineering
- Problem solving methods - Languages (Lisp, Prolog etc.)
- Searching and planning - Programming techniques
- Logic programming - AI workstations, environments etc.
- Pattern recognition, vision - Industrial applications, robotics etc
- Natural language, speech - Applications to management
- Cognitive modeling - Applications to education
- Knowledge acquisition, learning - AI and arts
Markku Tamminen
Helsinki University of Technology
Laboratory of Information Processing Science
02150 ESPOO 15
FINLAND
Tel: 358-0-4512020 (460144)
ARPANET: mit%hut.uucp%fingate.bitnet%cernvax.bitnet@wiscvm.wisc.edu
BITNET: mit%hut.uucp@fingate
------------------------------
End of AIList Digest
********************
∂18-Apr-86 0117 LAWS@SRI-AI.ARPA AIList Digest V4 #93
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Apr 86 01:16:49 PST
Date: Thu 17 Apr 1986 22:14-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #93
To: AIList@SRI-AI
AIList Digest Friday, 18 Apr 1986 Volume 4 : Issue 93
Today's Topics:
Queries - Machine Translation & Nontrivial Expert Systems,
Representation - Shape,
Project - Real-Time Machine Learning,
News - Max the Robot,
Review - Canadian AI Newsletter, March 1986
----------------------------------------------------------------------
Date: Wed, 16 Apr 86 08:42:55 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Points Arising.
Re: Computer Dialogue (Vol 4 # 57 & 58).
I wonder how (many) psychological "strokes" were exchanged
in this conversation?
Re: Machine Translation (Vol 4 # 67 & 70).
Impressive claims are made for machine translation systems;
are there any systems that could produce a precis (summary)
of a large document?
Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj
"They who often look far into the distance have excellent vision."
------------------------------
Date: 16 Apr 86 04:11:07 GMT
From: ihnp4!lzaz!psc@ucbvax.berkeley.edu (Paul S. R. Chisholm)
Subject: Non-trivial expert systems
< The electronic funds transfer is in the electronic mail. . . . >
For those of you who remember my Usenet posting in November, I'm
*still* looking for MS-DOS based expert systems for a review. I'll
shortly post a much longer list of such packages. For the moment, I
want to deal with something more fundamental.
I want to show the difference between an expert system and a simple
decision tree. Yet in most of the examples (and the one software package)
I've seen, there *is* no difference . . . or at least, the one can be
transformed into the other.
Nearly all the expert system shells are actually based on
productions: if <foo> and <bar> and ... then <glarch>. The conditions
can be arbitrarily complicated, but usually involve testing "global
variables" against constant values, and possibly a bit of trivial
arithmetic. The consequent asserts that yet another global variable has
some constant value. There are wrinkles: the most common is having
variables that are "local" to a rule. This isn't strictly necessary,
but saves a lot of tedious, repetitious rule writing.
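The production style described above can be sketched as a small
forward-chaining loop. The rules and facts below are invented for
illustration; they are not from any of the packages under review.

```python
# Minimal forward-chaining production system: each rule tests "global
# variables" in working memory against constant values, and its
# consequent asserts a value for another variable.

rules = [
    # (conditions, (variable, value-to-assert))
    ({"sky": "overcast", "barometer": "falling"}, ("forecast", "rain")),
    ({"forecast": "rain"}, ("advice", "take umbrella")),
]

def forward_chain(memory, rules):
    changed = True
    while changed:                       # fire rules until quiescence
        changed = False
        for conditions, (var, value) in rules:
            if all(memory.get(k) == v for k, v in conditions.items()) \
                    and memory.get(var) != value:
                memory[var] = value      # the consequent asserts a value
                changed = True
    return memory

facts = forward_chain({"sky": "overcast", "barometer": "falling"}, rules)
```

Note that this control structure alone says nothing about question
ordering, which is why such rule sets so often collapse into a tree.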
In theory, it's always possible to treat each production as a node,
and construct a tree of questions without knowing any of the answers
ahead of time. This disturbs me, though I realize generating a
meaningful tree is nontrivial.
In practice, damn near *everyone* draws that tree first, then
writes the rules. This is missing the point! If your "expert"
knowledge is that trivial, you don't need logic, just a branch follower.
I tried drawing a "subway network", and writing rules of how to get
from one station to another. This isn't very instructive: forward
chaining doesn't find anything like an optimal solution, and backwards
chaining takes every damn trip. (*sigh* - Can you tell my first AI
course was taught out of Nilsson's PROBLEM-SOLVING METHODS IN ARTIFICIAL
INTELLIGENCE? Nilsson thought all AI reduced to search.)
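For what it's worth, the subway problem is more naturally cast as
shortest-path search than as rule chaining: a breadth-first search of
the kind covered in Nilsson's book finds an optimal route directly.
The network below is invented, not any real subway.

```python
# Breadth-first search over a toy station network: guaranteed to find
# a route with the fewest stops, unlike forward or backward chaining.

from collections import deque

network = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
           "D": ["B", "C", "E"], "E": ["D"]}

def shortest_route(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path                  # first route found is shortest
        for nxt in network[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                          # goal unreachable
```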
I've seen one sample expert system that didn't reduce to a tree,
and I'm not at all sure simple shells can solve it! C. J. Culbert (sp?)
of NASA sent me a wonderful "monkey and banana" system that requires
about eighty inferences. The solution involves things like "move the
ladder under the red box, climb up and get the green key out, climb down
and move the ladder under the green box, climb up, unlock the green box
with the green key, get the blue key . . ." Very nice, but the expert
system shells I've seen can't handle time, e.g., "first move the ladder
to A, then once you've finished this subtask, move the ladder to B".
Once a value (e.g., ladder location) is deduced, it's hard or impossible
to change. "Undoing" isn't always kosher either: if I have a glass of
milk, I can quench my thirst or make butter, but once I've done one . . .
What are my points?
+ First and foremost, I'd like an expert system that can be solved with
simple productions. It shouldn't be an example provided by a vendor I'll
review; that'd potentially give his or her product an edge.
+ Second, I'd like some reassurance that production-based expert
systems go beyond decision tree programs. Please don't flame to the net
on this one. I'm posting to both Usenet and Arpanet groups; if you send
me mail (I'm reachable from both), I'll summarize and repost.
Sorry to ramble, thanks in advance for your help, and I'll post the
MS-DOS expert systems as soon as I can.
--
-Paul S. R. Chisholm, UUCP {ihnp4,cbosgd,pegasus,mtgzz}!lznv!psc
AT&T Mail !psrchisholm, Internet mtgzz!lznv!psc@topaz.rutgers.edu
The above opinions may not be shared by any telecomm company.
------------------------------
Date: Sun, 13 Apr 86 11:45:51 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: shape
Ken-
The work done so far on shape falls loosely into the classes of
``solid modeling'' and ``functionality.'' The questions you ask in the
AI digest were more from the solid modeling camp -- i.e. Can I put this
peg in this hole? I'm not incredibly familiar with that literature, but some
recent work in automated manufacturing (a big buzzword these days) has included
this type of work. Dana Nau (dsn@maryland) has a paper which will be appearing
in a new journal (I forget the name, it's one of the new ``applied AI''
journals) which discusses a frame based approach to solid modeling. He also
has several tech. reports at U of Maryland on such a topic.
The functionality work is more my line (I have a couple of students starting
thesis work on it now). This research has traditionally focused on
recognizing objects from functional information (i.e.
from the dialogue
John: Do you know the time
Mary: 1 PM
we infer that Mary has a watch)
I'm looking into expanding it in a direction that starts to interact more
with the solid modeling stuff. I'd like a description of a ``cleaver''
such that I could INFER that it is a weapon. At the moment most systems
(including my own planners) must have this information stored explicitly.
It is this that my students are now looking into.
Hope this is of some help
Jim Hendler
Ass't Professor
University of Maryland
Hendler@maryland
------------------------------
Date: Wed, 16 Apr 86 13:28:00 est
From: Stanley Letovsky <letovsky@YALE.ARPA>
Subject: shape
To: laws@sri-iu
Ernie Davis at NYU was working on this problem; I read a
manuscript of his entitled something like "Buttons, Rakes and Rings"
last year. I don't know if he published it anywhere but you might ask
him what the status of the work is. He was trying to define an ontology
for qualitative and loose quantitative reasoning about shape. -Stan
------------------------------
Date: Thu, 17 Apr 86 13:45:04 PST
From: Scott Turner <srt@LOCUS.UCLA.EDU>
Reply-to: srt@ucla-cs.UUCP (Scott Turner)
Subject: Re: Shape
Jack Hodges of the UCLA Artificial Intelligence Laboratory is working on
EDISON, a program that invents mechanical devices. Jack is in England this
week presenting his work at a conference, so I'm standing in for him in
presenting some references that might be useful for understanding "hooks
and rings".
First of all, the Edison project looks at naive invention: the kind of
tinkering that backyard inventors or (one supposes) children do. There is
no complex of mathematical forces in the project. It focuses instead on
the issues of creativity and problem-solving. How ←does← one get that
great idea? The reference:
EDISON: An Engineering Design Invention System Operating Naively,
Hodges, Dyer, Flowers, Tech Report UCLA-AI-85-20, Dec. 1985
There has been a lot of work done on naive physics. The reference I'm
aware of is:
Hayes, P.J., "The Second Naive Physics Manifesto," pp. 467-486
in ←Readings in Knowledge Representation←, ed. Brachman & Levesque,
Morgan Kaufman, 1985
There has been some work done on object representation, primarily:
Lehnert, W.G., ←The Process of Question Answering←, LEA 1978.
(see Chapter 9).
Rieger, C. "An Organization of Knowledge for Problem Solving and
Language Comprehension", pp. 487-508, ←Readings in Knowledge
Representation←...
Wasserman, K. and Lebowitz, M., "Representing Complex Physical
Objects," ←Cognition and Brain Theory←, 6(3), pp. 259-285 (1983)
Finally, for the particular area of children's problem solving:
DeBono, E., ←Children Solve Problems←, Penguin, NY 1980.
And not to overlook work by Forbus, DeKleer and Brown, though I won't
bother to type in the cites.
That should get you started.
Scott R. Turner
ARPA: (now) srt@UCLA-LOCUS.ARPA (soon) srt@LOCUS.UCLA.EDU
UUCP: ...!{cepu,ihnp4,trwspp,ucbvax}!ucla-cs!srt
FISHNET: ...!{flounder,crappie,flipper}!srt@fishnet-relay.arpa
------------------------------
Date: Wed, 16 Apr 86 15:46:06 pst
From: malkoff@nprdc.arpa (Don Malkoff)
Subject: real-time machine learning
The "REAL-TIME MACHINE LEARNING LABORATORY" has been established
at the Navy Personnel Research and Development Center, Code 71,
San Diego, CA 92152-6800.
Ongoing work includes:
1. Real-time fault detection and diagnosis in complex control
systems, involving random time variability, and
2. Automated sonar detection and classification.
These and other related project areas make use of machine learning
techniques.
For information contact Don Malkoff, (619) 225-6617.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Max the Robot
Source: KRLD News, Dallas
A man, for unknown reasons, barricaded himself in an apartment. Since he
had a history of explosive law violations, the police did not want to
enter the apartment. They did not even know if he was still alive as
he was not talking to them and a friend said he was extremely depressed.
They sent in Max the Robot, a tank-like entity complete with camera and
manipulator. It smashed through the window and pushed a drape aside
whereupon the man, in astonishment, left the apartment peacefully without
any shots being fired. [...]
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Review - Canadian AI Newsletter, March 1986
Canadian Artificial Intelligence March 1986 Issue Number 7
Biographies of new officers, letter from someone protesting the research
effort into emulating the brain.
Discussion of Intelligent Computer-Aided Instruction research in
Canada, bemoaned the scarcity of AI researchers in Canada with psychology
training.
Specific Projects:
University of Alberta: automate the teaching of statistics first course
Bell Northern Research: modelled personality interactions between expert
and builder and developed an artificial expert in horticulture
University of Calgary: training materials for medical students including
the use of videodisc
University of Calgary: user modelling project
Concordia University: application of Pask's "Conversation Theory" to
team decision support and a course assembly and tutorial environment
which runs on Apples
ForceTen enterprises: expert system and natural language interface
to be part of their courseware development system. The product runs
on IBM PC's
National Research Council: Project to develop adaptive computer-
based training with natural language interface.
University of Saskatchewan: Project to develop Lisp teacher and
conception corrector, programming environment for first year students
University of Waterloo: attempt to model students in a manner independent
of system used
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
The Canadian Society for Fifth Generation Research has signed an
agreement of understanding with the Japanese ICOT for exchange of
technical information and research meetings. This is the first
collaboration that ICOT has signed with a foreign organization.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Machine Vision International has established its head office in Ottawa.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
"Interest in artificial intelligence and expert systems is relatively
new in Canada"
From a Canadian government report on expert systems by the Office of
Industrial Innovation, Department of Regional Industrial Expansion
October 1985
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Reviews of
Artificial Intelligence: A Personal, Commonsense Journey by
William R. Arnold and John S. Bowie (an introduction to AI
for lay readers but got a poor review.)
Progress in Artificial Intelligence by Luc Steels and John A. Campbell
(collection of papers from the 1982 European Conference on artificial
intelligence)
(Some short reviews as well)
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
List of AI Tech reports from Canadian Universities
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Summary of the third and fourth University of Waterloo-University of
Western Ontario AI workshops, Queen's University Expert System Workshop
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Obituary for Paul A. Kolers, a researcher in psychology of visual perception
------------------------------
End of AIList Digest
********************
∂18-Apr-86 0430 LAWS@SRI-AI.ARPA AIList Digest V4 #94
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Apr 86 04:30:09 PST
Date: Thu 17 Apr 1986 22:40-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #94
To: AIList@SRI-AI
AIList Digest Friday, 18 Apr 1986 Volume 4 : Issue 94
Today's Topics:
Philosophy - Consciousness
----------------------------------------------------------------------
Date: 14 Apr 86 07:44:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: More wrangling on consciousness
> >me: Briefly, we believe other people are conscious
> >for TWO reasons: 1) they are capable of certain clever activities,
> >like holding English conversations in real-time, and 2) they
> >have brains, just like us, and each of us knows darn well that
> >he/she is conscious.
>
> Nigel Goddard: Personally I think that the only practical
> criterion (i.e. the ones we use when judging whether this
> particular human or robot is "conscious") are performance ones.
> Is a monkey conscious ?. If not, why not ? There are people I
> meet who I consider to be very "unconscious", i.e. their stated
> explanations of their motives and actions seem to me to
> completely misunderstand what I consider to be the *real*
> explanations. Nevertheless, I still think they are conscious
> entities, and the only way I can rationalize this paradox is
> that I think they have the ability to learn to understand the
> *real* reasons for their actions. This requires an ability to
> abstract and to make an internal model of the self, which may be
> the main factors underlying what we call consciousness.
At the technical level, I think it's simply wrong to dismiss
brains as a criterion for consciousness - if mechanism M
causes C (consciousness) and enables P (performance), then
clearly it is an open question whether something that can do P,
but does not have M, does or does not have C.
At the "gut" level I think the whole tenor of the reply misses
the point that consciousness is a very "low-level", primitive
sort of phenomenon. Do severely retarded persons have "the
ability to learn to understand the *real* reasons for their
actions...an ability to abstract and to make an internal model
of the self" ? or cows, or cats? Yet no one, I hope, doubts
that they are conscious (eg, can feel pain, experience shapes,
colors, sounds). This has very little to do with any clever
information processing capabilities. And it is these "raw
feelings" that a) are essential to what most people mean by
consciousness and b) seem least susceptible to implementation by
Lisp machines, regardless of size.
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: 13 Apr 86 09:50:25 GMT
From: ihnp4!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
I think Paul King is right on the mark in his comments about the nature
of feelings, instincts, and conscious awareness. Paul's point about
a system having a world-model which includes the system itself as an
entity within that world model is perhaps the most salient point in
his article. Self-diagnosis, self-reconfiguration, and self-repair
are already found in complex computer installations. Self-perpetuation
is the higher-level goal of those three capabilities. The first
industrial robots were put to work to build--you guessed it--more
industrial robots. So we have self-reproduction, as well. In the
case of industrial robots, evolution is speeded up by the hand of
the creator, who introduces new models through intelligent intervention.
We no longer have to wait for a serendipitous random perturbation to
yield a more successful offspring. In my original Computer Dialogues
#1 and #2, I playfully introduced a pair of self-programming computers
who gradually developed a protocol for mutual self-learning. I think
it may be possible, by the end of the millennium, to create the first
rudimentary Artificial Sentient Being.
--Barry Kort ...ihnp4!houxm!hounx!kort
------------------------------
Date: 13 Apr 86 22:07:13 GMT
From: tektronix!uw-beaver!ssc-vax!eder@ucbvax.berkeley.edu (Dani Eder)
Subject: Re: Computer Dialogue
> Before the recent tragedy, there had been a number of
> instances where the space shuttle computers aborted the mission in the
> final seconds before launch. My explanation for this was that the
> on-board computers were displaying a form of 'programmed survival
> instinct.' In short: they were programmed to survive, and if the
> launch had continued, they might not have.
>
In almost every countdown there have been delays because some
measured parameter of the vehicle was out of tolerance. The ground
launch sequencer, which controls events from t-9 minutes to t-25 seconds,
and the onboard computers, which control events in the last 25 seconds,
are required because there are too many time critical events for humans
to handle. They command a series of actions, such as opening a valve, and
take measurements from sensors, such as the temperature in the
combustion chamber. When a sensor reading is outside allowable limits,
the software stops the countdown and attempts to return the vehicle to
a 'safe' condition.
Earlier in the countdown, events occur at a slower pace, and humans
monitoring the data coming from the sensors have often called a halt to
the operation. The Shuttle system, men and machines, is designed to operate
under the rule 'do not launch unless all the data says it is safe to do so'.
Because the early 1970's technology used in the Shuttle is marginal for
a reusable transportation system, EVERYTHING has to be working just right
for a successful launch.
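The abort logic Eder describes (command an event, check the relevant sensor, halt and "safe" the vehicle on any out-of-tolerance reading) can be sketched roughly as follows. This is purely illustrative; all names, sensors, and limits here are invented, not the actual Shuttle software:

```python
# Hypothetical sketch of a ground-launch-sequencer check loop.
# Each event commands an action, then verifies one sensor reading
# against its allowed band; any violation stops the countdown.

def run_countdown(events, read_sensor):
    """events: list of (action_name, sensor_name, low, high).
    read_sensor: callable mapping sensor_name -> measured value.
    Returns ('launch', []) or ('abort', [offending_sensor])."""
    for action, sensor, low, high in events:
        # Command the action (e.g. open a valve), then take the measurement.
        value = read_sensor(sensor)
        if not (low <= value <= high):
            # Out of tolerance: return the vehicle to a 'safe' condition.
            return ("abort", [sensor])
    return ("launch", [])

# Example: a combustion-chamber temperature out of limits stops the count.
events = [("open_lox_valve", "lox_pressure", 20.0, 40.0),
          ("ignite_engines", "chamber_temp", 0.0, 3300.0)]
readings = {"lox_pressure": 31.5, "chamber_temp": 3500.0}
print(run_countdown(events, readings.get))   # ('abort', ['chamber_temp'])
```

The point of the sketch is only the rule Eder states: do not launch unless all the data says it is safe to do so.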
The computers used onboard the Shuttle are too dumb even to be programmed
for survival. If there is an in-flight abort that requires returning to the
ground from halfway to orbit, the pilot must turn a rotary switch on the
console to choose between returning to Florida and landing in Senegal. The
switch controls loading of data and routines into the computers. This was
required because the software for flying the Shuttle runs ~500k of code, and
the computers can only handle 64k. The decision routines for which part of
the software to swap in were left in the pilot's head.
Dani Eder/Advanced Space Transportation/Boeing/ssc-vax!eder
------------------------------
Date: 14 Apr 86 19:26:11 GMT
From: decvax!linus!faron!rubenk@ucbvax.berkeley.edu (Ruben J. Kleiman)
Subject: Re: Natural Language processing
In article <3500011@uiucdcsp> bsmith@uiucdcsp.CS.UIUC.EDU writes:
>
>You are probably correct in your belief that Wittgenstein is closer to
>the truth than most current natural language programming. I also believe
>it is impossible to go through Wittgenstein with a fine enough toothed
>comb. However, there are a couple of things to say. First, it is
>patently easier to implement a computer model based on 2-valued logic.
>The Investigations have not yet found a universally acceptable
>interpretation (or anything close, for that matter). To try to implement
>the theories contained within would be a monumental task. Second, in
>general it seems that much AI programming starts as an attempt to
>codify a cognitive model. However, considering such things as grant
>money and egos, when the system runs into trouble, an engineering-type
>solution (ie, make it work) is usually chosen. The fact that progress
>in AI is slow, and that the great philosophical theories have not yet
>found their way into the "state of the art," is not surprising. But
>give it time--philosophers have been working hard at it for 2500 years!
>
>Barry Smith
Whoever believes that "engineering-type solution[s]" are the consequence
of small grants or large egos:
1. should be able to conceive of an implementation of some concept (or the
concept of an implementation) which does not involve
"engineering-type solutions."
2. should NOT be able to give form to the notion of a "lag"
between research ("great philosophical theories") and
implementation ("state of the art").
- Ruben
------------------------------
Date: Tue, 15 Apr 86 15:08:54 EST
From: tes%bostonu.csnet@CSNET-RELAY.ARPA
Subject: please include
In volume 4, Issue 87, Ray Trent wrote
> Please define this concept of "consciousness" before
> using it. Please do so in a fashion that does not resort
> to saying that human beings are mystically different
> from other animals or machines. Please also avoid self-
> important definitions. (e.g. consciousness is what humans
> have)
...
> The above request also applies to the term "desire".
...
> My definition of these concepts ["desires" and "feelings"]
> would say that they "are" the actions that a life process
> take in response to certain stimuli.
Bravo, but with qualification:
Mr. Trent has the "physicalist" point of view, which recognizes ONLY
objects and phenomena describable in the "language of science" (what
I mean by this is the language that deals exclusively with molecules,
gravity, velocity, entropy, etc.).
This general view of the universe is FINE, and it's obviously powerful
and result-oriented (look at what we have done with it in the last
few hundred years). BUT - this view is not the only one, or the
"right" one, in any sense.
I'll bet that the term "consciousness" is undefinable in the language of
science, and therefore useless to the physicalists. (I have a hunch
that physicalists cannot get any further than a behavioral or
mechanistic description of conscious beings). Therefore,
in discussions about the mind like the kind that is going on in AIList,
perhaps one thing that should be made clear by each participant is whether
he or she is assuming the "scientific" or some "non-scientific" viewpoint.
If one is to adopt the physicalist approach, I agree with Ray Trent that
terms like "desire" and "feeling" and "consciousness" can only be used
if they have been (sorry if I'm putting words into Ray's mouth here)
mechanistically defined.
Tom Schutz
CSNET: tes@bu-cs
ARPA: tes%bu-cs@csnet-relay
UUCP: ...harvard!bu-cs!tes
------------------------------
Date: Tue, 15 Apr 86 15:29:13 EST
From: tes%bostonu.csnet@CSNET-RELAY.ARPA
Subject: One more little thing
Just a brief question here:
Nigel Goddard wrote in Volume 4 Issue 87
> I meet [people] who I consider to be very "unconscious",
> i.e. their stated explanations of their motives and actions
> seem to me to completely misunderstand what I consider to
> be the *real* explanations.
What, by Jove, is a "*real* explanation" ??????????????????????
I can't digest my food properly until I find out.
Tom Schutz
CSNET: tes@bu-cs
------------------------------
Date: 14 Apr 86 13:19:55 GMT
From: ihnp4!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue
I enjoyed Ray Trent's rejoinder to Paul King's article on computer
feelings and self-awareness. In particular the description of the
relational database system--as an entity that collects and organizes
information into an abstract model that it then uses to interact with
the world--was most suggestive. Now if we give that system some further
rules of logic and assign it some goals, could we turn it into a "rational
database system"? (I would give it the goal of nudging the external world
into one which operates more successfully than the current implementation.)
--Barry Kort ...ihnp4!houxm!hounx!kort
------------------------------
End of AIList Digest
********************
∂21-Apr-86 0157 LAWS@SRI-AI.ARPA AIList Digest V4 #95
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 21 Apr 86 01:56:25 PST
Date: Sun 20 Apr 1986 23:17-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #95
To: AIList@SRI-AI
AIList Digest Monday, 21 Apr 1986 Volume 4 : Issue 95
Today's Topics:
Seminars - Learning Robots, Approximate Theories (Rutgers) &
A Localized Model of Concurrency (SRI) &
Analogical Representations in Naive Physics (Edinburgh) &
Learning Apprentice Systems (SU) &
A Theory of Diagnosis (SU) &
A Formal Logic for Planning (UPenn) &
Editorial Comprehension in Op-Ed (UTexas) &
Run-Length Code for Geographical Information (SMU),
Conference - Symbolics User Group National Symposium &
Workshop on Engineering Design &
ACM SIGMOD & Design Automation & Computers and Math
----------------------------------------------------------------------
Date: 15 Apr 86 11:03:24 EST
From: PRASAD@RED.RUTGERS.EDU
Subject: Seminar - Learning Robots, Approximate Theories (Rutgers)
MACHINE LEARNING COLLOQUIUM
Learning Robots as Users and Refiners of Approximate Theories
Tom Mitchell
Rutgers University
11 AM, April 29, 1986
#423, Hill Center
This talk will describe some recent (and fairly tentative) research
toward building a learning robot. The robot is viewed as having an
approximate theory of its world, which it uses to guide problem
solving, and which is in turn refined as the robot gains experience.
The initial theory may contain fairly abstract assertions such as
"executing motor commands causes changes in the configuration of parts
of oneself", "coming into physical contact with a rigid object often
causes changes in its position", and "changes in the configuration of
physical objects correlate with changes in the visual appearance of
the object". This abstract theory is used by the robot to construct
PLAUSIBLE plans for achieving its goals. When these plans are
executed, the world provides feedback--training data which is useful
for refining the theory. This training data is generalized by a
combined explanation based/empirical method (the approximate theory is
used to construct plausible explanations which are verified and refined
empirically). Tom Fawcett and I have recently begun implementing parts
of this system, but many open research issues remain.
------------------------------
Date: Wed 16 Apr 86 11:59:17-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - A Localized Model of Concurrency (SRI)
A LOCALIZED MODEL OF CONCURRENCY
Fernando Pereira (PEREIRA@SRI-AI)
SRI International, AI Center
11:00 AM, MONDAY, April 21
SRI International, Building E, Room EJ228 (new conference room)
In this talk I will give an informal overview of a structural theory
of concurrency that I have been developing with Luis Monteiro. The
main goal of our theory is to model the way in which local
interactions between components of a system lead to global behavior.
The theory, which is based on the mathematical concept of sheaf,
allows us to model precisely the idea of processes interacting
through common behavior at shared locations. In contrast to
behavioral models, ours keeps track of the individual contributions
of subsystems to overall system behavior, allowing a finer-grained
analysis of subsystem interactions.
From event signatures that specify relations of independence and exclusivity
between events, we construct spaces of locations where activity may occur.
Behaviors are then modeled as elements of sheaves of monoids over those
spaces and processes as certain sets of behaviors. The construction of the
model, and in particular its avoidance of interleaving, gives it very
convenient mathematical properties --- sheaves of behavior monoids are to
event signatures what free monoids are to alphabets. The theory also allows
us to identify on purely structural grounds event signatures with a
potential for deadlock.
Time permitting, I will engage in rambling speculation as to possible
applications of the theory.
VISITORS: Please arrive 5 minutes before the seminar (10:55), as
you must now be escorted from the reception desk.
------------------------------
Date: Mon, 14 Apr 86 11:07:53 GMT
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@cs.ucl.ac.uk>
Subject: Seminar - Analogical Representations in Naive Physics (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday 16th April 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room - F10
80 South Bridge
EDINBURGH EH1 1HN.
Professor Bernard Meltzer, Joint Research Centre, Ispra Establishment, Italy,
will give a seminar entitled - ``Analogical Representations in
Modelling Naive Physics".
Ideas and experimental results will be presented on the use of
analogical representations of knowledge, in Sloman's sense, that is,
ones which bear a structural similarity to what is represented. This
was done for the qualitative modelling of the everyday behaviour of
objects and substances like strings, liquids and gases, represented by
pixel sets built up from message-passing between adjacent base
elements. These messages embody a very small number of local
constraints derived from naive observation such as material continuity
and non-copenetrability.
Based as they are on fundamental phenomenological properties of the
physical world, these programs turned out to have capacities for
solving other problems than those for which they were designed.
The use of such programs in integrated reasoning and problem-solving
systems, and the relationship of this approach to those of classical
physics and current AI ones in qualitative physics will also be
discussed.
------------------------------
Date: Wed 16 Apr 86 10:10:57-PST
From: Anne Richardson <RICHARDSON@SU-SCORE.ARPA>
Subject: Seminar - Learning Apprentice Systems (SU)
DAY: April 22, 1986
EVENT: CS 520 AI Seminar
PLACE: Terman Auditorium
TIME: 11:00
TITLE: Learning Apprentice Systems
PERSON: Tom Mitchell
FROM: Rutgers University
This talk introduces a class of knowledge-based systems called
Learning Apprentices: systems that provide interactive aid in solving
some problem and acquire new knowledge by observing the actions of
their users. The talk focuses on a particular Learning Apprentice,
called LEAP, which is presently being developed in the domain of
digital circuit design. By analyzing circuit fragments contributed by
its users, LEAP infers rules that allow it to recommend similar
circuits in subsequent cases. We discuss the type of problem solving
architecture, knowledge organization, and learning methods required to
support such learning apprentices in a variety of domains.
------------------------------
Date: 16 Apr 86 1742 PST
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - A Theory of Diagnosis (SU)
A THEORY OF DIAGNOSIS FROM FIRST PRINCIPLES
Raymond Reiter
Department of Computer Science, University of Toronto
and
The Canadian Institute for Advanced Research
Thursday, April 17, 4pm
MJH 252
Suppose we are given a description of a system, together with an
observation of the system's behaviour which conflicts with the way
the system is meant to behave. The diagnostic problem is to determine
those components of the system which, when assumed to be functioning
abnormally, will explain the discrepancy between the observed and
correct system behaviour.
We propose a general theory for this problem. The theory requires
only that the system be described in a suitable logic. Moreover, there
are many such suitable logics, e.g., first order, temporal, dynamic,
etc. As a result, the theory accommodates diagnostic reasoning in a wide
variety of practical settings, including digital and analogue circuits,
medicine, and database updates. The theory leads to an algorithm for
computing all diagnoses, and to various results concerning principles
of measurement for discriminating between competing diagnoses. Finally,
the theory reveals close connections between diagnostic reasoning and
non-monotonic reasoning.
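In Reiter's theory, diagnoses turn out to be the minimal sets of components that intersect ("hit") every conflict set implied by the observation. The toy sketch below illustrates that characterization by brute force; it is not Reiter's actual algorithm (which uses hitting-set trees), and the component names are invented:

```python
# Illustrative only: compute diagnoses as minimal hitting sets of
# conflict sets, by enumerating candidate sets in order of size.
from itertools import combinations

def diagnoses(components, conflicts):
    """components: set of component names.
    conflicts: list of sets; each set is a conflict (not all of its
    members can be working normally).
    Returns the minimal hitting sets, smallest first."""
    found = []
    for size in range(len(components) + 1):
        for cand in combinations(sorted(components), size):
            s = set(cand)
            if any(set(f) <= s for f in found):
                continue            # a smaller diagnosis already covers this
            if all(s & c for c in conflicts):
                found.append(cand)  # minimal: hits every conflict set
    return found

# Two conflicts {A,B} and {B,C}: either B alone is faulty, or A and C are.
print(diagnoses({'A', 'B', 'C'}, [{'A', 'B'}, {'B', 'C'}]))
# [('B',), ('A', 'C')]
```

The enumeration is exponential, which is exactly why the paper's results on measurement and discrimination between competing diagnoses matter in practice.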
------------------------------
Date: Wed, 16 Apr 86 14:22 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - A Formal Logic for Planning (UPenn)
COLLOQUIUM
3pm Thursday, April 17, 1986
216 Moore School - University of Pennsylvania
A FORMAL LOGIC THAT SUPPORTS PLANNING WITH EXTERNAL EVENTS
AND CONCURRENT ACTIONS
Richard Pelavin - University of Rochester
A formal logic will be presented that provides a foundation for a theory of
plans in temporally rich domains. These domains include actions that occur
over intervals that may overlap in time. Thus, we can represent plans with
concurrent actions. We also can treat domains with external events, i.e.
actions by other agents and natural forces, that the planner may need to
interact with. These interactions include the prevention of an event, the
assurance of the successful completion of an event, and the performance of an
action that is enabled by some external event.
The logic is an extension of a linear time logic (Allen's interval logic) with
a modal operator expressing temporal possibility and a counterfactual-like
modality that can be used to encode what can and cannot be done by the planning
agent. The semantic model consists of a set of possible worlds related by two
accessibility relations in terms of which the modalities are interpreted. The
approach of interpreting a counterfactual-like modality in terms of an
accessibility relation derives from Lewis' and Stalnaker's semantic theories of
conditionals.
------------------------------
Date: Thu, 17 Apr 86 11:09:17 CST
From: Rose M. Herring <roseh@ratliff.CS.UTEXAS.EDU>
Subject: Seminar - Editorial Comprehension in Op-Ed (UTexas)
University of Texas
Computer Sciences Department
COLLOQUIUM
SPEAKER: Sergio Alvarado
University of California at Los Angeles
TITLE: Editorial Comprehension in OpEd Through Argument Units
DATE: Tuesday, April 22, 1986
PLACE: TAY 3.144
TIME: 11 - 12 noon
OpEd (Opinions to/from the Editor) is a computer program
that reads short polito-economic editorial segments and answers
questions about their contents. For OpEd, understanding editori-
als involves: (1) applying a large amount of domain-specific
knowledge; (2) recognizing beliefs and belief relationships; (3)
following reasoning about plans and goals; (4) applying abstract
knowledge of argumentation; (5) mapping text into conceptual
representation; and (6) indexing recognized concepts for later
retrieval during question answering.
Here, I discuss OpEd's abstract knowledge of argumenta-
tion. In OpEd, knowledge of argument structure is organized by
memory structures called Argument Units (AUs). These structures
package belief support and attack relationships and reasoning
chains. When combined with domain-specific knowledge, AUs can be
used to understand and generate arguments involving plans, goals,
and beliefs. Thus, argument comprehension is viewed in OpEd fun-
damentally as the process of accessing and instantiating these
units.
A description of OpEd's architecture and examples of its
current input/output behavior are also presented.
COFFEE AT 10:30 in TAY 3.128
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Run-Length Code for Geographical Information (SMU)
A Spatial Knowledge Structure Based on Run-Length-Code for a Geographical
Information System
Speaker: Erland Jungert
Illinois Institute of Technology
Location: 315SIC, Southern Methodist University, CS
Time: 3PM
Run-Length-Code (RLC) is an example of a simple data structure used
mainly for compacting images. A method where RLC is used as an object
oriented data structure for Geographical Information Systems (GIS) will be
presented. The usage of this object structure as a basis for spatial
reasoning while regarding the RLC-objects as part of a spatial knowledge
structure will be discussed.
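Run-Length-Code itself is simple to state: a row of pixels is stored as (value, run-length) pairs. A minimal encoder/decoder, for illustration only (the talk's object-oriented GIS structure built on RLC is of course much richer):

```python
def rlc_encode(row):
    """Encode a sequence as a list of (value, run_length) pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((v, 1))               # start a new run
    return runs

def rlc_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 1, 1, 0]          # one scan line of a binary image
print(rlc_encode(row))            # [(0, 3), (1, 2), (0, 1)]
assert rlc_decode(rlc_encode(row)) == row
```

For imagery with long uniform regions, as in many geographic layers, the run list is far shorter than the raw pixel row, which is what makes RLC attractive as a base structure for spatial reasoning.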
------------------------------
Date: Sun, 20 Apr 86 14:45:46 pst
From: grover@aids-unix (Mark Grover)
Subject: Conference - Symbolics User Group National Symposium
Registration materials are now available to the National Symposium of the
Symbolics Lisp Users Group, to be held June 2-6, 1986 at Georgetown University
in Washington, DC. Both tutorials and technical sessions will be held. The
theme of this year's Symposium is "Programming in Style". Many interesting
and exciting guests are expected. Registration materials and housing
information can be obtained via telephone or US mail to:
Symbolics National Symposium
Attn: Annmarie Pittman
655 15th St. NW #300
Washington, DC 20005
(202) 639-4228
------------------------------
Date: Thu, 17 Apr 86 08:40:33 -0500
From: sriram@ATHENA.MIT.EDU
Subject: Conference - Workshop on Engineering Design
AAAI-86 WORKSHOP ON
KNOWLEDGE-BASED EXPERT SYSTEMS FOR ENGINEERING DESIGN
In the 60's, AI researchers explored weak methods applicable to a very
broad class of problems. In the 70's, we created knowledge-intensive,
"strong" methods for solving quite specific types of problems. A major
trend in the 80's is to identify coherent problem classes of
intermediate generality; the proof of coherence is in the further
identification of correspondingly general problem-solving methods. For
instance, "classification problems" have been defined, a general but
still knowledge-based classification problem-solving process and
system architecture have been laid out, and tools exist for
facilitating development of classification systems.
Design problems also appear to constitute a coherent problem class. At
present, however, we are only beginning the enterprise of: defining
this class; formalizing a model of the design problem-solving process
and design system architecture; and creating tools for developing
design systems.
To model (and ultimately facilitate) human designers and their
enormous flexibility in terms of conventional AI "primitives" requires
integrating such diverse functions as refinement techniques,
constraint reasoning, and goal satisfaction, and encoding these
functions in such varied forms as rules, heuristics, and algorithms.
Viewing design tools as "knowledge-based expert systems" provides a
framework for capturing such diversity.
The purpose of this workshop is to provide a forum in which both
engineers and computer scientists can discuss knowledge-based
frameworks for organizing and developing useful engineering design
systems.
TOPICS TO BE DISCUSSED
1. Definition of "a design problem".
2. A general model of the design process.
3. Knowledge representation formalisms for design.
4. Problem-solving strategies required for design.
5. Existing frameworks for design.
6. Existing architectures for design aids.
7. Capabilities and tools desired by engineers.
8. Automated vs. interactive design aids.
9. Software environments and tools for developing design aids.
ORGANIZERS
Sriram [sriram@athena.mit.edu] and Chris Tong [tong@red.rutgers.edu]
PARTICIPATION
The workshop will take place on Monday, August 11, at the University
of Pennsylvania. Participation in the workshop is by invitation,
limited to 35 participants. Those wishing to be invited should submit
four copies of a 1000-word abstract describing their work in AI and
engineering design to Sriram, 1-253b, Dept. of Civil Engineering,
M. I.T., Cambridge, MA 02139 OR to Chris Tong, Dept. of Computer
Science, Rutgers University, New Brunswick, NJ 08903. The deadline
for application is May 30, 1986. Invitations will be issued by July 1.
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Conferences - ACM SIGMOD & Design Automation & Computers and Math
1986 ACM SIGMOD International Conference on the Management of Data
May 28-30 1986 Washington DC
Session 6a Logic and Databases 11:00 - 12:30 Thursday May 30
A. Van Gelder "A Message Passing Framework for Logical Query Evaluation"
A. Rosenthal, S. Heller, U. Dayal, F. Manola "Traversal Recursion:
A Practical Approach to Supporting Recursive Applications"
G. Gardarin, C. DeMaindreville "Evaluation of Database Recursive Logic
as Recurrent Function Series"
Session 7 Query Processing Thursday May 30 2:00 - 3:30
J. C. Freytag "Rule Based Transformation of Relational Queries into
Interactive Programs"
Session 9a Rule Based Systems
M. T. Harandi T. Schang S. Cohen "Rule Base Management Using Meta
Knowledge"
T. Imielinski "Query Processing in Deductive Databases with Incomplete
Information"
Q. Chu "A Rule-Based Object/Task Modelling Approach"
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Twenty-Third ACM/IEEE Design Automation Conference
June 29-July 2, 1986 Las Vegas, Nevada
Session 4 Intelligent Systems Time Monday 10:30 - 12:00
An Expert System Paradigm for Design
Forrest D. Brewer, Daniel D. Gajski
University of Illinois at Urbana
Session 12 Timing Verification Monday 4:00 - 5:30
Reasoning About Digital Systems Using Temporal Logic
G. Venkatesh
Session 14 Test Generation Techniques Tuesday 8:00 - 10:00
A Heuristic Chip-Level Test Generation Algorithm
Daniel S. Barclay, James R. Armstrong
Virginia Polytechnic Institute
Session 29 Hardware Design Languages Tuesday 3:30 - 5:30
A Design Rule Database System to Support Technology Adaptable Applications
Hilary J. Kahn, J. S. Aude
University of Manchester
Session 34 Expert Systems for Design Automation Wednesday 8:00 - 10:00
A Rule-Based Logic Circuit Synthesis System for CMOS Gate Arrays
Takao Saito, Hiroyuki Sugimoto, Masami Yamazaki, Nobuaki Kawato
Fujitsu Labs
FLUTE - A Floorplanning Agent for Full Custom VLSI Design
Hiroyuki Watanabe, Bryan Ackland
AT&T Bell Laboratories, Holmdel NJ
Knowledge-Based Optimal Ill Circuit Generator From Conventional Logic
Descriptions
T. Watanabe, T. Masuishi, T. Nishiyama, N. Horie
Hitachi
PEARL: An Expert System for Power Supply Layout
Ed DeJesus
DEC
Session 38 Short Papers: Representing and Manipulating VLSI Design
Wednesday 10:30 - 12:00
Precedent Based Reasoning About VLSI Structures
Richard H. Lathrop, Robert S. Kirk (MIT and Gould AMI, respectively)
A Frame Based System for Representing Knowledge About VLSI Design
Hassan K. Reghbati, W. Stephen Adolph, Amar Sanmugasundam
Simon Fraser University
Session 39 Timing Verification Wednesday 10:30 - 12:00
A Rule Based Approach to Unifying Functional and Fault Simulation and
Timing Verification
Sumit Ghosh
AT & T
Session 42 Database II Wednesday 1:30 - 3:30
Rules-Based Object Clustering: A Data Structure for Symbolic VLSI
Synthesis and Analysis
Robert P. Larsen Rockwell International Corporation
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A Conference on Computers and Mathematics July 30 - August 1
Stanford University
Woodrow Bledsoe
Automated Theorem Proving and Artificial Intelligence
Rudiger Loos
Tarski's Dream
------------------------------
End of AIList Digest
********************
∂22-Apr-86 0111 LAWS@SRI-AI.ARPA AIList Digest V4 #96
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Apr 86 01:11:12 PST
Date: Mon 21 Apr 1986 23:02-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #96
To: AIList@SRI-AI
AIList Digest Tuesday, 22 Apr 1986 Volume 4 : Issue 96
Today's Topics:
Queries - Technical Report Sources & FRL and HEARSAY-III &
Lisp Machine Information & AI Conference in Long Beach &
National Youth Science Camp Alums & Battleware,
Projects - EDISON Project & Non-DoD Funding,
Techniques - String Reduction,
Publications - Journal Prices,
Applications - Compuscan Page Reader
----------------------------------------------------------------------
Date: Fri 18 Apr 86 17:17:43-PST
From: Daniel Davison <DAVISON@SUMEX-AIM.ARPA>
Subject: How do I get technical reports?
There were several technical reports mentioned in a recent AIlist that I'd
like to get...but I don't know how. Would some kind soul send me a note
about how to get tech reports from (1) LSU and (2) CMU?
Thanks,
dan davison
davison@sumex-aim.arpa
------------------------------
Date: 24 Apr 86 21:35:30 GMT
From: ihnp4!houxm!whuxl!whuxlm!akgua!gatech!seismo!mcvax!euroies!rreilly
@ucbvax.berkeley.edu
Subject: FRL and HEARSAY-III
I have two queries:
(1) Does anybody have any information on public domain
implementations of Goldstein's FRL or any frame system in
Lisp?
(2) I believe that CMU make available an empty version of HEARSAY
(HEARSAY-III I think). Does anybody have any details on this?
Has anybody used the system?
Thanks in advance.
--
...mcvax!euroies!rreilly (Ronan Reilly)
Educational Research Centre, St Patrick's College
Dublin 9, Ireland.
------------------------------
Date: Mon, 21 Apr 1986 12:23:33 EST
From: WALLFESH%UCONNVM.BITNET@WISCVM.WISC.EDU
Subject: Lisp Machine information sought
Can anyone suggest any papers on Lisp machines, particularly
those which stress their architectural aspects? I'm attempting
to write a paper on Lisp machines for my computer architecture
class, but I cannot seem to find many references.
Thanks,
Sande Wallfesh (wallfesh@uconnvm.bitnet, wallfesh@carcvax.csnet)
CS Dept. Box U-157
University of Connecticut
Storrs, CT 06268
------------------------------
Date: Fri, 18 Apr 86 14:14:17 cst
From: Girish Kumthekar <kumthek%lsu.csnet@CSNET-RELAY.ARPA>
Subject: AI Conference in Long Beach
I am trying to find information about an AI conference in Long Beach, CA to
be held by the end of this month (April).
I looked in recent issues of AI magazine but could not get info.
If you have any info as to exact dates,
whom to contact, tel. # or address, etc., I would appreciate it if you could mail
it to me at
kumthek%lsu@csnet-relay.csnet
Thanks in advance
Girish kumthekar (504)-388-1495
------------------------------
Date: Sun 20 Apr 86 11:23:59-PST
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: NATIONAL YOUTH SCIENCE CAMP ALUMS
I am an alum of the National Youth Science
Camp (California 1975), which was a great boost for me going into
science, and I was wondering how many other NYSC alums are involved in
AI . NYSC Alums please send me your name , and the state and year you
represented, and also what you're doing now if you want to. Thanks.
-Lee Altenberg
------------------------------
Date: Fri, 18 Apr 86 11:42 EST
From: JOHNSON%northeastern.csnet@CSNET-RELAY.ARPA
Subject: Rampant misunderstanding, speculation and questions.
I've been reading the April CACM (my first issue). One of the
sections seems to be a debate over battle management software. It seems
that nobody is sure whether or not battleware would do the "right" (if
there is a right) thing in a real battle. Also, no one is sure if it
ever can be proven one way or the other.
I'm not a military historian. What little I do know tells me that
one doesn't necessarily win a battle with logic or one set of rules.
History seems to say that you start losing wars if the other side
doesn't play by the rules or invents their own.
I've been seeing shorts in the ailist of late dealing with inexact
logic. I don't claim to understand it. I'm not at all certain,
however, that I'd want a piece of ai battleware defending me even if it
did have some kind of rudimentary understanding of non-binary logic. In an
impossible situation where the other side doesn't play by the rules,
battleware might decide to give up or pull the doomsday switch.
I guess I have several questions here. How inexact is inexact? Can
battleware be made to recognize that new rules exist and adapt to them
in real-time? Can battleware be made to fully understand constraints?
If my cat has fleas and fire kills fleas, burning the cat will eliminate
the fleas. Problems are usually easier to solve without constraints.
I'd rather battleware (or any ai program) didn't think like this. I
know a little about how rules can be made to work in Prolog. I'm not
sure how one would go about defining rules for a battle. Can there really
be inexact (fuzzy) rules? Any references on this?
A side issue is this, can battleware be made to run in real-time
at all? One of the ideas of having it is because humans can't
assimilate all necessary information in time to act on it in a highly
electronic war. I don't think this is just an issue of having 10 Crays
doing the job.
I don't know. Maybe 20,000 toasters running in parallel would be
just as good as two or three Crays running battleware. Maybe I just
don't understand the problem. (I'm new to ai obviously.) Can we please
define or describe some things in words of one syllable or less? (yes I
know graph is one syllable.) Besides defining inexact, can anyone point
me to a GOOD beginner something or other on ai. I've seen several and
they're all not very good yet (or I might be missing the point).
Chris Johnson
Northeastern University
johnson@northeastern.csnet
------------------------------
Date: Fri, 18 Apr 86 11:34:41 PST
From: Scott Turner <srt@LOCUS.UCLA.EDU>
Subject: Re: EDISON Project
Before I get inundated with requests for the Edison report, all UCLA
Technical Reports can be ordered through:
Brenda Ramsey
UCLA Dept. of Computer Science
3713 Boelter Hall
801 Hilgard Ave.
Los Angeles, CA 90024
ramsey@ucla-cs.arpa
(213) 825-2778
-- Scott
------------------------------
Date: Fri 18 Apr 86 17:18:30-PST
From: Daniel Davison <DAVISON@SUMEX-AIM.ARPA>
Subject: Summary of non-DoD funding for AI
I recently asked about non-DOD sources of funding for AI projects. I received
three replies. Two cited NSF and one said NIH (but not which institute[s])
fund AI work. Of course, NIH funds this facility (SUMEX); apparently smaller
grants are funded also.
dan davison
davison@sumex-aim.arpa
------------------------------
Date: 18 Apr 86 16:59:06 GMT
From: ihnp4!stolaf!mmm!umn-cs!amit@ucbvax.berkeley.edu (Neta Amit)
Subject: Re: String reduction
In article <1031@eagle.ukc.ac.uk> sjl@ukc.ac.uk (S.J.Leviseur) writes:
>Does anybody have any references to articles on string reduction
>as a reduction technique for applicative languages (or anything
>else)? They seem to be almost impossible to find! Anything welcome.
String reduction as a model of computation was suggested by
A.A. Markov in his 1954(?) paper, and was proved to be equivalent in
power to the other two general models of computation (the Turing
machine and the lambda calculus).
A Markov algorithm consists of an input string S and a program P, which
is a sequence of BNF-like productions LHS --> RHS. The evaluator scans
P top to bottom (that's sequencing), looking for a match between a
substring of S and the LHS of a production (that's conditionals). When
a match is found, the RHS replaces the matching substring in S, and
the scanning is restarted from the top of P (that's looping). Let's
not consider termination and error conditions. As stated here, this
isn't a purely applicative model, but there is no inherent reason why
the new S couldn't in fact be new!
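[The evaluator just described is small enough to sketch; Python notation
here. The halt-when-no-production-matches convention and the step limit
are this sketch's simplifications, since Markov's formulation uses
explicit terminal productions:]

```python
def markov(program, s, max_steps=10000):
    """Evaluate a Markov algorithm: scan the productions top to bottom,
    rewrite the leftmost occurrence of the first LHS found in s, then
    restart the scan; halt when no production applies."""
    for _ in range(max_steps):
        for lhs, rhs in program:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # leftmost occurrence only
                break                       # restart from the top of P
        else:
            return s                        # no match anywhere: halt
    raise RuntimeError("step limit exceeded (possibly non-terminating)")

# unary addition: erase the '+' between two runs of 1s
print(markov([("+", "")], "111+11"))  # -> 11111
# move all a's to the left of all b's
print(markov([("ba", "ab")], "bab"))  # -> abb
```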
Michael Barnett of Brooklyn College, CUNY, (a Chemist turned Computer
Scientist) has recently suggested (See Sigplan Notices in the last 12
months) that it may be possible to synthesize molecules that will do
string substitution (a biological computer) and that this might be a
good model to describe the functionality of the human brain.
If I understand correctly, you are looking for an applicative model,
in which functions cause string-substitution instead of returning
values. Notice that this is the mechanism used by parametrized macro
expansion (so you can easily simulate an applicative string reduction
machine in Pure Lisp, using macros alone.)
A guy named Karl Fant, from Honeywell Research (in Minneapolis), has
been developing an applicative string-reduction model, but I don't
think he has published in a publicly available journal.
Anyway, I would be interested in expanding this discussion.
Cheers,
--Neta CSNET: amit@umn-cs
ARPA: amit%umn-cs@csnet-relay.ARPA
UUCP: ...ihnp4!umn-cs!amit
------------------------------
Date: 19 Apr 86 15:35:22 GMT
From: ihnp4!stolaf!mmm!srcsip!meier@ucbvax.berkeley.edu (Christopher Meier)
Subject: Re: String reduction
In article <994@umn-cs.UUCP> amit@umn-cs.UUCP (Neta Amit) writes:
>A guy named Karl Fant, from Honeywell Research (in Minneapolis), has
>been developing an applicative string-reduction model, but I don't
>think he has published in a publicly available journal.
>
Karl can be reached at ihnp4!srcsip!fant (or through any path I can...).
--
Christopher Meier MN65-2300 {osu-eddie,okstate,bthpyd}\
S&RC Signal and Image Processing {ihnp4,philabs,gnutemp}\
3660 Technology Drive (612) {hyper,umn-cs,mmm,meccts}!srcsip!meier
Mpls, MN 55413 782-7191
------------------------------
Date: Mon, 21 Apr 86 07:15:35 -0500
From: sriram@ATHENA.MIT.EDU
Subject: Journal Prices
In a recent message on the AI bboard, some of you raised concerns
about the subscription price for the AI IN ENGG. JOURNAL. As a
co-editor of the journal, I forwarded the message to Mr. Lance
Sucharov, the publisher director of the journal. His reply follows.
Sriram
Dear Friends:
I have just received a copy of the message "Journal prices hit the
moon!" on the AI list bulletin board and I feel this needs an answer
and is an excellent opportunity to put the publishing record straight.
The journal referred to is our new launch the "INTERNATIONAL JOURNAL
FOR AI IN ENGINEERING", which has aroused a good deal of interest
internationally and in this one case, controversy over the price!
First of all, may I say I sympathise with the writer in wishing to
keep prices down as much as possible and thereby give the widest
audience to published information. However, although the writer
mentions the authors, reviewers and editorial board members who give
their services freely, he makes no mention of others who do not:
typesetters, printers, agents, editorial staff, designers, editors,
and so on.
How significant are these costs? Significant enough for some journals
to close. Many others of important academic interest only limp along
financially; as publishers, we would be better off putting our
money in a bank. At the same time some publishers are extremely
profitable: a well-known name and a powerful marketing arm all help. But
it is by having some profitable journals that more marginal ones can
be propped up, and new launches can be afforded, which can take several
years to pay their way. Furthermore, for overseas publishers the US
market can be expensive because of postage, uncertainty in dollar
movements and banking costs. Other publishers have pitched their
prices higher than ours, and I can mention the "IMA JOURNAL OF NUMERICAL
ANALYSIS" ($172), "COMPUTER SYSTEMS SCIENCE AND ENGINEERING" ($166),
"IMAGE AND VISION COMPUTING" ($171), and there are many more.
Finally, I am delighted that authors and the editorial team do provide
a service just as we provide a service in disseminating their work
world-wide.
Sincerely yours,
Lance Sucharov
------------------------------
Date: 18 Apr 1986 13:16:39-EST
From: kushnier@NADC
Subject: Compuscan
COMPUSCAN
I had an opportunity to see a demonstration of the CompuScan Model 230
Page reader. This is a desk top device which looks like a small copier,
and using optical character recognition, can read a page of text and
send it as an ASCII file to a waiting IBM-PC or other micro over an RS-232
port. The unit sells for about $6K.
I had provided several different font sets on several qualities of paper.
Also, I used originals and text that had been Xeroxed several times.
As for the results...
If I put on my R&D AI hat, then the results were both exciting and thought
provoking. I was amazed at the number of characters that the machine was
able to successfully recognize- especially for a machine in that price
category. Although it usually took longer than the advertised 30 seconds
per page, it was tolerable.
The results were inconsistent. If you put the same page in twice, the
results, both errors and correct characters, would come out differently
each time. Now, this is not to say that the machine was "screwing up". It
WAS, in each case, following a set of rules based on what it perceived
to be a specific character. It was going through a set of probabilities
and percentages and, based on the result, printed a particular character.
As a deterministic programmer, I found that this at first rubbed me the
wrong way. After all, we are dealing with a computer here... No matter how
many times you put in 2+3, it should always equal 5. Not so with AI-type
solutions. Although this inconsistency should be minimized in the design,
the AI programmer must recognize the possibility of its occurrence.
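[To make the point concrete, here is a toy sketch, in Python notation, of
threshold-based character classification of the general kind described
above. The scores, the 0.5 reject threshold, and the function name are
invented for illustration and have nothing to do with CompuScan's actual
proprietary internals:]

```python
def classify(scores):
    """Pick the most probable candidate character, rejecting weak matches.
    scores: dict mapping candidate characters to match probabilities."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= 0.5 else "?"  # "?" marks a reject

print(classify({"O": 0.62, "0": 0.58, "Q": 0.12}))  # -> O
print(classify({"O": 0.31, "0": 0.29, "Q": 0.12}))  # -> ?
```

[Small perturbations in the scanned image shift the scores, which is
exactly why two passes over the same page need not agree.]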
If I put on my Office Manager's hat, then I would say that COMPUSCAN is
not quite ready to come into everyday service. It made too many mistakes
to provide efficient page-to-text translation. This was especially true when
the quality of the documents and the font type varied. One page had been
slightly cocked when Xeroxed. This played havoc with the optical recognition
software.
Compuscan is promising a new generation Model 240, to be out shortly. I
am interested in seeing what improvements are made.
For INFO write to: Compuscan Inc.
81 Two Bridges Rd./Bldg.2
Fairfield, N.J. 07006
TEL: (201) 575-0500
This review is a personal opinion and does not reflect any official view of the
government or any one else in the world. - Ron Kushnier
Ron Kushnier
kushnier@nadc.arpa
------------------------------
End of AIList Digest
********************
∂22-Apr-86 0324 LAWS@SRI-AI.ARPA AIList Digest V4 #97
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Apr 86 03:24:46 PST
Date: Mon 21 Apr 1986 23:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #97
To: AIList@SRI-AI
AIList Digest Tuesday, 22 Apr 1986 Volume 4 : Issue 97
Today's Topics:
Bibliography - Recent Articles #7
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #7
%A H. Bernstein
%T Determining the Shape of a Convex n-sided Polygon by Using 2n+k Tactile
Probes
%R 125 R29
%D JUN 1984
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI07 AI06
%A A. Tuzhilin
%A P. Spirakis
%T A Semantic Approach to Correctness of Concurrent Executions
%R 130
%D JUL 1984
%I New York University, Courant Institute, Department of Computer
Sciences
%K AA08
%A E. Davis
%T Shape and Function of Solid objects: Some Examples
%R 137
%D OCT 1984
%I New York University, Courant Institute, Department of Computer
Sciences
%A C. O'Dunlaing
%A M. Sharir
%A C. Yap
%T Generalized Voronoi Diagrams for Moving a Ladder: I Topological
Analysis
%R 139 R32
%D NOV 1984
%I New York University, Courant Institute, Department of Computer
Sciences
%A C. O'Dunlaing
%A M. Sharir
%A C. Yap
%T Generalized Voronoi Diagrams for Moving a Ladder: II Efficient
Construction of the Diagram
%D NOV 1984
%R 140 R33
%I New York University, Courant Institute, Department of Computer
Sciences
%A M. Bastuscheck
%T Look Up Table Computation for A Ratio Image Depth Sensor
%R 141 R34
%D NOV 1984
%I New York University, Courant Institute, Department of Computer
Sciences
%A J. Schwartz
%A M. Sharir
%A A. Siegel
%T An Efficient Algorithm for Finding Connected Components of a Binary
Image
%R 154 R38
%D FEB 1985
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI06
%A D. Cantone
%A A. Ferro
%A J. Schwartz
%T Decision Procedures for Elementary Sublanguages of Set Theory VI.
Multi-Level Syllogistic Extended by the Power Set Operator
%R 156
%D FEB 1985
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI11
%A E. Kishon
%A X. D. Yang
%T A Video Camera Interface for High Speed Region Boundary Locations
%R 157 R40
%D FEB 1985
%I New York University, Courant Institute, Department of Computer
Sciences
%K AI06
%R 13
%A N. V. Findler
%A H. Klein
%A W. Gould
%A A. Kowal
%A J. Menig
%T (1) Studies on decision making using the game of poker;
(2) Computer experiments on the formation and optimization
of heuristic rules
%I SUNY Buffalo Computer Science
%R 14
%A N. V. Findler
%A D. Chen
%T On the problems of time, retrieval of temporal
relations, causality and co-existence
%I SUNY Buffalo Computer Science
%R 15
%A G. T. Herman
%A J. A. Jackowski
%T A decision procedure using discrete geometry
%I SUNY Buffalo Computer Science
%K AI14
%R 20
%A N. V. Findler
%T Short note on a heuristic search strategy
%I SUNY Buffalo Computer Science
%K AI03
%A N. V. Findler
%T Heuristic programmers and their gambling machines
%A H. Klein
%A A. Kowal
%A Z. Levine
%A J. Menig
%I SUNY Buffalo Computer Science
%K AI03
%R 72
%A G. T. Herman
%T A decision procedure using the geometry of convex sets
%A P. W. Aitchison
%I SUNY Buffalo Computer Science
%K AI14
%R 84
%A G. T. Herman
%A A. V. Lakshminarayanan
%A S. W. Rowland
%T The reconstruction of objects from shadowgraphs
with high contrasts
%D August 1974
%I SUNY Buffalo Computer Science
%K AI06
%R 91
%A G. T. Herman
%A A. Lent
%T A computer implementation of a Bayesian analysis of image
reconstruction
%D January 1975
%I SUNY Buffalo Computer Science
%K AI06
%R 92
%A A. V. Lakshminarayanan
%T Reconstruction from divergent ray data
%D January 1975
%I SUNY Buffalo Computer Science
%K AI06
%R 93
%T Iterative relaxation methods for image reconstruction
%A G. T. Herman
%A A. Lent
%A P. H. Lutz
%D July 1975
%I SUNY Buffalo Computer Science
%K AI06
%R 99
%A N. V. Findler
%T Studies in machine cognition using the game of Poker
%D June 1975
%I SUNY Buffalo Computer Science
%K AA17
%X A progress report is presented of our on-going research efforts
concerning human decision making under uncertainty and risk,
human problem solving and learning processes, on one hand,
and machine learning, large scale programming systems
and novel programming techniques, on the other.
%R 103
%A G. T. Herman
%T Quadratic optimization for image reconstruction, Part I
%A A. Lent
%I SUNY Buffalo Computer Science
%K AI06
%R 104
%A P. H. Lutz
%T Fourier image reconstruction incorporating
three simple interpolation techniques
%I SUNY Buffalo Computer Science
%K AI06
%R 110
%A T. L. Roy
%T A contribution to the Poker Project: The
development of and experience with a
Statistically Fair Player
%D May 1976
%I SUNY Buffalo Computer Science
%K AA17
%X This paper is a report on my efforts over the past
several months, in the development of a Player Function
for the Poker System, called the Statistically Fair Player.
%R 111
%A J. N. Shaw
%T Multi-Pierre, a learning robot system
%D May 1976
%I SUNY Buffalo Computer Science
%K AI07
%X The goal of this project is to simulate several robots
under partial human control,
and operating in a "lifelike" environment.
The robots have an overall goal of survival and an "instinct"
to explore their environment.
The project is an extension of an existing system
which has a single organism functioning in a similar environment.
The environment consists of a flat terrain,
populated with three-dimensional objects of varying types,
sizes and shapes.
%R 115
%A T. W. Chen
%A N. V. Findler
%T Toward analogical reasoning in problem
solving by computers
%D December 1976
%K AA17
%I SUNY Buffalo Computer Science
%X We attempt in the present paper
to investigate Analogical
Reasoning (AR) detached from specific tasks and to formulate
its general principles so that it may become a component of
problem solving programs as much as the means-ends analysis
has been shown to be one in the literature on GPS.
%R 119
%A S. C. Shapiro
%T A Scrabble crossword game playing program
%I SUNY Buffalo Computer Science
%K AA17
%R 127
%A J. K. Cipolaro
%A N. V. Findler
%T MARSHA, the daughter of ELIZA \- a simple
program for information retrieval in
natural language
%I SUNY Buffalo Computer Science
%K AA02 AA14
%R 130
%T SNARK77: A programming system for the reconstruction
of pictures from projections
%A G. T. Herman
%A S. W. Rowland
%D January 1978
%I SUNY Buffalo Computer Science
%K AI06
%R 134
%T On the Bayesian approach to image reconstruction
%A G. T. Herman
%A H. Hurwitz
%A A. Lent
%A H. P. Lung
%D June 1978
%I SUNY Buffalo Computer Science
%K AI06
%R 141
%A N. V. Findler
%T A heuristic information retrieval system based on
associative networks
%D February 1978
%I SUNY Buffalo Computer Science
%K AI12 AA14
%R 145
%A E. Artzy
%T Boundary detection of internal organs in mini-computers
%I SUNY Buffalo Computer Science
%K AI06 AA01
%R 147
%A S. N. Srihari
%T On choosing measurements for invariant pattern recognition
%D September 1978
%I SUNY Buffalo Computer Science
%K AI06
%R 152
%A J. Case
%A S. Ngo\ Manuelle
%T Refinements of inductive inference
by Popperian machines
%I SUNY Buffalo Computer Science
%K AI04
%R 154
%A J. Case
%A C. H. Smith
%T Comparison of identification criteria
for mechanized inductive inference
%I SUNY Buffalo Computer Science
%K AI04
%R 155
%A C. H. Smith
%T Finite covers of inductive inference machines
%I SUNY Buffalo Computer Science
%K AI04
%R 164
%A D. P. McKay
%A S. C. Shapiro
%T MULTI \- a Lisp based multiprocessing system
%I SUNY Buffalo Computer Science
%K H03 T01
%R 169
%A E. M. Gurari
%A H. Wechsler
%T On the difficulties involved in the
segmentation of pictures
%I SUNY Buffalo Computer Science
%K AI0
%R 170
%A M. M. Yau
%A S. N. Srihari
%T Recursive generation of hierarchical
data structures for multidimensional digital images
%I SUNY Buffalo Computer Science
%K AI06
%R 171
%A A. S. Maida
%A S. C. Shapiro
%T Intensional concepts in propositional semantic networks
%I SUNY Buffalo Computer Science
%R 172
%A S. N. Srihari
%A J. J. Hull
%A R. Bo\o'z\(hc'inovi\o'c\(aa'
%T Representation of contextual knowledge
in word recognition
%I SUNY Buffalo Computer Science
%K AI02
%R 173
%A S. C. Shapiro
%T COCCI: a deductive semantic network
program for solving microbiology unknowns
%I SUNY Buffalo Computer Science
%K AA10
%R 174
%A J. E. S. P. Martins
%A D. P. McKay
%A S. C. Shapiro
%T Bi-directional inference
%I SUNY Buffalo Computer Science
%R 175
%A J. E. S. P. Martins
%A S. C. Shapiro
%T A belief revision system based on relevance
logic and heterarchical contexts
%I SUNY Buffalo Computer Science
%R 177
%A S. N. Srihari
%A M. E. Jernigan
%T Pattern recognition
%I SUNY Buffalo Computer Science
%K AI06
%R 178
%A K. J. Chen
%T Tradeoffs in machine inductive inference
%I SUNY Buffalo Computer Science
%K AI04
%R 179
%A J. G. Neal
%T A knowledge engineering approach to natural language understanding
%I SUNY Buffalo Computer Science
%K AI02
%R 183
%A R. K. Srihari
%T Combining path-based and node-based inference in SNePS
%I SUNY Buffalo Computer Science
%R 184
%A S. N. Srihari
%A J. J. Hull
%T Experiments in text recognition with binary
\fIn\fP-gram and Viterbi algorithms
%I SUNY Buffalo Computer Science
%R 186
%A H. Shubin
%T Inference and control in multiprocessing environments
%I SUNY Buffalo Computer Science
%K H03
%R 187
%A N. V. Findler
%T A preliminary report on a multi-level learning technique
using production systems
%I SUNY Buffalo Computer Science
%K AI04 AI01
%R 188
%A N. V. Findler
%A E. J. M. Morgado
%T Morph-fitting \- an effective technique of approximation
%I SUNY Buffalo Computer Science
%K AI02
%R 189
%A N. V. Findler
%A N. M. Mazur
%A B. B. McCall
%T A note on computing the asymptotic form of
a limited sequence of decision trees
%I SUNY Buffalo Computer Science
%R 190
%A N. V. Findler
%A J. E. Brown
%A R. Lo
%A H. Y. You
%T A module to estimate numerical values of
hidden variables for expert systems
%I SUNY Buffalo Computer Science
%K AI01
%R 192
%A S. N. Srihari
%A J. J. Hull
%A R. Choudhari
%T An algorithm for integrating
diverse knowledge sources in text recognition
%I SUNY Buffalo Computer Science
%K AI06
%R 193
%A G. L. Sicherman
%T The Advice-Taker/Inquirer
%I SUNY Buffalo Computer Science
%R 194
%A N. V. Findler
%T Toward a theory of strategies
%I SUNY Buffalo Computer Science
%R 195
%A S. Moriya
%T An algebraic structure theory
of rule sets, I: a formalization
of both production systems and decision tables
%I SUNY Buffalo Computer Science
%R 196
%A N. V. Findler
%T An overview of the Quasi-Optimizer system
%I SUNY Buffalo Computer Science
%R 197
%A N. V. Findler
%A G. L. Sicherman
%A B. B. McCall
%T A multi-strategy gaming environment
%I SUNY Buffalo Computer Science
%R 198
%A L. M. Tranchell
%T A SNePS implementation of KL-ONE
%I SUNY Buffalo Computer Science
%R 199
%A M. M. Yau
%T Generating quadtrees of cross-sections from octrees
%I SUNY Buffalo Computer Science
%R 202
%A G. L. Sicherman
%T Parsley 1.1: A general text parser in LISP
%D April 1983
%I SUNY Buffalo Computer Science
%K T01 AI02
%R 203
%A J. E. S. P. Martins
%D May 1983
%T Reasoning in multiple belief spaces
%I SUNY Buffalo Computer Science
%R 204
%A J. T. Nutter
%D October 1983
%T Default reasoning in A.I. systems
%I SUNY Buffalo Computer Science
%R 206
%A P. F. Kung
%A S. L. Hardt
%T Understanding `Circuit Stories;' or,
Using Micro PAM to explain VLSI systems
%D December 1983
%I SUNY Buffalo Computer Science
%K AA04
%R 207
%T Grinlib \- Grinnell graphics in Lisp
%A P. Schlossman
%A S. L. Hardt
%D 1983
%K T01
%R 208
%T Correcting and translating ill-formed ship messages
%A J. Rosenberg
%A M. E. Haefner
%A S. L. Hardt
%D January 1984
%I SUNY Buffalo Computer Science
%R 209
%T A step towards a friendly psychiatric diagnosis tool
%A P. Schlossman
%A G. K. Phillips
%A S. L. Hardt
%D April 1984
%I SUNY Buffalo Computer Science
%K AA01
%R 210
%T Developing a knowledge-based psychiatric
diagnostic tool: The investigation of opportunistic processing
%A M. E. Haefner
%A S. L. Hardt
%D February 1984
%I SUNY Buffalo Computer Science
%K AA01
%R 211
%T Naive physics and the physics of diffusion; or, When intuition fails
%A S. L. Hardt
%D June 1984
%I SUNY Buffalo Computer Science
%K AA16
%R 212
%T From CD to mandarin Chinese: The language generation project
%A M. Y. Lo
%A S. L. Hardt
%D August 1984
%I SUNY Buffalo Computer Science
%X The investigation reported here is centered on
the development of the Chinese language generator, SINO-MUMBLES.
This natural language generator takes as input a CD expression
and expresses its meaning in Mandarin Chinese.
The program is based on the English generator, MICRO-MUMBLE
and on an earlier version of the Chinese generator developed
in our project.
%R 213
%T Knowledge based parsing
%A J. G. Neal
%A S. C. Shapiro
%D May 1984
%I SUNY Buffalo Computer Science
%K AI02
%X An extremely significant feature of any Natural Language (NL)
is that it is its own meta-language.
One can use a NL to talk about the NL itself.
One can use a NL to tutor a non-native speaker, or other poor
language user, in the use of the same NL.
We have been exploring methods of knowledge
representation and NL Understanding (NLU) which would allow an
Artificial Intelligence (AI) system to play the role of
poor language user in this setting.
The AI system would have to understand NL utterances about how
the NL is used, and improve its NLU abilities according to this
instruction.
It would be an NLU system for which the domain being discussed
in NL is the NL itself.
%R 214
%T Optical character recognition
techniques in mail sorting: A review of algorithms
%A J. J. Hull
%A G. Krishnan
%A P. W. Palumbo
%A S. N. Srihari
%D June 1984
%I SUNY Buffalo Computer Science
%K AI06
%X A study of Optical Character Recognition
(OCR) techniques employed in automatic mail sorting equipment
is presented.
Methods and algorithms for image preprocessing,
character recognition, and contextual postprocessing
are discussed and compared.
The objective of this study is to provide a background
in the state-of-the-art of this equipment
as the first element in a search for techniques
to significantly improve the capabilities of postal address recognition.
%R 215
%T Belief representation and quasi-indicators
%A W. J. Rapaport
%D August 1984
%I SUNY Buffalo Computer Science
%K AI02
%X This thesis is a study in "knowledge" representation,
specifically, how to represent beliefs expressed by
sentences containing quasi-indicators.
An \fIindicator\fP is a personal or demonstrative pronoun
or adverb used to make a strictly demonstrative reference.
A \fIquasi-indicator\fP is an expression that occurs within
an intentional context and that represents a use of an indicator
by another speaker.
E.g., if John says, "I am rich," then if \fIwe\fP say,
"John believes that he himself is rich," our use of `he himself'
is quasi-indexical.
Quasi-indicators pose problems for natural-language
question-answering systems, since they cannot be
replaced by any co-referential noun phrases without changing
the meaning of the embedding sentence.
Therefore, the referent of the quasi-indicator must be represented
in such a way that no invalid co-referential claims are entailed.
%R 216
%T Searle's experiments with thought
%A W. J. Rapaport
%D November 1984
%I SUNY Buffalo Computer Science
%X A critique of several recent objections to John Searle's
Chinese Room argument against the possibility of strong AI
is presented.
The objections are found to miss the point,
and a stronger argument against Searle is presented,
based on a distinction between syntactic and semantic
understanding.
%R 217
%T Review of Lambert's \fIMeinong and the
Principle of Independence\fP
%A W. J. Rapaport
%D November 1984
%I SUNY Buffalo Computer Science
%K AI08
%X This is a critical study of Karel Lambert's
\fIMeinong and the Principle of Independence.\fP
Alexius Meinong was a turn-of-the-century philosopher
and psychologist who played a role in the early development
of analytic philosophy, phenomenology, and Gestalt psychology.
His theory of objects has become of increasing relevance
to intensionally-based semantics and, hence, ought to be
of interest to AI researchers in the field of knowledge
representation.
Lambert's book explores the relevance of Meinong's theory
to free logics.
%R 85-01
%T Recognition of off-line cursive handwriting:
A case of multi-level machine perception
%A R. M. Bo\o'z\(hc'inovi\o'c\(aa'
%D March 1985
%I SUNY Buffalo Computer Science
%K AI06
%X Cursive script recognition by computer (CSR)
is the problem of transforming language from
the form of cursive human handwriting to one of digital
text representation.
Off-line CSR involves elements of computer vision
at a low level of processing
and those of language perception and understanding at
a higher level.
The problem is approached in this work
as a multi-level machine perception problem
in which an image of a cursive script word is transformed
through a hierarchy of representation levels.
Four distinct levels are employed,
based on descriptions that use pixels, chain codes, features
and letters, before the final
word level of representation is obtained.
%R 85-05
%A P. B. Van\ Verth
%T A system for automatic program grading
%D May 1985
%I SUNY Buffalo Computer Science
%K AA08 AA07
%X This doctoral dissertation presents an automated
system for grading program quality based upon a mathematical model
of program quality.
Our research investigates whether such a system
will perform at least as well as, and perhaps even do better than,
human graders.
%R 85-06
%A J. G. Neal
%T A knowledge-based approach to natural language understanding
%D May 1985
%I SUNY Buffalo Computer Science
%K AI01 AI02
%X In this thesis we present a language processing expert system
that we have implemented in the role of an educable cognitive
agent whose domain of expertise is language understanding
and whose discourse domain includes its own language knowledge.
We present a representation of language processing knowledge
and a core of knowledge, including a Kernel Language, which forms
the knowledge base for this AI system.
%R 85-07
%A S. L. Hardt
%A J. Rosenberg
%A M. E. Haefner
%A K. S. Arora
%T The three ERIK\-AMVER progress reports
%D July 1985
%I SUNY Buffalo Computer Science
%K AA18
%X This is a collection of three progress reports
submitted by our group
to the U. S. Coast Guard.
The reports chart the development of the ERIK
(Evaluating Reports using Integrated Knowledge) system.
The system's design and implementation were orchestrated
by Jay Rosenberg.
The final report as well as the manuals for the system
can be found elsewhere.
%R 85-08
%A S. L. Hardt
%A J. Rosenberg
%T The ERIK project: Final report and manuals
%D July 1985
%I SUNY Buffalo Computer Science
%K AA18 H02
%X The ERIK system is a computer program
that was developed to interpret ship reports for the
United States Coast Guard.
The system is now completed and installed in the Coast Guard's
AMVER Center on Governors Island.
It was running in a testing mode on a dedicated DEC VAX-11/730
system running VMS, from February to June 1985.
The final system will be running on a Symbolics Lisp Machine
in July 1985.
This report provides a brief description of the project, the system,
and user manuals.
The latter contains a detailed description of the theory
behind the system and the necessary
implementation and maintenance information.
%A S. N. Srihari
%A J. J. Hull
%A P. W. Palumbo
%A D. Niyogi
%A C. H. Wang
%T Address recognition techniques in mail sorting:
Research directions
%R 85-09
%D August 1985
%I SUNY Buffalo Computer Science
%K AI06 AI02
%X This report is a discussion of techniques of computer vision,
pattern recognition, and language processing
relevant to the problem of mail sorting as well as a presentation of the
results of preliminary experiments with several new techniques applied
to letter mail images.
------------------------------
End of AIList Digest
********************
∂22-Apr-86 0605 LAWS@SRI-AI.ARPA AIList Digest V4 #98
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 22 Apr 86 06:04:49 PST
Date: Mon 21 Apr 1986 23:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #98
To: AIList@SRI-AI
AIList Digest Tuesday, 22 Apr 1986 Volume 4 : Issue 98
Today's Topics:
Bibliography - Recent Articles #8
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles #8
Definitions
D BOOK26 Qualitative Reasoning about Physical Systems\
%E Daniel G. Bobrow\
%D 1985\
%I MIT PRESS\
%X 504 pages $232.50 ISBN 02218-4
D BOOK27 Visual Cognition\
%E Steven Pinker\
%D 1985\
%I MIT PRESS\
%X 296 pages $17.50 paper ISBN 16103-6
__________________________________________________________________________
%A Gyungho Lee
%A Clyde P. Kruskal
%A David J. Kuck
%T An Empirical Study of Automatic Restructuring of Nonnumerical Programs
for Parallel Processors
%J IEEE Transactions on Computers
%V C-34
%N 10
%P 927-933
%D OCT 1985
%K H03
%A W. Daniel Hillis
%T The Connection Machine
%I MIT Press
%D 1985
%K H03 AT15
%X $22.50 ISBN 08157-1 175 pages
%A Richard P. Gabriel
%T Performance and Evaluation of Lisp Machines
%I MIT Press
%D 1985
%K T01 H02 AT15
%X $22.50 ISBN 07093-6 350 pages
%A Michael J. O'Donnell
%T Equational Logic as a Programming Language
%I MIT Press
%D 1985
%K AI10 AT15
%X $25.00 ISBN 15028-X 300 pages
%A Ehud Y. Shapiro
%T Algorithmic Program Debugging
%I MIT Press
%D 1983
%K T02 AT15
%X $35.00 ISBN 19218-7 232 pages
%A Marc H. Raibert
%T Legged Robots That Balance
%I MIT Press
%D 1985
%K AT15 AI07
%X $30.00 ISBN 18117-7 250 pages
%A Matthew T. Mason
%A J. Kenneth Salisbury, Jr.
%T Robot Hands and the Mechanics of Manipulation
%I MIT Press
%D 1985
%K AT15 AI07
%X $30.00 ISBN 13205-2 325 pages
%A A. Morecki
%A G. Bianchi
%A K. Kedzior
%T Theory and Practice of Robots and Manipulators
%I MIT Press
%D 1985
%K AT15 AI07
%X $45.00 ISBN 13208-7
%A Richard P. Paul
%T Robot Manipulators: Mathematics, Programming, and Control
%I MIT Press
%D 1981
%K AT15 AI07
%X $34.50 ISBN 16082-X 279 pages
%A Hideo Hanafusa
%A Hirochika Inoue
%T Robotics Research: The Second International Symposium
%I MIT Press
%D 1985
%K AT15 AI07
%X $45.00 ISBN 08151-2 500 pages
%A Michael Brady
%A Richard Paul
%T Robotics Research: The First International Symposium
%I MIT Press
%D 1984
%K AT15 AI07
%X $65.00 ISBN 02207-9 1000 pages
%A James U. Korein
%T A Geometric Investigation of Reach
%I MIT Press
%D 1985
%K AT15 AI07
%X $30.00 ISBN 11104-7 210 pages
%A Michael Brady
%A John M. Hollerbach
%A Timothy L. Johnson
%A Thomas Lozano-Perez
%A Matthew T. Mason
%T Robot Motion: Planning and Control
%I MIT Press
%D 1983
%K AT15 AI07 AI09
%X 585 pages $39.50 ISBN 02182-X
%A Robert Berwick
%T The Acquisition of Syntactic Knowledge
%D 1985
%I MIT Press
%K AT15 AI02
%X 350 pages $27.50 ISBN 02226-5
%A Michael G. Dyer
%T In-Depth Understanding:
A Computer Model of Integrated Processing for Narrative Comprehension
%D 1983
%I MIT Press
%K AT15 AI02 AI08
%X ISBN 04073-5 458 pages $37.50
%A Mitchell P. Marcus
%T A Theory of Syntactic Recognition for Natural Language
%D 1980
%I MIT Press
%K AI02 AT15
%X ISBN 13149-8 335 pages $35.00
%A Henry S. Baird
%T Model-Based Image Matching Using Location
%D 1985
%I MIT Press
%K AI06 AT15
%X ISBN 02220-6 $25.00 115 pages
%A Harold Abelson
%A Gerald Jay Sussman
%T Structure and Interpretation of Computer Programs
%D 1984
%I MIT Press
%K Scheme T01 AT15
%X ISBN 01077-1 542 pages $34.95
%A Scott E. Fahlman
%T NETL: A System for Representing and Using Real-World Knowledge
%D 1979
%I MIT Press
%K H03 AT15
%X ISBN 06069-8 278 pages $27.50
%A Ellen Catherine Hildreth
%T Measurement of Visual Motion
%D 1984
%I MIT Press
%K AT15 AI06
%X ISBN 08143-1 241 pages $32.50
%A Herbert A. Simon
%T The Sciences of the Artificial
%D 1981
%I MIT Press
%K AT15
%X ISBN 69073-X 247 pages $6.95 paper
%A Michael Brady
%A Robert C. Berwick
%T Computational Models of Discourse
%D 1983
%I MIT Press
%K AI02 AT15
%X ISBN 02183-8 $37.50 403 pages
%A Marvin L. Minsky
%T Semantic Information Processing
%D 1969
%I MIT Press
%K AT15
%X ISBN 13044-0 $35.00 440 pages
%A Eric Leifur Grimson
%T From Images to Surfaces: A Computational Study of the Human
Early Visual System
%D 1981
%I MIT Press
%K AT15 AI06 AI08
%X ISBN 07083-9 274 pages $35.00
%A Shimon Ullman
%T The Interpretation of Visual Motion
%D 1979
%I MIT Press
%K AT15 AI06
%X ISBN 21007-4 229 pages $30.00
%A John Haugeland
%T Mind Design: Philosophy, Psychology, Artificial Intelligence
%D 1981
%I MIT Press
%K AT15 AI08
%X ISBN 58052-7 368 pages $10.95 paper
%A Daniel C. Dennett
%T Brainstorms: Philosophical Essays on Mind and Psychology
%D 1980
%I MIT Press
%K AT15 AI08
%X 353 pages ISBN 54037-1 $10.00 paper
Cloth: $30.00 ISBN 04064-6
%A Zenon W. Pylyshyn
%T Computation and Cognition: Toward a Foundation for Cognitive Science
%D 1984
%I MIT Press
%K AI08 AT15
%X 320 pages $27.50 ISBN 16098-6
%A D. Bobrow
%T Qualitative Reasoning about Physical Systems - An Introduction
%B BOOK26
%K AA16
%A J. de Kleer
%A J. Seely Brown
%T A Qualitative Physics Based on Confluences
%B BOOK26
%K AA16
%A K. Forbus
%T Qualitative Process Theory
%B BOOK26
%A B. Kuipers
%T Common Sense Reasoning about Causality: Deriving Behavior From Structure
%B BOOK26
%A J. de Kleer
%T How Circuits Work
%B BOOK26
%K AA04
%A B. C. Williams
%T Qualitative Analysis of MOS Circuits
%B BOOK26
%K AA04
%A R. Davis
%T Diagnostic Reasoning Based on Structure and Behavior
%B BOOK26
%A M. R. Genesereth
%T The Use of Design Descriptions in Automated Diagnosis
%B BOOK26
%A H. Barrow
%T VERIFY: A Program for Proving Correctness of Digital
Hardware Designs
%B BOOK26
%K AA04
%A Rachel Reichman
%T Getting Computers to Talk Like You and Me:
Discourse Context, Focus and Semantics
%D 1985
%I MIT PRESS
%X 144 pages $20.00 ISBN 18118-5
%A Steven Pinker
%T Visual Cognition: An Introduction
%B BOOK27
%K AI06 AI08
%A D. D. Hoffman
%A W. A. Richards
%T Parts of Recognition
%B BOOK27
%K AI06 AI08
%A Shimon Ullman
%T Visual Routines
%B BOOK27
%K AI06 AI08
%A Roger Shepard
%A Shelley Hurwitz
%T Upward Direction, Mental Rotation, and Discrimination of Left and Right
Turns in Maps
%B BOOK27
%K AI06 AI08
%A Stephen M. Kosslyn
%A Jennifer Brunn
%A Kyle R. Cave
%A Roger W. Wallach
%T Individual Differences in Mental Imagery: A Computational Analysis
%B BOOK27
%K AI06 AI08
%A Martha J. Farah
%T The Neurological Basis of Mental Imagery: A Componential Analysis
%B BOOK27
%K AI06 AI08
%A Jon Barwise
%A John Perry
%T Situations and Attitudes
%D 1983
%I MIT PRESS
%K AI02 AI11 AI08
%X 352 pages $9.95 paper ISBN 52099-0 Cloth $27.50 ISBN 02189-7
%A N. Fuhr
%A G. E. Knorz
%T Retrieval Test Evaluation of a Rule Based Automatic Indexing
(AIR/PHYS)
%K AI01 AA14
%B Proceedings of the Third Joint BCS and ACM Symposium
%E C. J. van Rijsbergen
%I Cambridge University Press
%D 1984
%A W. S. Cooper
%T Bridging the Gap between AI and IR
%B Proceedings of the Third Joint BCS and ACM Symposium
%E C. J. van Rijsbergen
%I Cambridge University Press
%D 1984
%K AI01 AA14
%A Richard L. Derr
%T Linguistic Meaning and Language Comprehension
%J Information Processing and Management
%V 19
%N 6
%D 1983
%P 369-380
%K AI02
%A John O'Connor
%T Biomedical Citing Statements Computer Recognition and Use to Aid Full-Text Retrieval
%J Information Processing and Management
%V 19
%N 6
%P 361-368
%D 1983
%K AI02 AI14
%A Martin Dillon
%A Laura K. McDonald
%T Fully Automatic Book Indexing
%J Journal of Documentation
%V 39
%N 3
%P 135-154
%D 1983
%K AI02 AI14
%A M. R. Cutkosky
%A P. K. Wright
%T Active Control of a Compliant Wrist in Manufacturing Tasks
%R ASME Paper Number 850WA/Prod-15
%D 1985
%K AA05 AA07
%A Tony Owen
%T Assembly With Robots
%I Prentice Hall
%C Englewood Cliffs
%D 1985
%K AI07 AA05
%X $29.95
%A J. D. Gould
%A J. Conti
%A T. Hovanyecz
%T Composing Letters with a Simulated Listening Typewriter
%J Communications of the ACM
%D 1983
%V 26
%N 4
%P 295-308
%K AI05
------------------------------
End of AIList Digest
********************
∂24-Apr-86 0049 LAWS@SRI-AI.ARPA AIList Digest V4 #99
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Apr 86 00:49:10 PST
Date: Wed 23 Apr 1986 22:23-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #99
To: AIList@SRI-AI
AIList Digest Thursday, 24 Apr 1986 Volume 4 : Issue 99
Today's Topics:
Seminars - Run-Length Code for Geographical Information (SMU) &
Logic in Design (SU) &
A VLSI Architecture for Chess (SU) &
Minimal Entailment (SU) &
Chronological Ignorance (SU) &
Refutational Completeness in Theorem Proving (UTexas) &
Interpreting Logic Programs on an FFP Machine (UPenn) &
The Non-Von Project (UPenn) &
A Mathematical Theory of Plan Synthesis (SRI),
Conference - American Control Conference &
AAAI Workshop on AI and Simulation
----------------------------------------------------------------------
Date: WED, 23 APR 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Run-Length Code for Geographical Information (SMU)
A Spatial Knowledge Structure Based on Run-Length-Code for a Geographical
Information System
Speaker: Erland Jungert
Illinois Institute of Technology
Location: 315SIC, Southern Methodist University, CS
Time: 3PM
Run-Length-Code (RLC) is an example of a simple data structure used
mainly for compacting images. A method where RLC is used as an object
oriented data structure for Geographical Information Systems (GIS) will be
presented. The usage of this object structure as a basis for spatial
reasoning while regarding the RLC-objects as part of a spatial knowledge
structure will be discussed.
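For readers unfamiliar with the technique, the compaction idea behind RLC can
be sketched in a few lines. This toy encoder/decoder is an illustration only,
not Jungert's GIS object structure:

```python
def rlc_encode(row):
    """Encode a sequence of cell values as (value, run_length) pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((value, 1))              # start a new run
    return runs

def rlc_decode(runs):
    """Expand (value, run_length) pairs back into the original row."""
    return [value for value, length in runs for _ in range(length)]

# A raster row with long uniform stretches compacts well:
row = ["water"] * 4 + ["forest"] * 2 + ["water"]
encoded = rlc_encode(row)
# encoded == [("water", 4), ("forest", 2), ("water", 1)]
assert rlc_decode(encoded) == row
```

The appeal for spatial reasoning is that the runs themselves, not individual
cells, become the objects one reasons over.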
------------------------------
Date: Mon 21 Apr 86 13:35:09-PST
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Logic in Design (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Logic: Application to Design Debugging, Diagnosis, And Test
Speaker: Narinder Singh
From: Stanford University
Date: Wednesday, April 23, 1986
Time: 4:00 - 5:30
Place: Terman 556
Abstract:
Logic programming is a software engineering methodology based on techniques
from the field of Artificial Intelligence. One builds a logic program by
describing the application area of the program and its goal, rather than
specifying the actions necessary to achieve the goal. In this talk we will
examine the use of logic to represent and reason about digital devices for
simulation, test generation, and diagnosis. Describing designs in logic
permits capturing high level design descriptions, reasoning with a single
description for a collection of tasks, and reasoning with incomplete
descriptions. In addition, logic provides a flexible interpreter for reasoning
about a design, e.g., it permits reasoning forwards and backwards through a
design, and generating single or multiple answers to a goal.
Visitors welcome!
------------------------------
Date: Mon 21 Apr 86 09:49:49-PST
From: Sharon Gerlach <CSL.GERLACH@SU-SIERRA.ARPA>
Subject: Seminar - A VLSI Architecture for Chess (SU)
This Friday, April 25, at 1:30 in CIS 101:
All the Right Moves: A VLSI Architecture for Chess
Carl Ebeling
Carnegie Mellon University
Hitech, the Carnegie-Mellon chess program that recently won the ACM
computer chess championship and owns a USCF rating of 2340, owes its
success in large part to an architecture that is used for both
move generation and position evaluation. Previous programs have been
subject to a tradeoff between speed and knowledge: applying
more chess knowledge to position evaluation necessarily slows the
search. Although the previous computer chess champions, Belle and
Cray Blitz, have demonstrated the importance of deep search, it is
clear that better knowledge is required for first-rate chess. With
this new architecture, Hitech is able to search both deeply and
knowledgeably.
We will first describe the design and implementation of the move
generator which uses fine-grained parallelism to reduce the time to
produce and order moves. By generating all moves for both sides,
this move generator is able to order moves based both on capture
information and an estimate of the safety of the destination square.
This effort is rewarded by smaller search trees since the efficiency
of the alpha-beta search depends on the order in which moves are
examined. Experiments show that Hitech search trees are within a factor
of 1.5 of optimal. Although the amount of hardware required is
substantial, this architecture is eminently suited to VLSI.
We then describe the requirements of position evaluation and discuss
how this architecture can be adapted to perform evaluation. This
will include the description of a VLSI implementation that we
propose for position evaluation. Finally we will describe the other
components of the chess machine and present some performance results
that indicate how well the hardware supports the search.
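The move-ordering effect the abstract depends on is easy to see in a plain
software alpha-beta search. The sketch below is a generic textbook version,
not Hitech's hardware: when the strongest move is searched first, the alpha
bound it establishes cuts off later subtrees.

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Plain alpha-beta over a game tree given as callbacks."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:   # cutoff: remaining siblings need not be searched
                break
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# A two-ply example: searching the better move m1 first lets the search
# abandon m2 after its first refutation (m2a = 2 is already worse than 3).
tree = {"root": ["m1", "m2"], "m1": ["m1a", "m1b"], "m2": ["m2a", "m2b"]}
leaf = {"m1a": 3, "m1b": 12, "m2a": 2, "m2b": 9}
children = lambda n: tree.get(n, [])
value = lambda n: leaf[n]
assert alphabeta("root", 2, -math.inf, math.inf, True, children, value) == 3
```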
------------------------------
Date: Tue 22 Apr 86 10:09:41-PST
From: Anne Richardson <RICHARDSON@SU-SCORE.ARPA>
Subject: Seminar - Minimal Entailment (SU)
DAY: April 28, 1986
EVENT: AI Seminar
PLACE: Jordan 050
TIME: 4:15
TITLE: "Minimal Entailment and Non-Monotonic Reasoning"
PERSON: David W. Etherington
FROM: University of British Columbia
Circumstances commonly require that conclusions be
drawn (conjectured) even though they are not strictly warranted
by the available evidence.
Various forms of minimal entailment have been suggested
as ways of generating appropriate conjectures.
Minimal entailment is a consequence relation in which those
facts which hold in minimal models of a theory are considered
to follow from that theory.
Thus minimal entailment is less restrictive than the standard logical
entailment relation, which strongly constrains what evidence
may be taken as supporting a conclusion.
Different definitions of minimality of models yield different
entailment relations.
The talk will outline a variety of such relations.
Domain, Predicate, and Formula Circumscription [McCarthy 1978,
1980, 1984] are syntactic formalisms intended to capture these
relations.
We examine each from a semantic viewpoint, in the hope of
clarifying their respective capabilities and weaknesses.
Results on the consistency, correctness, and adequacy of
these formalisms will be presented.
While minimal entailment corresponds most directly to the
Closed-World Assumption
that positive information
not implicit in what is known can be assumed false
McCarthy and others have suggested applications of
circumscription to more general default reasoning tasks.
With this in mind, connections between minimal entailment and
Reiter's Default Logic will be sketched, if time permits.
In this connection, we will consider positive and negative
results due to Grosof and Imielinski, respectively.
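For small propositional theories the idea can be made concrete by brute force.
The sketch below is my own illustration, not Etherington's formalism: it
enumerates all models of a theory and keeps those whose sets of true atoms are
minimal under inclusion. Note how the non-monotonic flavor appears: r is false
in every minimal model of p ∨ q, although p ∨ q does not classically entail ¬r.

```python
from itertools import product

def models(atoms, theory):
    """All truth assignments (as sets of true atoms) satisfying every axiom."""
    return [set(a for a, v in zip(atoms, row) if v)
            for row in product([False, True], repeat=len(atoms))
            if all(axiom(dict(zip(atoms, row))) for axiom in theory)]

def minimal_models(atoms, theory):
    """Models whose set of true atoms is inclusion-minimal."""
    ms = models(atoms, theory)
    return [m for m in ms if not any(n < m for n in ms)]

atoms = ["p", "q", "r"]
theory = [lambda v: v["p"] or v["q"]]   # single axiom: p ∨ q
mm = minimal_models(atoms, theory)
# The minimal models are {p} and {q}; r holds in neither,
# so ¬r is minimally entailed (a Closed-World-style conclusion).
assert sorted(map(sorted, mm)) == [["p"], ["q"]]
assert all("r" not in m for m in mm)
```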
------------------------------
Date: 22 Apr 86 1306 PST
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Chronological Ignorance (SU)
CHRONOLOGICAL IGNORANCE:
time, knowledge, nonmonotonicity and causation
Yoav Shoham
Yale University
Thursday, May 1, 4pm
Room 380X, Mathematics Building
We are concerned with the problem of reasoning about change within a
formal system. We identify two problems that arise from practical
considerations of efficiency and naturalness of expression: the
persistence problem (otherwise known as the frame problem), and a new,
but no less evil, initiation problem. In this talk we concentrate on
the latter one.
We propose a new logic that allows efficient and natural reasoning
about change and which avoids the initiation problem. The logic,
called the logic of chronological ignorance, is a fusion of recent
ideas on temporal logic, modal logic of knowledge, and nonmonotonic
logic.
We identify a special class of theories, called causal theories, and
show these have elegant model-theoretic properties which make
reasoning about causal theories very easy.
Finally, we contrast our logic with previous work on nonmonotonic
logics in computer science, and discuss its connection to the
philosophical literature on causation.
------------------------------
Date: Tue 22 Apr 86 09:59:26-CST
From: Ellie Huck <AI.ELLIE@MCC.ARPA>
Subject: Seminar - Refutational Completeness in Theorem Proving (UTexas)
A New Method For Establishing
Refutational Completeness in Theorem Proving
Jieh Hsiang
SUNY at Stony Brook
April 25 - 10:00am
Echelon I, Room 409
In this talk we present a new technique for establishing completeness
of refutational theorem proving strategies. Our method employs
semantic trees and, in contrast to most of the semantic tree methods,
is based on proof-by-refutation as opposed to proof-by-induction.
Thus, it works well on transfinite semantic trees (to be introduced)
as well as on finite ones. This method is particularly useful for
proving the completeness of the following strategies (without the need
of the functionally reflexive axioms):
Resolution + oriented paramodulation
P1-resolution + oriented paramodulation
Resolution with ordered predicates + oriented paramodulation
using clauses only containing the equality predicate
A version of an unfailing Knuth-Bendix algorithm
The EN-Strategy, a complete refutational method for first
order theory with equality based on the term rewriting method
The Manna-Waldinger Tableau method with inference rules for
special relations, where oriented paramodulation is an
improvement of paramodulation.
------------------------------
Date: Tue, 22 Apr 86 11:44 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Interpreting Logic Programs on an FFP Machine (UPenn)
University of Pennsylvania Colloquium
11:00am April 23, 1986 - 216 Moore School
"INTERPRETING LOGIC PROGRAMS ON AN FFP MACHINE"
Bruce T. Smith
University of North Carolina - Chapel Hill
This talk describes a strategy for interpreting logic programs (e.g. Prolog) on
Gyula Mago's FFP Machine. The FFP Machine is a small-grain parallel computer
designed to interpret Backus' FFP language. The question is how to fit logic
programs into the FFP Machine's string reduction style of operation without
losing potential parallelism. In each machine cycle, the FFP Machine
partitions itself into a set of virtual MIMD computers-- one for each innermost
FFP application. These virtual computers work independently to re-write their
FFP expressions.
In contrast with the standard approach to parallelism in logic programming,
i.e. communicating processes cooperating to search an AND/OR tree, this
approach represents the search as an FFP sequence and searches by creating
appropriate reductions that re-write sub-trees. OR-parallelism is provided by
expanding different branches of the tree. AND-parallelism is provided by
creating virtual computers that perform unification (by a version of the
Martelli and Montanari algorithm) over sets of conjoined goals.
------------------------------
Date: Wed, 23 Apr 86 16:24 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - The Non-Von Project (UPenn)
Colloquium - University of Pennsylvania
3:00pm 4-24-86, 216 Moore
THE NON-VON PROJECT: EXPERIMENTS WITH MASSIVELY PARALLEL MACHINES
David Elliott Shaw
Columbia University
NON-VON is a massively parallel non-von Neumann machine that has been shown to
support the extremely rapid execution of a wide range of computationally
intensive symbolic information processing tasks, including a number of
artificial intelligence applications. An early prototype called NON-VON 1,
which implements some, but not all of the features of the full architecture, is
presently operational at Columbia University.
Central to the NON-VON architecture is an active memory which is implemented
using custom VLSI chips, each containing eight 8-bit small processing elements.
A full-scale machine would contain hundreds of thousands of small processing
elements, together with several hundred large processing elements, each based
on a conventional 32-bit microprocessor. NON-VON's processing elements are
physically interconnected in three ways, and can be dynamically reconfigured to
support a fourth logical communication topology. The machine is capable of
synchronous (SIMD), asynchronous (MIMD) and partitioned (multiple SIMD)
execution.
In this presentation, Professor Shaw will describe the organization of
NON-VON and its programming techniques. Performance results in the areas of low- and
intermediate-level computer vision, database and knowledge base management, and
AI production systems will be presented.
------------------------------
Date: Wed 23 Apr 86 14:42:11-PST
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - A Mathematical Theory of Plan Synthesis (SRI)
TOWARD A MATHEMATICAL THEORY OF PLAN SYNTHESIS
Edwin P.D. Pednault (PEDNAULT@SRI-AI)
Stanford University and SRI International, AI Center
11:00 AM, MONDAY, April 28
SRI International, Building E, Room EJ228 (new conference room)
Classical planning problems have the following form: given a set of
goals, a set of allowable actions, and a description of the current
state of the world, find a sequence of actions that will transform the
world from its current state to a state in which all of the goals are
satisfied. This talk is a presentation of my thesis research, which
examines the question of how to solve such problems automatically.
The question will be addressed from a rigorous, mathematical
standpoint, in contrast to the informal and highly experimental
treatments found in most previous work. By introducing mathematical
rigor, it has been possible to unify many existing ideas in automatic
planning, showing how they arise from first principles and how they
may be applied to solve a much broader class of problems than had
previously been considered. In addition, a number of theorems have
been proved that further our understanding of the synthesis problem,
and a language has been developed for describing actions that combines
the notational convenience of STRIPS with the expressive power of the
situation calculus.
This talk will concentrate on my techniques for plan synthesis with
only a brief summary of the other contributions of my research.
A mathematical framework will be introduced, along with a number of
theorems that form the basis for the synthesis techniques.
These theorems will then be combined with a least-commitment search strategy
to obtain a solution method that unifies and generalizes means-ends
analysis, opportunistic planning, goal protection, goal regression,
constraint posting/propagation, hierarchical planning, and nonlinear
planning.
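The classical problem statement in the first paragraph can be made concrete
with a tiny forward-search planner over STRIPS-style actions. This brute-force
sketch is my illustration of the problem definition only, not Pednault's
formalism (which is precisely about doing better than blind search):

```python
from collections import deque

def plan(initial, goals, actions):
    """Breadth-first search for an action sequence achieving all goals.
    States are frozensets of facts; each action is
    (name, preconditions, add_list, delete_list)."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goals <= state:                       # all goals satisfied
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                     # action applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None                                  # no plan exists

# A toy domain (names are assumptions for illustration):
actions = [
    ("pickup",  {"onfloor", "handempty"}, {"holding"}, {"onfloor", "handempty"}),
    ("putdown", {"holding"}, {"ontable", "handempty"}, {"holding"}),
]
assert plan({"onfloor", "handempty"}, {"ontable"}, actions) == ["pickup", "putdown"]
```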
------------------------------
Date: 23 Apr 1986 23:53:49 EST
From: ALSPACH@USC-ISI.ARPA
Subject: Conference - American Control Conference
Write to me at this address for registration and housing reservation forms
for the 1986 ACC (American Control Conference) described in a previous
message.
------------------------------
Date: 23 Apr 1986 1029-PST
From: STELZNER@ALADIN
Subject: Conference - AAAI Workshop on AI and Simulation
AAAI Workshop on AI and Simulation
Simulation shares with AI a concern for effective world modeling.
Currently, some computer scientists are trying to build systems that
integrate the strengths of simulation and AI---AI's ability to represent
complex system models and to reason over those models, and simulation's
ability to model dynamic behavior. This initial work is already
being realized in commercial products. We intend that this workshop
bring together researchers in knowledge-based simulation, tool builders
who are developing simulation systems that combine AI and classical
techniques, and system designers who have built AI-based simulation
applications.
Topics to be discussed
Expert reasoning in simulation
Scenario construction for expert systems
Integration of AI techniques with conventional simulations
Graphical representation for simulation
Application of new hardware architectures for simulation
AI-based simulation tools
Knowledge representation formalisms for simulation
Simulation at multiple levels of abstraction
Automatic analysis of simulation results
Organizers
The workshop organizers are Arthur Gerstenfeld (Worcester
Polytechnic Institute), Richard B. Modjeski (U.S. Army Concepts Analysis
Agency), Ramana Reddy (West Virginia University and the Robotics
Institute, Carnegie-Mellon University) and Marilyn Stelzner, Chair
(IntelliCorp)
Participation
The workshop will take place on Monday, August 11 at the University of
Pennsylvania. Participation in the workshop is by invitation, limited to 35
participants. People wishing to be invited should submit five copies of
a 1000-word abstract describing their work in AI and simulation to the
workshop Chair, Marilyn Stelzner, IntelliCorp, 1975 El Camino Real West,
Mountain View, California 94040 by May 30, 1986. Invitations will be issued
by July 1.
------------------------------
End of AIList Digest
********************
∂24-Apr-86 0310 LAWS@SRI-AI.ARPA AIList Digest V4 #100
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 24 Apr 86 03:10:36 PST
Date: Wed 23 Apr 1986 22:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #100
To: AIList@SRI-AI
AIList Digest Thursday, 24 Apr 1986 Volume 4 : Issue 100
Today's Topics:
Queries - Parallel Languages & Tutoring Systems &
Object-Oriented Support For Common Lisp & LISP Coding Standards,
Methodology - LISP Coding Standards & String Reduction & Shape,
Comments - Use of the Xerox Name & Search,
Philosophy - Consciousness,
Review - OpEd Seminar
----------------------------------------------------------------------
Date: Mon 21 Apr 86 09:09:03-EST
From: Michael van Biema <MICHAEL@CS.COLUMBIA.EDU>
Subject: Parallel Languages:
The Dado project at Columbia is in the process of preparing a paper in
which we hope to give a taxonomy of parallel programming languages.
We ask that you be so kind as to send us any papers on languages that
you may have implemented or that you are designing. This would be not
only very helpful to us, but useful to the community as well.
If your language is being designed to run on a particular architecture
please include a description of the particular architectural features
of the machine. Also, if you could briefly describe to us:
1) The state of development of your language.
2) The intended application area.
3) The intended or current user community.
4) Your thoughts on the current state of parallel language design.
What particular problems does your language address?
If you do not have time for the survey questions please do send a copy
of the papers or even just references to them! Thank you for your
time and we look forward to sending you a copy of this survey,
Michael van Biema
Columbia University
Dept. of Computer Science
New York, N.Y. 10027
------------------------------
Date: 23 Apr 1986 12:41-EST
From: Eswaran.Subrahmanian@H.CS.CMU.EDU
Subject: Tutoring Systems
I am currently creating a bibliography of computer aided tutoring
systems. I would like references to literature on both AI based
and non-AI based systems. I will be glad to send a copy of the
compiled bibliography to anybody who wants one.
Thanks in advance
Eswaran Subrahmanian
ARPA: eswaran@h.cs.cmu.edu.arpa
Postal: Eswaran Subrahmanian
DH 226 Design Research Center
Carnegie Mellon University
Pittsburgh Pa 15213.
------------------------------
Date: 18 Apr 86 20:14:56 GMT
From: ihnp4!houxm!whuxl!whuxlm!akgua!gatech!seismo!umcp-cs!aplcen!jhunix
!ins←amrh@ucbvax.berkeley.edu
Subject: LISP coding standards
Is anyone aware of any official LISP coding standards comparable to
the standards for Pascal, Ada, etc? Folks at my new employer have
been looking...
-Marty Hall.
Arpa: (preferred) hall@hopkins
CSnet: ("") hall.hopkins@csnet-relay
uucp: seismo!umcp-cs!jhunix!ins←amrh
allegra!hopkins!jhunix!ins←amrh
------------------------------
Date: 19 Apr 86
From: "Jennings, Richard" <jennings@lll-icdc.ARPA>
Subject: Object Oriented Support For Common Lisp
I am working on a project trying to couple a good programming
environment exploiting object oriented paradigms to a grid of
INMOS Transputers. Rather than build up everything from the
OCCAM development system, I would like to use the VAX LISP (a
variant of Common Lisp) environment augmented with a public
domain (preferably) object oriented package as a model for the
system I intend to build for the Transputers.
1) I would like pointers to environments which are compatible
(sit on top of) VAX LISP which directly support object oriented
programming;
2) notes from those who may be working on (or interested in)
such projects; and
3) responses sent directly to me since I do not have regular
access to AILIST. I will summarize.
Richard Jennings
PO Box 808 L-228 (L-228 is CRITICAL)
LLNL
Livermore, CA 94550
ARPA: preferred -> jennings%icdc@lll-crg
slow, reliable -> jennings@lll-crg
(INMOS is a company which has probably trademarked OCCAM and
TRANSPUTER)
------------------------------
Date: 22 Apr 86 16:03:00 GMT
From: pur-ee!uiucdcs!uiucdcsp!bsmith@ucbvax.berkeley.edu
Subject: Re: LISP coding standards
Look at Guy Steele's book Common Lisp. All Lisps seem to be going
in this direction, implementing as large a subset of Common Lisp as
is possible. It's my understanding that Symbolics will be releasing
a new version of its system that will default to Common Lisp this
summer (instead of having to specify it in the mode line).
------------------------------
Date: 22 Apr 86 03:17:10 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!cit-vax!alfke@ucbvax.berkeley.edu
(J. Peter Alfke)
Subject: Re: String reduction
Organization : California Institute of Technology
Keywords:
In article <994@umn-cs.UUCP> amit@umn-cs.UUCP (Neta Amit) writes:
>In article <1031@eagle.ukc.ac.uk> sjl@ukc.ac.uk (S.J.Leviseur) writes:
>>Does anybody have any references to articles on string reduction
>>as a reduction technique for applicative languages (or anything
>>else)? They seem to be almost impossible to find! Anything welcome.
>
>String reduction as a model of computation was suggested by
>A.A.Markov, in his 1954(?) paper, and is proved to be equivalent in
>power to the other two general models of computation (Turing machine and
>the Lambda Calculus).
This sounds similar to Calvin Mooers' TRAC language of the mid-sixties. That
language was based entirely on macro expansion; rather strange, but actually
a lot more powerful than the toy it first appeared to be.
There was also a language called SAM76 that showed up in 1976, that seemed
close enough to TRAC to warrant a lawsuit. It seemed identical in concept,
with only minor differences in syntax and function-names.
TRAC is pretty easy to implement; I have an incomplete version written in
C that I did some years back. I also have a paper on TRAC which is probably
long out of print by now.
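Markov's string-rewriting model mentioned above is simple enough to sketch as
a tiny interpreter; this toy (my illustration, in informal notation rather
than Markov's) repeatedly rewrites the leftmost match of the first applicable
rule:

```python
def markov(rules, s, limit=1000):
    """Run a Markov normal algorithm: rules are (lhs, rhs, terminal) triples.
    Each step rewrites the leftmost occurrence of the first matching lhs;
    a terminal rule stops immediately; halting occurs when no rule matches."""
    for _ in range(limit):
        for lhs, rhs, terminal in rules:
            i = s.find(lhs)
            if i >= 0:
                s = s[:i] + rhs + s[i + len(lhs):]
                if terminal:
                    return s
                break
        else:
            return s            # no rule applies: the algorithm halts
    raise RuntimeError("rewrite limit exceeded")

# Classic example: convert a binary numeral to unary tally marks.
rules = [
    ("|0", "0||", False),   # double the marks when passing a zero
    ("1",  "0|",  False),   # a one contributes a mark
    ("0",  "",    False),   # erase leftover zeros
]
assert markov(rules, "110") == "||||||"   # binary 6 -> six marks
```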
--Peter Alfke
alfke@csvax.caltech.edu
------------------------------
Date: Tue, 22 Apr 86 15:14:13 est
From: franc%UPenn-Grasp%upenn.csnet@CSNET-RELAY.ARPA
Subject: shape
>Jerry Hobbs has asked me "What is a hook and what is a ring that we know
>the ring can hang on the hook?" More specifically, what do we have to
>know about hooks and rings in general (for default reasoning) and
>about a particular hook-like object and ring-like object (dimensions,
>radius of curvature, surface normals, clearances, tolerances, etc.)
>in order to say whether a particular ring may be placed on a particular
>hook and whether it is likely to stay in place once put there.
We believe that the ability to model categories or generic objects
would make questions like this easier.
We have approached the problem of category shape representation in the
context of model-based object recognition, i.e., "how can a computer vision
system recognize different coffee cups based on a single category model of a
coffee cup?" Given that the most important common property of objects in a
category is their function, the shape of categorically related objects must
satisfy the same functional constraints. By analysing these constraints we
try to come up with a prototypical shape, and a set of allowable
deformations that account for variations within the category.
I have started thesis work on this topic recently with Dr. Ruzena Bajcsy.
Franc Solina
GRASP Laboratory
University of Pennsylvania
csnet: Franc@Upenn
------------------------------
Date: 22 Apr 86 08:51:55 PST (Tuesday)
From: McNelly.OsbuSouth@Xerox.COM
Subject: Re: Compuscan
When you say you "used originals and text that had been Xeroxed several
times," do you mean that you copied it on a Xerox copier? Or do you
mean that some sales rep in New York Xeroxed the text into an 820 PC,
then Xeroxed it across the country to the corporate office in Los
Angeles, where the manager Xeroxed all the sales reports together on a
Xerox 6085 (Daybreak) workstation, and then finally Xeroxed that
document to a Xerox 8040 (Raven) laser printer before Xeroxing the
document to a Xerox 8030 File Service for storage?
As someone who wears an "Office Manager's hat," you should know better
than to use Xerox as a verb...
John McNelly
Member Programming Staff
Information Systems Div, Xerox Corp.
------------------------------
Date: Fri, 18 Apr 86 09:21 PST
From: Tom Garvey <GARVEY@SRI-AI.ARPA>
Subject: Re: Non-trivial expert systems
I wish you amateur AI guys out there wouldn't try to exalt your own
understanding of the field by making snide, offhand remarks about its
founders: "(*sigh* - Can you tell my first AI course was taught out of
Nilsson's PROBLEM-SOLVING METHODS IN ARTIFICIAL INTELLIGENCE? Nilsson
thought all AI reduced to search.)" Just because you wrote a faulty
program that had no control over its search space is no reason to
conclude (as you apparently do) that search is not an appropriate method
for solving the problem. I would agree with Nilsson that search is a
pervasive aspect of most AI problems -- it is precisely the determinism
of most expert systems that makes them uninteresting from an AI
perspective.
Cheers,
Tom
------------------------------
Date: Fri 18 Apr 86 09:53:30-PST
From: Stephen Barnard <BARNARD@SRI-IU.ARPA>
Subject: performance considered insufficient
Are viruses conscious? How about protozoa, mollusks, insects, fish,
reptiles, and birds? Certainly some mammals are conscious. How about
cats, dogs, and chimpanzees? Does anyone maintain that homo sapiens
is the only species with consciousness?
My point is that consciousness is an emergent phenomenon. The more
complex the nervous system of an organism, the more likely one is to
ascribe consciousness to it. Computers, at present, are too simple,
regardless of performance. I would have no problem believing a
massively parallel system with size and connectivity of biological
proportions to be conscious, provided it did interesting things.
------------------------------
Date: Mon, 21 Apr 86 11:43:18 est
From: Nigel Goddard <goddard@rochester.arpa>
Reply-to: goddard@rochester.UUCP (Nigel Goddard)
Subject: Re: One more little thing
In article <8604152029.AA07125@bucsd.ARPA> tes@bostonu.CSNET writes:
>
>Nigel Goddard wrote in Volume 4 Issue 87
>
>> I meet [people] who I consider to be very "unconscious",
>> i.e. their stated explanations of their motives and actions
>> seem to me to completely misunderstand what I consider to
>> be the *real* explanations.
>
>What, by Jove, is a "*real* explanation" ??????????????????????
>I can't digest my food properly until I find out.
>
> Tom Schutz
> CSNET: tes@bu-cs
A *real* explanation is an explication of MY internal model, as opposed to
someone else's internal model. I trust you will suffer no longer.
Nigel Goddard
------------------------------
Date: Wed, 23 Apr 86 20:35:28 WST
From: munnari!wacsvax.oz!marke@seismo.CSS.GOV (Mark Ellison)
Reply-to: wacsvax!marke@seismo.CSS.GOV (Mark Ellison)
Subject: Re: More wrangling on consciousness
In article <8604180725.AA11124@ucbvax.berkeley.edu> "CUGINI, JOHN"
<cugini@nbs-vms.arpa> writes:
>At the technical level, I think it's simply wrong to dismiss
>brains as a criterion for consciousness - if mechanism M
>causes C (consciousness) and enables P (performance), then
>clearly it is an open question whether something that can do P,
>but does not have M, does or does not have C.
Mechanism M causes C? You know many people who (may) have brains, and
you have no DIRECT evidence that they are conscious.
You only have direct evidence of one case of C (barring ESP, etc.),
and no DIRECT evidence of that person's brain.
Except for the performances in each case.
>At the "gut" level I think the whole tenor of the reply misses
>the point that consciousness is a very "low-level", primitive
>sort of phenomenon. Do severely retarded persons have "the
>ability to learn to understand the *real* reasons for their
>actions...an ability to abstract and to make an internal model
>of the self" ? or cows, or cats? Yet no one, I hope, doubts
>that they are conscious (eg, can feel pain, experience shapes,
>colors, sounds).
We only know of their ability to feel pain, experience shapes, colors,
sounds, etc., by their reactions to those stimuli. In other words,
by their performance. But on the other hand their performance might
not involve abstract statements.
>This has very little to do with any clever
>information processing capabilities. And it is these "raw
>feelings" that a) are essential to what most people mean by
>consciousness and b) seem least susceptible to implementation by
>Lisp machines, regardless of size.
I would argue that "raw feelings" in others are known only by their
performance. In effect we egomorphise (I don't know the right word;
I mean something like anthropomorphise with regard to oneself) them.
And we (some of us) do the same to machines, if not so seriously.
`The <machine> is really struggling today.'
`The process is tired (niced).'
One criterion that I have not seen yet proposed is the following.
It is more useful to pretend that people are conscious than not.
They tend to cause you less pain, and are more likely to do what you want.
So I'll believe someone's 8600 or Cray is conscious if it works better,
according to whatever criteria I have for that at the moment, when I so
believe.
---
Mark Ellison lambda f . (lambda x . f x x) (lambda x . f x x)
Department of Computer Science, CSNet: marke@wacsvax.oz
University of Western Australia, ARPA: marke%wacsvax.oz@seismo.edu.gov
Stirling Highway, UUCP: ..!seismo!munnari!wacsvax!marke
Nedlands, Western Australia, 6009.
PHONE: (09) 380 2305 OVERSEAS: +61 9 380 2305
------------------------------
Date: Tue 22 Apr 86 12:55:27-CST
From: Aaron Temin <CS.Temin@R20.UTEXAS.EDU>
Subject: seminar reviews
Ken -
Given that ailist posts seminar announcements, I would like to
encourage folks who attend the seminar to post summaries/critiques/reviews
of them. Then we see the differences between what the researcher
hoped to accomplish v. what really seems to exist. I append a short
review of Sergio Alvarado's talk on his OpEd system, which I
just returned from hearing.
----
This is a review of the seminar given by Sergio Alvarado on his OpEd
system at the Univ. of Texas on 22 April. The announcement and
abstract have been posted to ailist previously.
The nub of the talk seemed to be that one is interested in
understanding arguments and the supports (beliefs) for the arguments.
The domain is letters to the editor. A belief seems to be an atomic
entity that implies another belief or supports an argument.
There are various arguments (strategies?), about 30 altogether.
Alvarado calls the arguments ArgumentUnits, and an
editorial parses into an argument-graph.
The system parses English text into argument graphs, and can
"answer" questions from this. The example text was a short
(10 sentence) paragraph from an '82 letter by Milton Friedman
about import/export and the steel and automobile industries.
It parsed into two argument units and about 8 beliefs (though
in some cases there is "double counting" -- belief1 might be
"Friedman believes tariffs are good" and belief2 is just
the opposite, "Reagan believes tariffs are bad" or whatever, as
the argument is modelled as having two opposing views).
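For concreteness, a toy sketch of the structure as I understood it
(all names here are my own invention, not Alvarado's actual
representation):

```python
# Hypothetical sketch: beliefs are atomic nodes, and an ArgumentUnit
# pairs two opposing beliefs on a single issue, as in the Friedman
# example above.  Not Alvarado's actual data structures.

class Belief:
    def __init__(self, holder, claim):
        self.holder = holder   # who holds the belief
        self.claim = claim     # the proposition believed

class ArgumentUnit:
    """One point of contention, modelled as two opposing views."""
    def __init__(self, issue, view, opposing_view):
        self.issue = issue
        self.view = view
        self.opposing_view = opposing_view

def who_believes(graph, claim):
    """Crude question answering: find holders of a claim in the graph."""
    return [belief.holder
            for unit in graph
            for belief in (unit.view, unit.opposing_view)
            if belief.claim == claim]

# Two opposing beliefs in one argument unit -- the "double counting".
b1 = Belief("Friedman", "tariffs are good")
b2 = Belief("Reagan", "tariffs are bad")
graph = [ArgumentUnit("import tariffs", b1, b2)]

print(who_believes(graph, "tariffs are bad"))   # -> ['Reagan']
```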
Alvarado hinted that there was another sample text but we
didn't see it. He didn't do any of the obvious extensions
e.g. using real-world knowledge of facts external to the
paragraph to make inferences about the argument.
I couldn't see much difference between this and the previous
work done by folks on understanding legal arguments.
-Aaron
------------------------------
End of AIList Digest
********************
∂26-Apr-86 0132 LAWS@SRI-AI.ARPA AIList Digest V4 #101
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Apr 86 01:32:10 PST
Date: Fri 25 Apr 1986 22:51-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #101
To: AIList@SRI-AI
AIList Digest Saturday, 26 Apr 1986 Volume 4 : Issue 101
Today's Topics:
Bibliography - References #1
----------------------------------------------------------------------
Date: 9 Apr 86 13:20:13 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Bibliography - References #1
From Andy Cheese, CS Department, Nottingham University, UK:
ABDA76a
Abdali S.K.
An Abstraction Algorithm for Combinatory Logic
Journal of Symbolic Logic Vol 41, Number 1, March
1976
ABEL85a *
Abelson H. & Sussman G.J. with Sussman J.
Structure and Interpretation of Computer Programs
MIT Press
1985
ABEL?
Abelson H. & Sussman G.J.
Computation: An Introduction to Engineering Design
Massachusetts Institute of Technology, U.S.A.
ABEL?
Abelson H. & Sussman G.J.
Scheme Demonstration Programs for Structure and Interpretation of
Computer Programs
Massachusetts Institute of Technology, U.S.A.
ABRA82a
Abramsky S.
SECD-M - A Virtual Machine for Applicative Multiprogramming
Computer Systems Lab, Queen Mary College, Nov 82
ABRA82b
Abramson H.
Unification-Based Conditional Binding Constructs
TR 82-7, Department of Computer Science,
Univ of British Columbia, Canada
August 1982
ABRA83a
Abramsky S.
On Semantic Foundations For Applicative Multiprogramming
Computer Systems Lab, Queen Mary College, 1983
ABRA83b
Abramson H.
A Prological Definition of HASL, a Purely Functional Language With Unification
Based Conditional Binding Expressions
TR 83-8, Department of Computer Science,
Univ of British Columbia, Canada
July 26, 1983
ABRI85a
Abrial J.R.
Programming as a Mathematical Exercise
in HOA85a
1985
ACK79a
Ackerman W.B. & Dennis J.B.
VAL - Preliminary Reference Manual
MIT Laboratory for Computer Science, June 79
AIDA84a
Aida H. & Moto-oka T.
Performance Measurement of Parallel Logic Programming System "Paralog"
Dept. of Electrical Eng., University of Tokyo
ALEX85a *
Alexandridis N.A. & Bilalis N.A. & Tsanakas P.D.
Using Functional Programming For Hierarchical Structures in Image Processing
in Digital Techniques in Simulation, Communication and Control (IMACS)
(ed Tzafestas S.G. )
pp 175-181
North Holland
1985
ALLI85a *
Allison L.
Programming Denotational Semantics II
Computer Journal, Vol 28, no 5, pp 480-486
1985
AMAM82a
Amamiya M. & Takahashi N. & Naruse T. & Yoshida M.
A Data Flow Processor Array System for Solving Partial Differential Equations
Int. Symp. on Applied Mathematics and Information Science, March 1982
ARBI75a
Arbib M.A. & Manes E.G.
Arrows, Structures and Functors : The Categorical Imperative
Academic Press
1975
ARVI78a
Arvind & Gostelow K.P. & Plouffe W.
An Asynchronous Programming Language and Computing Machine
Dept. of Information and Computer Science, Tech Rep 114A
University of California Irvine, December 1978
ARVI83a
Arvind & Dertouzos M.L. & Iannucci R.A.
A Multiprocessor Emulation Facility
MIT Lab for Computer Science Technical Report 302
October 1983
ARVI84a
Arvind & Brock J.D.
Resource Managers in Functional Programming
Journal of Parallel and Distributed Computing 1, 5-21
1984
ARVI84b
Arvind & Kathail V. & Pingali K.
Sharing of Computation in Functional Language Implementations
Lab for Computer Science Tech Rep ??? (sic), 24 July
1984
ASHC76a *
Ashcroft E.A. & Wadge W.
Lucid - A Formal System For Writing and Proving Programs
SIAM J on Computing Vol 5 no 3, 1976
pp 336-354
1976
ASHC77a
Ashcroft E.A. & Wadge W.W.
LUCID, a Non-Procedural Language with Iteration
CACM Vol 20 No 7 p519-526 July 1977
ASHC82a
Ashcroft E.A. & Wadge W.W.
R for Semantics
ACM TOPLAS Vol 4 No 2 p283-294 April 1982
ASHC83a
Ashcroft E.A.
Proposal for a Demand-Driven Tagged Dataflow Machine
SRI Document Sept 1983
ASH85a
Ashcroft E.A.
Eazyflow Architecture
SRI Technical Report CSL-147, April 1985
ASH85b
Ashcroft E.A.
Ferds--Massive Parallelism in Lucid
Document
1985
ASH85c
Ashcroft E.A. & Wadge W.W.
The Syntax and Semantics of Lucid
SRI Technical Report CSL-147
April 1985
ASO84a
Aso M.
Simulator of XP's
ICOT Research Center, Technical Report TR-041
January 1984
ATKI83a *
Atkinson M.P & Bailey P.J. & Chisholm K.J. & Cockshott P.W. & Morrison R.
"An Approach to Persistent Programming"
The Computer Journal,Vol.26,No.4, pp 360-365
1983
AUGU84a *
Augustsson L.
A Compiler for Lazy ML
Proceedings of 1984 ACM Symposium on LISP and Functional Programming,
Austin, Texas
pp 218-227
August 1984
AZAR85a *
Azari H. & Veler Y.
Functional Language Directed Data Driven machine
Microprocessing and Microprogramming 16, pp 127-132
September/October 1985
BACK74a *
Backus J.
Programming Language Semantics and Closed Applicative Languages
ACM Symposium on Principles of Programming Languages, 1974
pp 71-86
1974
BACK78a *
Backus J.
Can Programming Be Liberated from the von Neumann Style?
CACM Vol 21 No 8 p613-641 Aug 1978
BACK79a
Backus J.W.
On Extending The Concept Of Program And Solving Linear Functional Equations
Draft Paper Distributed at Summer Workshop on Programming Methodology,
University of California at Santa Cruz, August 1979
BACK81a
Backus J.W.
The Algebra of Functional Programs: Function Level Reasoning, Linear
Equations, and Extended Definitions
In "Formalization of Programming Concepts", LNCS 107
Springer Verlag
April 1981
BAKE78a
Baker, Henry B., Jr.
List Processing in Real Time on a Serial Computer
CACM 21 no 4, pp 280-294, 1978
BAKE78b
Baker H.G.
Actor Systems for Real Time Computation
MIT Laboratory for Computer Science, MIT/LCS/TR-197, March 1978
BAKK76a *
Bakker J.W. De
Semantics and Termination of Nondeterministic Recursive Programs
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 435-477
Edinburgh University Press, 1976
BAKK79a *
Bakker J.W. De & Zucker J.I.
Derivatives of Programs
mathematisch centrum iw 116/79
1979
BAKK80a
Bakker J.De
Mathematical Theory of Program Correctness
Prentice Hall International Series in Computer Science, 1980
BARA85a *
Barahona P. & Gurd J.R.
Processor Allocation in a Multi-Ring Dataflow Machine
Dept of Comp Sci, Univ of Manchester, Technical Report UMCS-85-10-3
1985
BARB84a *
Barbuti R. & Bellia M. & Levi G. & Martelli M.
On the Integration of Logic Programming and Functional Programming
IEEE 1984 International Symposium on Logic Programming, pp 160-167
6 February 1984
BARE81a
Barendregt H.P.
The Lambda Calculus, Its Syntax and Semantics
North Holland 1981
BARR85a *
Barringer H.
Up and Down the Temporal Way
Dept of Comp Sci, Univ of Manchester, Technical Report UMCS-85-9-3
September 4, 1985
BCS86a *
British Computer Society Reading Branch Parallel Processing Seminar,
Proceedings
Tuesday 21st January 1986
BELL80a *
Bellia M & Degano P. & Levi G.
A Functional Plus Predicate Logic Programming Language
Proceedings of the Logic Programming Workshop, 14 July 1980
pp 334-347
1980
BERG79a *
Bergstra J.A. & Tucker J.V.
Algebraic Specifications of Computable and Semi-Computable Data Structures
mathematisch centrum iw 115/79
1979
BERG79b *
Bergstra J.A. & Tiuryn J. & Tucker J.V.
Correctness Theories and Program Equivalence
mathematisch centrum iw 119/79
1979
BERG79c *
Bergstra J.A. & Tucker J.V.
A Characterisation of Computable Data Types By Means of a Finite, Equational
Specification Method
mathematisch centrum iw 124/79
1979
BERG81a *
Bergstra J.A. & Tucker J.V.
Hoare's Logic and Peano's Arithmetic
Mathematisch Centrum iw 160/81
1981
BERK75a *
Berkling K.
Reduction Languages For Reduction Machines
Proc. 2nd Int. Symp. on Comp. Arch., pp 133-140
also available as an extended version as GMD Tech Rep ISF-76-8
14 September 1976
1975
BERK76a *
Berkling K.J.
A Symmetric Complement To The Lambda Calculus
GMD Tech Rep ISF-76-7
14 September 1976
BERK82a *
Berkling K.J.
A Consistent Extension of the Lambda-Calculus as a Base for Functional
Programming Languages
Information and Control, vol 55, nos 1-3 oct/nov/dec 1982, pp 89-101
Academic Press
1982
BERL84a
Berliner H. & Goetsch G.
A Quantitative Study of Search Methods and the Effect of
Constraint Satisfaction
CMU-CS-84-147
Dept of Comp Sci, Carnegie-Mellon Univ.
July 1984
BERN80a *
Bernstein A.J.
Output Guards and Nondeterminism in Communicating Sequential Processes
ACM Transactions on Programming Languages and Systems, Vol 2, No 2,
pp 234 - 238
April 1980
BERR77a *
Berry G. & Levy J-J.
Minimal and Optimal Computations of Recursive Programs
4th ACM Symposium on Principles of Programming Languages
pp 215-226
1977
BERT84a
ed. Bertolazzi P
VLSI: Algorithms and Architectures
North Holland 1984
BETZ85a *
Betz D.
XLISP: An Experimental Object Oriented Language Version 1.4
January 1, 1985
BIC85a *
Bic L.
Processing of Semantic Nets on Dataflow Architectures
Artificial Intelligence 27
pp 219 - 227
1985
BIRD76a
Bird R.S.
Programs & Machines- An Introduction to the Theory of Computation
Wiley 1976
BIRD83a
Bird R.S.
Some Notational Suggestions for Transformational Programming
Tech Rep no 153, Univ. of Reading, 1983
BIRD84a
Bird R.S.
Using Circular Programs to Eliminate Multiple Traversals of Data
Acta Informatica Vol21 Fasc 3 1984 p239-250
BISH77a
Bishop P.B.
Computer Systems with a Very Large Address Space and Garbage Collection
MIT Laboratory for Computer Science, MIT/LCS/TR-178, May 1977
BOBR80a
Bobrow D.G.
Managing Reentrant Structures Using Reference Counts
ACM Trans. on Programming Languages and Systems, 2, no 3, pp 269-273
1980
BOHM81a *
Bohm A.P.W. & Leeuwen J. Van
A Basis for Dataflow Computing
Dept of Computer Science, Univ of Utrecht, Tech Rep RUU-CS-81-6
1981
BOHM85a *
Bohm A.P.W. & Gurd J.R. & Sargeant J.
Hardware and Software Enhancement of the Manchester Dataflow Machine
Document, Dept of Computer Science, Univ. of Manchester
BORN81a *
Borning A. & Bundy A.
Using Matching in Algebraic Equation Solving
Dept of Comp Sci, Univ of Washington, Technical Report No. 81-05-01
May 1981
BOSS84a *
Bossi A. & Ghezzi C.
Using FP As A Query Language For Relational Data-Bases
Computer Languages, Vol 9, No 1, pp 25-37
1984
BOWE79a *
Bowen K.A.
Prolog
Proceedings of the Annual Conference of the ACM 1979
pp 14-23
1979
BOW81a
Bowen D.L.
Implementation of Data Structures on a Data Flow Computer
PhD Thesis, Dept of Comp Sci, Univ. of Manchester, April 1981
BOWE85a
Bowen K.A.
Meta-Level Programming and Knowledge Representation
New Generation Computing, Vol 3, No 4, pp 359-383
1985
BOYE75a
Boyer R.S. & Moore J.S.
Proving Theorems about LISP Functions
JACM Vol 22,No. 1, p129-144
BRAI83
Brain S.
The Transputer-"exploiting the opportunity of VLSI"
Electronic Product Design, December 1983
BRAI84a
Brain S.
Applying the Transputer
Electronic Product Design, January 1984
BRAI84b
Brain S.
Writing Parallel Programs in OCCAM
Electronic Product Design, Sept 1984
BRAM84a *
Bramer M. & Bramer D.
The Fifth Generation, An Annotated Bibliography
Addison-Wesley Publishing Co., 1984
BROO84a
Brookes S.D.
Reasoning About Synchronous Systems
CMU-CS-84-145
Dept of Comp Sci, Carnegie-Mellon Univ.
March 1984
BROW84a
Brownbridge D.
Recursive Structures in Computer Systems
PhD Thesis, Univ. of Newcastle upon Tyne, 1984
BROY82a
eds Broy M. & Schmidt G.
Proceedings of Nato Summer School on Theoretical Foundations of
Programming Methodology, Munich,
Dordrecht: Reidel, 1982
BROY82b
Broy M.
A Fixed Point Approach to Applicative Multiprogramming
in BROY82a, pp 565-624
1982
BROY83a
Broy M.
Applicative Real-Time Programming
Proc. 9th IFIP, Information Processing 1983, pp 259-264
North Holland 1983
BROY85a *
Broy M.
On The Herbrand-Kleene Universe For Nondeterministic Computations
Theoretical Computer Science, 36, pp 1 - 19
March 1985
BRUI81a *
Bruin A. De
On the Existence of Cook Semantics
Mathematisch Centrum iw 163/81
1981
BRUI85a *
Bruin A. De & Bohm W.
The Denotational Semantics of Dynamic Networks of Processes
ACM Transactions on Programming Languages and Systems, Vol 7, No 4,
pp 656-679
October 1985
BRUY83a *
Bruynooghe M. & Pereira L.M.
Deduction revision by Intelligent Backtracking
Universidade Nova de Lisboa, report no UNL-10/83
July 1983
BRYA85a *
Bryant R.E.
Symbolic Verification of MOS Circuits
1985 Chapel Hill Conference on VLSI
pp 419-438
1985
BUND85a *
Bunder M.W.
An Extension of Klop's Counterexample to the Church-Rosser Property to
Lambda-Calculus With Other Ordered Pair Combinators
Theoretical Computer Science 39, pp 337-342
North Holland
August 1985
BUNE82a
Buneman P. Frankel R.E. & Nikhil R.
An Implementation Technique for Database Query Languages
ACM TODS Vol 7 No. 2 p164-186 June 1982
BURG75a
Burge W.H.
Recursive Programming Techniques
Addison Wesley Publishing Co., 1975
BURN85a *
Burn G.L. & Hankin C.L. & Abramsky S.
The Theory and Practice of Strictness Analysis for Higher Order Functions
Research Report DoC 85/6
Dept of Computing, Imperial College
April 1985
BURS69a
Burstall R.M.
Proving Properties of Programs by Structural Induction
Computer Journal 12, p41
1969
BURS77a
Burstall R.M. & Darlington J.
A Transformation System for Developing Recursive Programs
JACM Vol 24,No. 1,p44-67
BURS77b
Burstall R.M.
Design Considerations for a Functional Programming Language
pp 54-57
Proc. Infotech State of the Art Conference, Copenhagen, 1977
BURS80a *
Burstall R.M. & MacQueen D.B. & Sannella D.T.
HOPE: An Experimental Applicative Language
Proc of LISP Conference Aug 1980
(Also Edinburgh report CSR-62-80, 1981)
BURS82a
Burstall R.M. & Goguen J.A.
Algebras, Theories and Freeness: An Introduction For Computer Scientists
in BROY82a, pp 329-348
1982
BURS84a *
Burstall R.M.
Programming with Modules as Typed Functional Programming
Proc. Int. Conf. on Fifth Gen. Computing Systems, Tokyo, Nov 84
BURT84a
Burton F.W.
Annotations to Control Parallelism and Reduction Order in the Distributed
Evaluation of Functional Programs
ACM TOPLAS Vol 6 No. 2 April 1984 p159-174
1984
BURT85a *
Burton F.W. & Huntbach M.M. & Kollias J.G.
Multiple Generation Text Files Using Overlapping Tree Structures
Computer Journal, Vol 28, no 4, pp 414-416
1985
BUSH79a
Bush V.J.
A Data Flow Implementation of Lucid
Msc Dissertation, Dept of Comp Sci, Univ. of Manchester, October 1979
BYTE85a *
Byte Magazine, August 1985.
Special Issue on Declarative Languages
1985
CAMP84a *
ed. Campbell J.A.
Implementations of Prolog
Ellis Horwood Series Artificial Intelligence
Ellis Horwood 1984
CARD?? *
Cardelli L.
A Semantics of Multiple Inheritance
CARD84a *
Cardelli L.
Compiling a Functional Language
Proceedings of 1984 ACM Symposium on Lisp and Functional Programming,
Austin, Texas
pp 208-217
August 1984
CARD85a *
Cardelli L.
Amber
Proceedings of the Treizieme Ecole de Printemps d'Informatique Theorique,
Le Val D'Ajol, Vosges, France
May 1985
CARD??
Cardelli L.
The Amber Machine
CART79a *
Cartwright R. & McCarthy J.
First Order Programming Logic
Proceedings ACM 6th Symposium on Principles of Programming Languages
pp 68-80
1979
CART83a *
Cartwright R. & Donahue J.
The Semantics of Lazy (and Industrious) Evaluation
CSL-83-9 , Xerox PARC 1983
CAT81a
Catto A.J.
Nondeterministic Programming in a Dataflow Environment
PhD thesis, Dept of Comp Sci, Univ. of Manchester, June 1981
CHAM84a *
eds. Chambers F.B. & Duce D.A. & Jones G.P.
Distributed Computing
Apic Studies in Data Processing no 20
Academic Press, 1984
CHAN84a
Chang J.H. & DeGroot D.
AND-Parallelism of Logic Programs Based on Static Data Dependency Analysis
Dept. of Electrical Eng. & Computer Sci, Univ. of California, Berkeley, Sept 1984
CHEE85a *
Cheese A.B.
The Applicability of SKI(BC) Combinators in a Parallel Rewrite Rule Environment
Msc Thesis
Department of Computer Science, University of Manchester
October 1985
CHES80a *
Chester D.
HCPRVR: An Interpreter for Logic Programs
Proc 1st Annual National Conference on Artificial Intelligence
pp 93-95
1980
CHEW80a *
Chew P.
An Improved Algorithm for Computing with Equations
IEEE 21st Annual Symposium on Foundations of Computer Science
pp 108-117
1980
CHEW81a *
Chew P.
Unique Normal Forms in Term Rewriting Systems with Repeated Variables
13th Annual ACM Symposium on Theory of Computing (STOC)
pp 7-18
1981
CHIK83a
Chikayama T.
ESP as Preliminary Kernel Language of Fifth Generation Computers
( Also in New Generation Computing, Vol 1, No 1, 1983 )
ICOT Research Center, Technical Report TR-005
1983
CHUR41a
Church A.
The Calculi of Lambda-Conversion
Princeton University Press, Princeton, N.J., 1941
CLAC85a *
Clack C. & Peyton-Jones S.
Strictness Analysis - A Practical Approach
in Proc. IFIP Conf. on Functional Programming Languages and
Computer Architecture, Sept 16-19 '85, Nancy, France
1985
CLAR77a
Clark K.L. & Sickel
Predicate Logic: A Calculus For Deriving Programs
Proc. 5th Int. Joint Conf. on Artif. Intell., Cambridge, Mass 1977
CLAR77b
Clark K.L. & Tarnlund S. -A.
A First Order Theory of Data and Programs
Proc. IFIP 1977, pp 939-944
Amsterdam: North Holland
CLAR78a
Clark K.L.
Negation As Failure
In "Logic and Databases", pp 293-322
New York: Plenum Press, 1978
CLAR79a
Clark K.L. & McCabe F.
The Control Facilities of IC-Prolog
Internal Report, Dept of Computing, Imperial College
1979
CLAR79b
Clark D.W.
Measurements of Dynamic List Structure Use in LISP
IEEE TOSE Vol SE-5 No 1, Jan 1979
CLAR80a
Clark K.L. & Darlington J.
Algorithm Classification Through Synthesis
Computer Journal, 61-65, 1980
CLAR80b *
Clarke J.W. & Gladstone P.J.S. & Maclean C.D. & Norman A.C.
SKIM - S,K,I Reduction Machine
Proceedings LISP Conference, Stanford, 1980
CLAR80c *
Clark J.H.
Structuring A VLSI System Architecture
Lambda, second quarter, 1980, pp 25-30
1980
CLAR80d *
Clark K.L. & McCabe F.G.
IC-PROLOG: Aspects of its Implementation
Proceedings of Logic Programming Workshop, Debrecen
1980
CLAR81a
Clark D.W. & Lampson B.W. & McDaniel G.A. & Ornstein S.M.
The Memory System of a High-Performance Personal Computer
CSL-81-1 , Xerox PARC, Jan 1981
CLAR82a *
Clark K.L. & Tarnlund S. -A.
Logic Programming
London: Academic Press, 1982
CLAR82b *
Clark T.S.
S-K Reduction Engine For An Applicative Language
Dept of Comp Sci, University of Illinois at Urbana-Champaign
Report no UIUCDCS-R-82-1119, UILU-ENG 82 1741
December 1982
CLAR83a *
Clark K. & Gregory S.
PARLOG: A Parallel Logic Programming Language (Draft)
Research Report DOC 83/5, Dept. of Computing, Imperial College
CLAR84a *
Clark K. & Gregory S.
PARLOG: Parallel Programming in Logic
Research Report DOC 84/4, Dept. of Computing, Imperial College
CLAR84b
Clark K.L. & McCabe F.G.
Micro-Prolog: Programming in Logic
Prentice Hall International Series in Computer Science
January 1984
CLAR85a
Clarke E.M. Jr.
The Characterization Problem For Hoare Logics
in HOA85a
1985
CLAY84a
Clayton B.D.
ART Programming Primer
Inference Corporation, 1984
CLOC81a *
Clocksin W.F. & Mellish C.S.
Programming in PROLOG
Springer Verlag 1981 (2nd Edition 1984)
CLOC83a *
Clocksin W.F.
Hortus Logico-Calculus
Notes for Tutorial Session on Declarative Languages and Architectures 1983
CLOC83b *
Clocksin W.F.
The ZIP Virtual Machine
Computer Laboratory, University of Cambridge
CLOC84a
Clocksin W.F.
Memory Representation Issues for Prolog Implementation
Computer Laboratory, University of Cambridge
CLOC84b *
Clocksin W.F.
Notes on FlexiFlow
Computer Laboratory, University of Cambridge Jan. 1984
CLOC84c *
Clocksin W.F.
On a Declarative Constraint Language
Computer Laboratory, University of Cambridge Jan. 1984
CLOC84d *
Clocksin W.F.
What is Prolog-X?
Computer Laboratory, University of Cambridge
CLOC85a *
Clocksin W.F.
Implementation Techniques for Prolog Databases.
Software - Practice and Experience Vol 15(7), pp 669-675
July 1985
CLOC85b *
Clocksin W.F.
Logic Programming and the Specification of Circuits
Computer Laboratory, University of Cambridge
Technical Report no 72
1985
COEL83a *
Coelho H.
Prolog: A Programming Tool For Logical Domain Modelling
in Processes and Tools for Decision Support
(ed Sol H.G.), pp 37-45
North Holland
1983
COHE81a
Cohen J.
Garbage Collection of Linked Data Structures
ACM Computing Surveys Vol 13 No.3 Sept 1981, pp 341-367
COLL60a
Collins G.E.
A Method For Overlapping and Erasure of Lists
CACM 3, no 12, pp 655-657
1960
COLM73a
Colmerauer A. & Kanoui H. & Pasero R. & Roussel P.
Un Systeme de Communication Homme-machine en Francais
Group Intelligence Artificielle
Universite d'Aix Marseille, Luminy, 1973
CONE83a
Conery J.S.
The AND/OR Process Model for Parallel Execution of Logic Programs
Phd Dissertation, Univ of California, Irvine,
Tech rep 204, Information and computer science
1983
COOM84a *
ed. Coombs M.J.
Developments in Expert Systems
Academic Press 1984
CORN79a *
Cornish M. et al
The TI Data Flow Architectures: The Power of Concurrency For Avionics
Proc. 3rd Digital Avionics Systems Conf., pp 19-25
November 1979
CORY84a
Cory H.T. & Hammond P. & Kowalski R.A. & Kriwaczek F. & Sadri F.
& Sergot M.
The British Nationality Act As A Logic Program
Dept of Computing, Imperial College, London
1984
COST84a *
Costa G.
A Metric Characterization of Fair Computations in CCS
Department of Computer Science, University of Edinburgh
Internal Report CSR-169-84
October 1984
COST85a *
Costa G. & Stirling C.
Weak and Strong Fairness in CCS
Department of Computer Science, University of Edinburgh
Internal Report CSR-167-85
January 1985
------------------------------
End of AIList Digest
********************
∂26-Apr-86 0343 LAWS@SRI-AI.ARPA AIList Digest V4 #102
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Apr 86 03:43:06 PST
Date: Fri 25 Apr 1986 23:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #102
To: AIList@SRI-AI
AIList Digest Saturday, 26 Apr 1986 Volume 4 : Issue 102
Today's Topics:
Bibliography - References #2
----------------------------------------------------------------------
Date: 9 Apr 86 13:20:13 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Bibliography - References #2
COX83a
Cox Brad J.
Object Oriented Programming in C
Unix Review, October/November 1983, Page 67
COX84a
Cox Brad J.
Object Oriented Programming in C
Unix Review, February/March 1984 Page 56
COUR82a
Courcelle B.
Fundamental Properties of Infinite Trees
in BROY82a, pp 417-470
1982
COUR84a
ed. Courcelle B.
Ninth Colloquium on Trees in Algebra and Programming
CUP 1984
COUS85a
Cousineau G. & Curien P. -L. & Mauny M.
The Categorical Abstract Machine
CNRS-Universite Paris VII LITP
85-8
January 1985
CRAM *
Crammond J.A. & Miller C.D.F.
An Architecture For Parallel Logic Languages
2nd International Logic Programming Conference
pp 183-194
CRAM85a *
Crammond J.A.
A Comparative Study of Unification Algorithms for OR-Parallel Execution of
Logic Languages
IEEE Transactions on Computers, Vol c-34, no 10, pp 911-917
October 1985
CURI85a
Curien P. -L.
Typed Categorical Combinatory Logic
CNRS-Universite Paris VII LITP
85-15
February 1985
CURI85b
Curien P. -L.
Categorical Combinators, Sequential Algorithms and Functional Programming
CNRS-Universite Paris VII LITP
85-26
March 1985
CURR58a
Curry H.B. & Feys R.
Combinatory Logic, Vol 1
North Holland Publishing Company, Amsterdam, 1958
CURR72a
Curry H.B & Hindley J.R. & Seldin J.P.
Combinatory Logic, Vol II
North Holland Publishing Company, Amsterdam, 1972
DA83a
Da Silva J.G.D. & Watson I.
A Pseudo Associative Store with Hardware Hashing
Proc. IEE, Part E, 1983
DAM82a
Damas L. & Milner R.
Principal Type Schemes For Functional Programs
Proc. ACM Symposium on Principles of Programming Languages,
pp 207-212, 1982
DARL75a
Darlington J.
Application of Program Transformation to Program Synthesis
Proc of International Symposium on Proving and Improving Programs,
Arc et Senans, France
1975
DARL76a
Darlington J. & Burstall R.M.
A System that Automatically Improves Programs
Acta Informatica, Vol 6,p41-60
DARL77a
Darlington J.
Program Transformation and Synthesis Present Capabilities
Report 77/43
Dept of Computing, Imperial College
(Also in Artificial Intelligence Journal Vol 16, 1981)
1977
DARL79a
Darlington J.
A Synthesis of Several Sorting Algorithms
Acta Informatica, Vol 11, no 1
1979
DARL80a
Darlington J.
An Abstract Scheme For a Multiprocessor Implementation of Applicative
Languages
Proc. of Joint SRC/Newcastle Univ. Workshop on VLSI,
Machine Architecture and Very High Level Languages
1980
DARL80b
Darlington J.
Synthesis of Implementations For Abstract Data Types
Report 80/4
Dept of Computing, Imperial College
1980
DARL80c
Darlington J.
The Design of Efficient Data Representations
Dept of Computing, Imperial College
1980
DARL81a
Darlington J.
The Structured Description of Algorithm Derivations
To Appear in Amsterdam Conf. on Algorithms
October 1981
DARL81b *
Darlington J. & Reeve M.
ALICE- A Multi-Processor Reduction Machine for the Parallel Evaluation
of Applicative Languages
Proc of 1981 ACM Conf on Functional Programming Languages & Computer
Architecture
DARL82a
Darlington J. & Henderson P. & Turner D.A.
Functional Programming and its Applications- An Advanced Course
Cambridge University Press 1982
DARL82b
Darlington J.
Program Transformation
in DARL82a
1982
DARL83a *
Darlington J.
The New Programming:Functional & Logic Languages
Distributed Computing- A Review for Industry, SERC, Manchester 1983
DARL83b *
Darlington J. & Reeve M.
ALICE- and the Parallel Evaluation of Logic Programs
Invited Paper, 10th Annual Int. Symposium on Computer Architecture,1983
DARL83c *
Darlington J.
Unification of Logic and Functional Languages
Dept. of Computing, Imperial College, Date Unknown
DARL85a *
Darlington J. & Field A.J. & Pull H.
The Unification of Functional and Logic Languages
Department of Computing, Imperial College
Doc 85/3
February 1985
DAVI78a
Davis A.L.
The Architecture and System Method of DDM1: A Recursively Structured
Data Driven Machine
Proc. 5th Int. Symp on Comp. Arch., pp 210-215
April 1978
DEGR84a
DeGroot D.
Restricted And-Parallelism
Proc. Int. Conf. 5th Generation Computer Systems, 1984,
pp 471-478
1984
DEGR85a *
DeGroot D.
Alternate Graph Expressions for Restricted And-Parallelism
IEEE Spring Compcon 1985, pp 206-210
1985
DEGR85b *
DeGroot D. & Chang J-H
Une Comparaison de Deux Modeles d'Execution de Parallelisme "et"
(A Comparison of Two And-Parallel Execution Models)
Hardware and Software Components and Architectures for the 5th
Generation, March 5-7 1985, pp 271-280
1985
DELI79a
Deliyanni A. & Kowalski R.A.
Logic and Semantic Networks
CACM, Vol 22, No 3, pp 184-192
DEN75a
Dennis J.B. & Misunas D.P.
A Preliminary Architecture for a Basic Dataflow Processor
Proc. 2nd Annual Symposium on Computer Architecture
SIGARCH Vol 3, no 4, Jan 75, pp 126-132
1975
DEN79a
Dennis J.B.
The Varieties of Data Flow Computers
MIT Computation Structures Group, Memo 183, August 1979
DELV85a *
Delves L.M. & Mawdsley S.C.
DAP-Algol: A Development System for Parallel Algorithms
Computer Journal, Vol 28, no 2, pp 148-153
1985
DERT84a
Derthick M.
Variations on the Boltzmann Machine Learning Algorithm
CMU-CS-84-120
Dept of Comp Sci, Carnegie-Mellon Univ
August 1984
DETT86a *
Dettmer R.
Flagship: A Fifth Generation Machine
Electronics and Power, pp 203-208
March 1986
DEU76a
Deutsch L.P. & Bobrow D.G.
An Efficient, Incremental, Automatic Garbage Collector
CACM, Vol 19, no 9, pp 522-526, 1976
DIJK82a
Dijkstra E.W.
Lambek and Moser Revisited
in BROY82a, pp 19-22
1982
DIJK82b
Dijkstra E.W.
Repaying our Debts
in BROY82a, pp 135-141
1982
DIJK82c
Dijkstra E.W.
A Tutorial on the Split Binary Semaphore
in BROY82a, pp 555-564
1982
DIJK85a
Dijkstra E.W.
Invariance and Non-Determinacy
in HOAR85a
1985
DONA85a *
Donahue J. & Demers A.
Data Types Are Values
ACM Transactions on Programming Languages and Systems, vol 7, no 3
pp 426-445
July 1985
DOWN76a
Downey P.J. & Sethi R.
Correct Computation Rules For Recursive Languages
SIAM Journal of Computing 5(3), pp 378-401, September 1976
DUCE84a *
ed. Duce D.A.
Distributed Computing Systems Programme
IEE Digital Electronics and Computing Series no 5
Peter Peregrinus Ltd., 1984
DUCK85a *
Duckworth R.J. & Brailsford D.F. & Harrison L.
A Structured Data Flow Computer
Internal Report, Comp Sci Group, Univ of Nottingham
October 14, 1985
EGA79a
Egan G.K.
Data Flow: Its Applications to Decentralised Control
PhD Thesis, Dept of Comp Sci, Univ. of Manchester, 1979
ELIT84a
eds. Elithorn A. & Banerji R.
Artificial and Human Intelligence: Symposium
North Holland 1984
ENNA82a
Ennals J.R.
Beginning Micro-Prolog
Ellis Horwood Series Artificial Intelligence
Ellis Horwood Ltd., 1982
ENOM84a
Enomoto H. & Yonezaki N. & Saeki M. & Chiba K. & Takizuka T. & Yokoi T.
Natural Language Based System Development System TELL
ICOT Research Center, Technical Report TR-067
June 1986
ENOM84b
Enomoto H. & Yonezaki N. & Saeki M.
Formal Specification and Verification for Concurrent Systems by TELL
ICOT Research Center, Technical Report TR-068
June 1986
FAGE83a
Fages F. & Huet G.P.
Complete Sets Of Unifiers And Matches In Equational Theories
Proc. 8th Colloquium on Trees In Algebra And Programming
Springer Verlag, LNCS 159, pp 205-220, 1983
FAHL83a
Fahlman S.E. & Hinton G.E. & Sejnowski T.J.
Massively Parallel Architectures for AI: NETL, THISTLE, and Boltzmann Machines
Proc. National Conf. on Artificial Intelligence, Aug 1983, pp 109-113
FAIR82a
Fairbairn J.
Ponder, And Its Type System
Cambridge Computer Lab Technical Report 31, 1982
FAIR85a *
Fairbairn J.
Design and Implementation of a Simple Typed Language Based on the Lambda
Calculus
Computer Laboratory, University of Cambridge, Tech Rep no 75
(also submitted as PhD thesis in December 1984)
1985
FARR79a
Farrell E.P. et al
A Concurrent Computer Architecture and Ring Based Implementation
Proc 6th Int. Symp. on Comp. Arch., pp 1-11
April 1979
FAUS83a
Faustini A.A. & Mathews S.G. & Yaghi A.G.
The pLUCID Programming Manual
University of Warwick Distributed Computing Report No. 4, 1983
FEHR84a *
Fehr E.
Expressive power of Typed and Type-Free Programming Languages
Theoretical Computer Science 33 (1984) pp 195-238
North Holland
1984
FEHR84b *
Fehr E.
Documentation of a PROLOG Interpreter Implemented in the Functional
Language BRL (in German)
GMD Nr 122
November 1984
FILG82a *
Filgueiras M.
On The Implementation of Control in Logic Programming Languages
Universidade Nova de Lisboa, Tech rep UNL 8/82
1982
FINN85a *
Finn S.
The Simplex Programming Language
Department of Computing Science, University of Stirling
27th March 1985
FOLE?
Foley J.
A Multi-Ring Dataflow Machine
PhD Thesis, Dept of Computer Science, Univ. of Manchester
In Preparation
FOO86a *
Foo N.Y.
Dewey Indexing of Prolog Traces
Computer Journal, Vol 29, no 1, pp 17-19
1986
FREI74a
Friedman D.P.
The Little LISPer
Science Research Associates, Palo Alto
1974
FREI76a *
Friedman D.P. & Wise D.S.
CONS Should Not Evaluate Its Arguments
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 257-284
Edinburgh University Press, 1976
FREI77a
Friedman D.P. & Wise D.S.
Applicative Multiprogramming
Tech rep no 72, Indiana univ., Bloomington
1977
FREI77b
Friedman D.P. & Wise D.S.
Aspects of Applicative Programming for File Systems
SIGPLAN Notices, Vol 12, no 3, March 1977, pp 41-55
1977
FREI78a
Friedman D.P. & Wise D.S.
A Note on Conditional Expressions
CACM 21(11), pp 931-933, November 1978
FREI78b *
Friedman D.P. & Wise D.S.
Functional Combination
Computer Languages, 3, pp 31-35
1978
FREI78c
Friedman D.P. & Wise D.S.
Unbounded Computational Structures
Software - Practice and Experience, 8, pp 407-415
1978
FREI79a *
Friedman D.P. & Wise D.S.
Reference Counting Can Manage The Circular Environments of Mutual Recursion
Information Processing Letters, 8, no 2, pp 921-930
1979
FREI80a
Friedman D.P. & Wise D.S.
An Indeterminate Constructor for Applicative Programming
Conf. Record of ACM Symp. on Princ. of Prog. Langs., Las Vegas
1980
FROS85a *
Frost R.A.
Using Semantic Concepts to Characterise Various Knowledge Representation
Formalisms: A Method of Facilitating the Interface of Knowledge Base
System Components
Computer Journal, Vol 28, no 2, pp 112-116
1985
FUJI83a
Fujita M. & Tanaka H. & Moto-oka T.
Verification with PROLOG and Temporal Logic
Faculty of Eng. Univ. of Tokyo
FURU83a *
Furukawa K. & Takeuchi A. & Kunifuji S.
Mandala: A Concurrent Prolog Based Knowledge Programming Language System
ICOT Research center, Technical Report TR-029
November 1983
FURU83b *
Furukawa K. & Nakajima R. & Yonezawa A.
Modularization and Abstraction in PROLOG
Document ETL
ICOT Research Center, Technical Report TR-022
( Also in New Generation Computing, Vol 1, No 2, 1983 )
August 1983
FURU83c
Furukawa K.
Mandala: A Knowledge Programming Language on Concurrent Prolog
ICOT Research Center, Technical Memorandum TM-0028
October 1983
FURU84a *
Furukawa K. & Kunifuji S. & Takeuchi A. & Ueda K.
The Conceptual Specification of the Kernel Language Version 1
( Also in Workshop on Implementation of Concurrent Prolog, Rehovot, 1984 )
ICOT Research Center, Technical Report TR-054
March 1984
FURU84b *
Furukawa K. & Takeuchi A. & Kunifuji S. & Yasukawa H. & Ohki M. & Ueda K.
Mandala: A Logic Based Knowledge Programming System
( Also in Second Japanese Swedish Workshop on Logic Programming and
Functional Programming, Uppsala, 1984 )
ICOT Research Center, Technical Report TR-076
August 1984
FUTA85a
Futatsugi K. & Goguen J.A. & Jouannaud J-P & Meseguer J.
Principles of OBJ2
In Proc. 1985 Principles of Programming Languages
1985
GIER80a
Gierz G. & Hofmann K.H. & Keimel K. & Lawson J.D. & Mislove M. & Scott D.S.
A Compendium of Continuous Lattices
Springer Verlag
1980
GLAS84a *
Glaser H. & Hankin C. & Till D.
Principles of Functional Programming
Prentice Hall International, 1984
GLAU78a
Glauert J.R.W.
A Single-Assignment Language for Data Flow Computing
MSc Dissertation, Dept of Comp Sci, Univ. of Manchester, January 1978
GLAU85a *
Glauert J.R.W. & Holt N.P. & Kennaway J.R. & Sleep M.R.
An Active Term Rewrite Model for Parallel Computation
Document, Alvey DACTL group, March 1985
GLAU85b *
Glauert J.R.W. & Holt N.P. & Kennaway J.R. & Sleep M.R.
DACTL Report 3/5
Document, Alvey DACTL group, March 1985
GLAU85c *
Glauert J.R.W. & Holt N.P. & Kennaway J.R. & Reeve M.J. &
Sleep M.R. & Watson I.
DACTL0: A Computational Model and an Associated Compiler Target Language
University of East Anglia
May 1985
GOEB85a
Goebel R.
The Design and Implementation of DLOG, a Prolog-Based Knowledge Representation
System
New Generation Computing, Vol 3, No 4, pp 385-401
1985
GOGU67a
Goguen J.A.
L-Fuzzy Sets
Journal of Mathematical Analysis and Applications
Vol 18 no 1, pp 145-174
1967
GOGU68a
Goguen J.A.
Categories of Fuzzy Sets
PhD Dissertation
Dept. of Mathematics, Univ. of California, Berkeley
1968
GOGU68b
Goguen J.A.
The Logic of Inexact Concepts
Synthese, Vol 19, pp 325-373
1968-69
GOGU69a
Goguen J.A.
Categories of V-Sets
Bulletin of the American Mathematical Society,
Vol 75, no 3, pp 622-624
1969
GOGU71a
Goguen J.A.
Mathematical Representation of Hierarchically organised Systems
in "Global Systems Dynamics"
(ed. Attinger E. & Karger S.)
Basel, Switzerland
pp 112-128
1971
GOGU72a
Goguen J.A.
Systems and Minimal Realisation
Proc. IEEE Conf. on Decision and Control,
Miami Beach, Florida
pp 42-46
1972
GOGU72b
Goguen J.A.
Minimal Realisation of Machines in Closed Categories
Bulletin of the American Mathematical Society
Vol 78, no 5, pp 777-783
1972
GOGU72c
Goguen J.A.
Hierarchical Inexact Data Structures in Artificial Intelligence Problems
Proc. 5th Hawaii Int. Conf. on System Sciences
Honolulu, Hawaii, pp 345-347
1972
GOGU72d
Goguen J.A. & Yacobellis R.H.
The Myhill Functor, Input-Reduced Machines, and Generalised
Krohn-Rhodes Theory
Proc. 5th Princeton Conf. on Information Sciences and Systems
Princeton, New Jersey
pp 574-578
1972
GOGU72e
Goguen J.A.
On Homomorphisms, Simulation, Correctness and Subroutines for
Programs and Program Schemes
Proc. 13th IEEE Symp. on Switching and Automata Theory
College Park, Maryland
pp 52-60
1972
GOGU73a
Goguen J.A.
Realisation is Universal
Mathematical Systems Theory
Vol 6, no 4, pp 359-374
1973
GOGU73b
Goguen J.A.
System Theory Concepts in Computer Science
Proc. 6th Hawaii Int. Conf. on Systems Sciences
Honolulu, Hawaii, pp 77-80
1973
GOGU73c
Goguen J.A.
The Fuzzy Tychonoff Theorem
Journal of Mathematical Analysis and Applications
Vol 43, pp 734-742
1973
GOGU73d
Goguen J.A.
Categorical Foundations for General Systems Theory
in "Advances in Cybernetics and Systems Research"
(ed. Pichler F. & Trappl R.)
Transcripta Books, London
pp 121-130
1973
GOGU74a
Goguen J.A.
Semantics of Computation
Proc. 1st Int. Symp. on Category Theory Applied to Computation and Control
(1974 American Association for the Advancement of Science, San Francisco)
Univ. of Massachusetts at Amherst, 1974, pp 234-249
also published in LNCS Vol 25, pp 151-163, Springer-Verlag
1975
GOGU74b
Goguen J.A. & Thatcher J.W.
Initial Algebra Semantics
Proc. 15th IEEE Symp. on Switching and Automata
pp 63-77
1974
GOGU74c
Goguen J.A.
Concept Representation in Natural and Artificial Languages: Axioms,
Extensions and Applications for Fuzzy Sets
Int. Journal of Man-Machine Studies
Vol 6, pp 513-561
1974
reprinted in "Fuzzy Reasoning and its Applications"
(ed. Mamdani E.H. & Gaines B.R.)
pp 67-115
Academic Press
1981
GOGU74d
Goguen J.A.
On Homomorphisms, Correctness, Termination, Unfoldments and
Equivalence of Flow Diagram Programs
Journal of Computer and System Sciences,
Vol 8, no 3, pp 333-365
1974
GOGU74e
Goguen J.A.
Some Comments on Applying Mathematical System Theory
in "Systems Approaches and Environmental Problems"
(ed. Gottinger H.W. & Vandenhoeck & Rupert)
pp 47-67
(Gottingen, Germany)
1974
GOGU75a
Goguen J.A. & Thatcher J.W. & Wagner E.G. & Wright J.B.
Factorisation, Congruences, and the Decomposition of Automata and
Systems
in "Mathematical Foundations of Computer Science"
LNCS Vol 28, pp 33-45, Springer-Verlag
1975
GOGU75b
Goguen J.A.
Objects
International Journal of General Systems, Vol 1, no 4,
pp 237-243
1975
GOGU75c
Goguen J.A.
Discrete-Time Machines in Closed Monoidal Categories, I,
Journal of Computer and System Sciences, Vol 10, No 1, February,
pp 1-43
1975
GOGU75h
Goguen J.A. & Thatcher J.W. & Wagner E.G. & Wright J.B.
Abstract Data Types as Initial Algebras and the Correctness of
Data Representations
Proc. Conf. on Computer Graphics, Pattern Recognition, and Data Structure
(Beverly Hills, California), pp 89-93
1975
GOGU75d
Goguen J.A. & Carlson L.
Axioms for Discrimination Information
IEEE Transactions on Information Theory, Sept '75
pp 572-574
1975
GOGU75e
Goguen J.A.
On Fuzzy Robot Planning
in "Fuzzy Sets and Their Applications to Cognitive and Decision Processes"
(ed. Zadeh L.A. & Fu K.S. & Tanaka K. & Shimura M.)
pp 429-448
Academic Press
1975
GOGU75f
Goguen J.A.
Robust Programming Languages and the Principle of Maximum
Meaningfulness
Proc. Milwaukee Symp. on Automatic Computation and Control
(Milwaukee, Wisconsin)
pp 87-90
1975
GOGU75g
Goguen J.A.
Complexity of Hierarchically Organised Systems and the Structure of
Musical Experiences
Int. Journal of General Systems, vol 3, no 4, 1975, pp 237-251
originally in UCLA Comp. Sci. Dept. Quarterly, October 1975, pp 51-88
1975
GOGU76a
Goguen J.A. & Thatcher J.W. & Wagner E.G. & Wright J.B.
Some Fundamentals of Order-Algebraic Semantics
Proc. 5th Int. Symp. on Mathematical Foundations of Computer Sciences
(Gdansk, Poland, 1976)
LNCS Vol 46, 1976, pp 153-168, Springer-Verlag
1976
GOGU76b
Goguen J.A. & Thatcher J.W. & Wagner E.G. & Wright J.B.
Parallel Realisation of Systems, Using Factorisations and Quotients in
Categories
Journal of Franklin Institute, Vol 301, no 6, June '76, pp 547-558
1976
GOGU76c
Goguen J.A.
Correctness and Equivalence of Data Types
Proc Symp. on Mathematical Systems Theory (Udine, Italy)
Springer Verlag Lecture Notes
(ed. Marchesini G.)
pp 352-358
1976
GOGU76d
Goguen J.A. & Thatcher J.W. & Wagner E.G. & Wright J.B.
Rational Algebraic Theories and Fixed-Point Solutions
Proc. IEEE 17th Symp on Foundations of Computer Science
(Houston, Texas), 1976, pp 147-158
1976
GOGU77a
Goguen J.A. & Thatcher J.W. & Wagner E.G. & Wright J.B.
Initial Algebra Semantics and Continuous Algebras
JACM, vol 24, no 1, January 1977, pp 68-95
1977
GOGU77b
Goguen J.A.
Abstract Errors for Abstract Data Types
in "Formal Descriptions of Programming Concepts"
(ed. E.Neuhold)
North-Holland, 1978, pp 491-522
also in
Proc. IFIP Working Conf. on Formal Description of Programming
Concepts
(ed. Dennis J.)
MIT Press, 1977, pp 21.1-21.32
1977
GOGU77c *
Goguen J.A. & Burstall R.M.
Putting Theories Together to Make Specifications
Proc. 5th Int. Joint Conf. on Artificial Intelligence
(MIT, Cambridge, Massachusetts), 1977, pp 1045-1058
1977
GOGU77d
Goguen J.A. & Meseguer J.
Correctness of Recursive Flow Diagram Programs
Proc. Conf. on Mathematical Foundations of Comp. Sci.
(Tatranska Lomnica, Czechoslovakia)
pp 580-595
1977
GOGU77e
Goguen J.A.
Algebraic Specification Techniques
UCLA Comp. Sci. Dept. Quarterly
Vol 5, no 4
pp 53-58
1977
GOGU78a
Goguen J.A. & Varela F.
The Arithmetic of Closure
Journal of Cybernetics, Vol 8, 1978
also in "Progress in Cybernetics and Systems Research, Vol 3"
(ed. Trappl R. & Klir G.J. & Ricciardi L.)
Hemisphere Pub Co. (Washington D.C.)
1978
GOGU78b
Goguen J.A. & Ginali S.
A Categorical Approach to General Systems
in "Applied General Systems Research"
(ed. Klir G.)
Plenum Press
pp 257-270
1978
GOGU78c
Goguen J.A. & Thatcher J.W. & Wagner E.G.
An Initial Algebra Approach to the Specification, Correctness and
Implementation of Abstract Data Types
in "Current Trends in Programming, vol 4, Data Structuring"
pp 80-149
(ed. Yeh R.)
Prentice Hall
1978
GOGU78d
Goguen J.A.
Some Design Principles and Theory for OBJ-0, a Language for Expressing
and Executing Algebraic Specifications of Programs
Proc. Int. Conf. on Mathematical Studies of Information Processing
(Kyoto, Japan)
pp 429-475
1978
GOGU78e
Goguen J.A. & Linde C.
Structure of Planning Discourse
Journal of Social and Biological Structures, Vol 1
pp 219-251
1978
GOGU79a
Goguen J.A. & Shaket E.
Fuzzy Sets at UCLA
Kybernetes, vol 8
pp 65-66
1979
GOGU79b
Goguen J.A. & Varela F.
Systems and Distinctions; Duality and Complementarity
International Journal of General Systems, vol 5
pp 31-43
1979
GOGU79c
Goguen J.A. & Tardo J.J.
An Introduction to OBJ: A Language for Writing and Testing Formal
Algebraic Specifications
Reliable Software Conf. Proc. (ed. Yeh R.)
(Cambridge, Massachusetts)
pp 170-189
Prentice Hall
1979
GOGU79d
Goguen J.A.
Algebraic Specification
in "Research Directions in Software Technology"
(ed. Wegner P.)
pp 370-376
MIT Press
1979
GOGU79e
Goguen J.A.
Some Ideas in Algebraic Semantics
Proc. 3rd IBM Symp on Mathematical Foundations of Computer Science
(Kobe, Japan)
53 pages
1979
GOGU79f
Goguen J.A.
Fuzzy Sets and the Social Nature of Truth
in "Advances in Fuzzy Set Theory and Applications"
(eds. Gupta M.M. & Yager R.)
pp 49-68
North-Holland Press
1979
GOGU79g
Goguen J.A. & Tardo J. & Williamson N. & Zamfir M.
A Practical Method for Testing Algebraic Specifications
UCLA Computer Science Quarterly, Vol 7, no 1
pp 59-80
1979
------------------------------
End of AIList Digest
********************
∂26-Apr-86 0542 LAWS@SRI-AI.ARPA AIList Digest V4 #103
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Apr 86 05:41:57 PST
Date: Fri 25 Apr 1986 23:10-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #103
To: AIList@SRI-AI
AIList Digest Saturday, 26 Apr 1986 Volume 4 : Issue 103
Today's Topics:
Bibliography - References #3
----------------------------------------------------------------------
Date: 9 Apr 86 13:20:13 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Bibliography - References #3
GOGU80a
Goguen J.A.
Thoughts on Specification, Design and Verification
Software Engineering Notes, Vol 5, no 3
pp 29-33
1980
GOGU80b
Goguen J.A.
How to Prove Algebraic Inductive Hypotheses Without Induction: with
Applications to the Correctness of Data Type Implementation
Proc. 5th Conf. on Automated Deduction, (Les Arcs, France)
(eds. Bibel W. & Kowalski R.)
LNCS, vol 87
pp 356-373
Springer Verlag
1980
GOGU80c
Goguen J.A. & Burstall R.M.
The Semantics of CLEAR, a Specification Language
in "Abstract Software Specification"
(ed. Bjorner D.)
(Proc. 1979 Copenhagen Winter School)
LNCS, vol 86
pp 294-332
1980
GOGU80d
Goguen J.A. & Linde C.
On the Independence of Discourse Structure and Semantic Domain
Proc. 18th Annual Meeting of the Association for Computational
Linguistics, Parasession on Topics in Interactive Discourse
(Univ. of Pennsylvania, Philadelphia, Pennsylvania)
pp 35-37
1980
GOGU81a
Goguen J.A. & Parsaye-Ghomi K.
Algebraic Denotational Semantics Using Parameterised Abstract Modules
Proc. Int. Conf on Formalising Concepts
(Peniscola, Spain)
(ed. Diaz J. & Ramos I.)
LNCS, vol 107
pp 292-309
Springer Verlag
1981
GOGU81b
Goguen J.A. & Burstall R.M.
An Informal Introduction to CLEAR, a Specification Language
in "The Correctness Problem in Computer Science"
(eds. Boyer R. & Moore J.)
pp 185-213
Academic Press
1981
GOGU81c
Goguen J.A. & Meseguer J.
Completeness of Many-Sorted Equational Logic
SIGPLAN Notices, Vol 16, no 7, pp 24-32, 1981
also in SIGPLAN Notices, Vol 17, no 1, pp 9-17, 1982
extended version as Tech Rep CSLI-84-15, Center for the Study of
Language and Information, Stanford Univ.,
September 1984
GOGU82a
Goguen J.A.
ORDINARY Specification of KWIC Index Generation
Proc Workshop on Program Specification
(ed. Staunstrup J.)
LNCS, Vol 134
pp 114-117
Springer Verlag
1982
GOGU82b
Goguen J.A.
ORDINARY Specification of Some Constructions in Plane Geometry
Proc Workshop on Program Specification
(ed. Staunstrup J.)
LNCS, Vol 134
pp 31-46
Springer Verlag
1982
GOGU82c
Goguen J.A. & Burstall R.M.
Algebras, Theories and Freeness: An Introduction for Computer Scientists
in "Theoretical Foundations of Programming Methodology"
(eds. Broy M. & Schmidt G.)
pp 329-348
D. Reidel
1982
GOGU82d
Goguen J.A. & Meseguer J.
Security Policies and Security Models
Proc 1982 Berkeley Conf on Computer Security
IEEE Computer Society Press
pp 11-20
1982
GOGU82e
Goguen J.A.
Universal Realisation, Persistent Interconnection and Implementation of
Abstract Modules
Proc 9th Int Colloquium on Automata, Languages and Programming
(Aarhus, Denmark)
LNCS, Springer Verlag
1982
GOGU82f
Goguen J.A.
Rapid Prototyping in the OBJ Executable Specification Language
Proc Rapid Prototyping Workshop
(Columbia, Maryland)
1982
also in Software Engineering Notes, ACM Special Interest
Group on Software Engineering, Vol 7, no 5, pp 75-84, 1983
GOGU83a
Goguen J.A. & Meseguer J. & Plaisted D.
Programming with Parameterised Abstract Objects in OBJ
in "Theory and Practice of Software Technology"
(eds. Ferrari D. & Bolognani M. & Goguen J.A.)
pp 163-193
North-Holland
1983
GOGU83b
Goguen J.A.
Future Directions for Software Engineering
in "Theory and Practice of Software Technology"
(eds. Ferrari D. & Bolognani M. & Goguen J.A.)
pp 243-244
North-Holland
1983
GOGU83c
Goguen J.A. & Ferrari D. & Bolognani M.
Theory and Practice of Software Technology
North Holland
1983
GOGU83d
Goguen J.A. & Meseguer J.
Correctness of Recursive Parallel Non-Deterministic Flow Programs
Journal of Computer and System Sciences, vol 27, no 2
pp 268-290
October 1983
GOGU83e
Goguen J.A.
Parameterised Programming
IEEE Trans. on Software Engineering, Vol SE-10, no 5, September 1984, pp 528-543
preliminary version in Proc. Workshop on Reusability in Programming,
ITT, pp 138-150
1983
GOGU83f
Goguen J.A. & Linde C. & Weiner J.
Reasoning and Natural Explanation
International Journal of Man-Machine Studies, Vol 19
pp 521-559
1983
GOGU83g
Goguen J.A. & Burstall R.M.
Introducing Institutions
Logics of Programs
(Carnegie-Mellon Univ., Pittsburgh PA, June 1983)
LNCS, vol 164, Springer Verlag
pp 221-256, 1984
GOGU84a
Goguen J.A. & Meseguer J.
Unwinding and Inference Control
1984 Symp on Security and Privacy, IEEE, pp 75-86
1984
GOGU84b
Goguen J.A. & Meseguer J.
Equality, Types, Modules and Generics for Logic Programming
Tech Rep no. CSLI-84-5, Center for the Study of Logic and Information,
Stanford University, March 1984
also in Proc. 2nd Int. Logic Programming Conf., Uppsala, Sweden,
pp 115-125
1984
GOGU84c
Goguen J.A. & Burstall R.M.
Some Fundamental Properties of Algebraic Theories: A Tool for Semantics
of Computation, Part 1: Comma Categories, Colimits and Theories
Theoretical Computer Science, vol 31, no 2,
pp 175-209
1984
GOGU84d
Goguen J.A. & Burstall R.M.
Some Fundamental Properties of Algebraic Theories: A Tool for Semantics
of Computation, Part 2: Signed and Abstract Theories
Theoretical Computer Science, vol 31, no 3
pp 263-295
1984
GOGU84e
Goguen J.A. & Meseguer J.
Equality, Types, Modules and (Why Not?) Generics for Logic Programming
Journal of Logic Programming, Vol 1, no 2
pp 179-210
1984
GOGU84f
Goguen J.A. & Murphy M. & Randle R.J. & Tanner T.A. & Frankel R.M. &
Linde C.
A Full Mission Simulator Study of Aircrew Performance: The Measurement
of Crew Coordination and Decisionmaking Factors and Their Relationships
to Flight Task Performance
Proc. 20th Annual Conf on Manual Control, Vol II
(eds. Hartzell E.J. & Hart S.)
NASA Conference Publication 2341, pp 249-262
1984
GOGU84g
Goguen J.A. & Linde C. & Murphy M.
Crew Communication as a Factor in Aviation Accidents
Proc 20th Annual Conf on Manual Control, Vol II
(eds. Hartzell E.J. & Hart S.)
NASA Conference Publication 2341, pp 217-248
1984
GOGU85a
Goguen J.A. & Meseguer J.
EQLOG: Equality, Types and Generic Modules for Logic Programming
In Functional and Logic Programming, Prentice Hall
1985
GOGU85b
Goguen J.A. & Jouannaud J-P & Meseguer J.
Operational Semantics for Order-Sorted Algebra
In Proc. ICALP 1985
GOGU85c
Goguen J.A. & Meseguer J.
Initiality, Induction and Computability
to appear in "Algebraic Methods in Semantics"
(ed. Nivat M. & Reynolds J. )
Cambridge U.P.
chapter 14, pp 459-540 approx.
1985
GOGU85d
Goguen J.A. & Meseguer J.
Completeness of Many-Sorted Equational Logic
to appear in Houston Journal of Mathematics
1985
GOGU85e *
Goguen J.A. & Futatsugi K. & Jouannaud J.-P. & Meseguer J.
Principles of OBJ2
Proc 1985 Symp on Principles of Programming Languages, ACM
pp 52-66
1985
GOLD81a
Goldfarb W.
The Undecidability Of The Second Order Unification Problem
Theoretical Computer Science 13, pp 225-230, 1981
GOOD83a
Goodall A.
Language Of Intelligence (PROLOG)
Systems International, pp 21-24, Jan 1983
GOOD85a
Good D.I.
Mechanical Proofs about Computer Programs
in HOAR85a
1985
GORD79a *
Gordon M.J. & Milner R. & Wadsworth C.P.
Edinburgh LCF
Lecture Notes In Computer Science, Vol 78
Berlin: Springer Verlag, 1979
GORD85a *
Gordon M.
HOL: A Machine Oriented Formulation of Higher Order Logic
Computer Laboratory, University of Cambridge
Technical Report no 68
July 16 1985
GOST79a
Gostelow K.P. & Thomas R.E.
A View of Dataflow
Proc. Nat. Comp. Conf., Vol 48, pp 629-636
1979
GOTO82a
Goto A. & Moto-oka T.
Basic Architecture of Highly Parallel Processing System for Inference
Document Univ. of Tokyo, Dec 1982
GREE85a *
Greene K.J.
A Fully Lazy Higher Order Purely Functional Programming Language with
Reduction Semantics
CASE Center Technical Report No. 8503
CASE Center, Syracuse University, New York
December 1985
GREG85a *
Gregory S.
Design, Application and Implementation of a Parallel Programming Language
PhD Thesis, Dept of Computing, Imperial College, Univ of London
September 1985
GRIE77a
Gries D.
An Exercise in Proving Parallel Programs Correct
CACM, 20, no 12, pp 921-930
1977
GRIS71a
Griswold R.E. & Poage J.F. & Polonsky I.P.
The Snobol-4 Programming Language
Prentice Hall
1971
GRIS84a *
Griswold R.E.
Expression Evaluation in the Icon Programming Language
Proceedings of 1984 ACM Symposium on Lisp and Functional Programming
Austin, Texas
pp 177-183
1984
GUES76a *
Guessarian I.
Semantic Equivalence of Program Schemes and its Syntactic Characterization
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 189-200
Edinburgh University Press, 1976
GUNN84a *
Gunn H.I.E. & Harland D.M.
Polymorphic Programming II. An Orthogonal Tagged High Level Architecture
Abstract Machine
Software - Practice and Experience, Vol 14(11), pp 1021-1046
November 1984
GURD78a *
Gurd J. & Watson I. & Glauert J.
A Multi-Layered Data Flow Computer Architecture
Internal Report, Dept of Comp Sci, Univ of Manchester
1978
GURD85a *
Gurd J. & Kirkham C.C. & Watson I.
The Manchester Prototype Dataflow Computer
CACM, Vol 28, pp 34-52,
1985
GUTT75a
Guttag J.V.
The Specification and Application to Programming of Abstract Data Types
PhD dissertation, Univ. of Toronto, Dept of Comp Sci
1975
GUTT77a
Guttag J.V.
Abstract Data Types and the Development of Data Structures
CACM Vol 20, no 6, pp 396-404, June
1977
GUTT78a
Guttag J.V. & Horowitz E. & Musser D.R.
Abstract Data Types and Software Validation
CACM, Vol 21, pp 1048-1064, December
also USC Information Sciences Institute Tech. Rep., Aug 76
1978
GUTT78b
Guttag J.V. & Horning J.J.
The Algebraic Specification of Abstract Data Types
Acta Informatica, 10, 1, pp 27-52
1978
GUTT80a
Guttag J.V.
Notes on Type Abstraction (version 2)
IEEE Trans. on Soft. Eng. Vol SE-6, no 1, pp 13-23, January
1980
GUTT82a
Guttag J.
Notes On Using Types and Type Abstraction In Functional Programming
in DARL82a
1982
GUZM81a *
Guzman A.
A Heterarchical Multi-Microprocessor Lisp Machine
1981 IEEE Computer Society Workshop on Computer Architecture for Pattern
Analysis and Image Database Management, Hot Springs, Virginia
pp 309-317
November 11-13, 1981
HALI84a *
Halim Z.
A Data-/Demand-Driven Model for the Evaluation of PARLOG And-Relations
and Conditional Equations
Document, Dept of Computer Science, Univ. of Manchester Jan 1984
HAMI85a *
Hamilton A.G.
Program Construction in Martin-Lof Type Theory
T.R. 24
Tech Rep, Dept of Comp Sci, Univ of Stirling
June 1985
HAMM83a
Hammond P. & Sergot M.
A Prolog Shell for Logic-Based Expert Systems
Proc. 3rd BCS Expert Systems Conf. pp 95-104,
1983
HAMM83b
Hammond P.
Representation of DHSS Regulations as a Logic Program
B.C.S. Expert Systems Conference 1983
HAMM84a
Hammond K.
The KRC Manual
CSA/16/1984, DSAG-3,
Univ. of East Anglia, May 1984.
HANK85a *
Hankin C.L. & Osmon P.E. & Shute M.J.
COBWEB - A Family of Fifth Generation Computer Architectures
25th January 1985
HANS79a
Hansson A. & Tarnlund S. -A.
A Natural Programming Calculus
Proc. 6th IJCAI, Tokyo, Japan, pp 348-355, 1979
HARL84a *
Harland D.M.
Polymorphic Programming Languages
Ellis Horwood 1984
HARR81a *
Harrison P.G.
Efficient Storage Management for Functional Languages
Dept of Computing, Imperial College, Research Report no DOC 81/12
August 1981
HASE84a
Hasegawa R.
A List Processing Oriented Data Flow Machine Architecture
Electrical Communication Lab, Nippon Telegraph and Telephone
Public Corporation
HATT83a
Hattori T. & Yokoi T.
Basic Constructs of the SIM Operating System
( Also in New Generation Computing, Vol 1, No 1, 1983 )
ICOT Research Center, Technical Memorandum TM-0009
June 1986
HATT84a *
Hattori T. & Tsuji J. & Yokoi T.
SIMPOS: An Operating System for a Personal Prolog Machine PSI
ICOT Research Center, Technical Report TR-055
April 1984
HATT84b *
Hattori T. & Yokoi T.
The Concepts and Facilities of SIMPOS Supervisor
ICOT Research Center, Technical Report TR-056
April 1984
HATT84c
Hattori T. & Yokoi T.
The Concepts and Facilities of SIMPOS File System
ICOT Research Center, Technical Report TR-059
April 1984
HAYE84a
Hayes P.J.
Entity-Oriented Parsing
CMU-CS-84-138
Dept of Comp Sci, Carnegie-Mellon Univ.
9 June 1984
HEND76a *
Henderson P. & Morris J.M.
A Lazy Evaluator
Proceedings 3rd POPL Symposium, Atlanta Georgia, 1976, pp 95-103
HEND80a *
Henderson P.
Functional Programming: Application and Implementation
Prentice Hall 1980
HEND82a
Henderson P.
Purely Functional Operating Systems
in DARL82a
1982
HEND83a *
Henderson P. & Jones G.A. & Jones S.B.
The Lispkit Manual, vol 1 and vol 2 (sources)
Oxford University Programming Research Group
Technical Monograph PRG-32(i) and PRG-32(ii)
1983
HEND84a *
Henderson P.
Specifications and Programs
FPN-5
Dept of Comp Sci, Univ. of Stirling
Paper presented at Centre for Software Reliability Workshop,
City University, April 1984
to be published in "Software; Requirements, Specification and Testing"
(ed Dr. T. Anderson) pub. Blackwell Scientific Publications
July 1984
HEND84b
Henderson P.
Some Distributed Systems
FPN-6
Dept of Comp Sci, Univ. of Stirling
July 1984
HEND84c *
Henderson P.
Process Combinators
FPN-7
Dept of Comp Sci, Univ of Stirling
August 1984
HEND84d *
Henderson P.
Communicating Functional Programs
FPN-8
Dept of Comp Sci, Univ of Stirling
September 1984
HEND84e *
Henderson P.
Me Too - A Language for Software Specification and Model Building -
Preliminary Report
FPN-9
First Draft : October 1984
Second Draft : December 1984
1984
HENN76a *
Hennessy M. & Ashcroft E.A.
The Semantics of Nondeterminism
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 478 - 493
Edinburgh University Press, 1976
HEWI73a
Hewitt C. et al.
Actor Induction and Meta-Evaluation
1st ACM Symposium on Principles of Programming Languages
1973
HEWI77a
Hewitt C.
Viewing Control Structures as Patterns of Passing Messages
AI Journal 8, no 3, pp 323-364, 1977
HEWI79a
Hewitt C.
Control Structure as Patterns of Passing Messages
Artificial Intelligence: An MIT Perspective, The MIT Press, pp 433-465, 1979
HEWI80a *
Hewitt C.
The Apiary Network Architecture for Knowledgeable Systems
Proc. 1980 LISP Conf. p107-117
HIKI83a
Hikita T.
Average Size of Turner's Translation to Combinator Program
ICOT Research Center, Technical Report TR-017
August 1983
HINDI84a
Hindin Harvey J.
Fifth-Generation Computing: Dedicated Software is The Key
Computer Design, Sept 1984, page 150
HINDL69a
Hindley R.
The Principal Type Scheme of an Object in Combinatory Logic
Trans. American Mathematical Society 146, pp 29-60
1969
HINDL83a *
Hindley R.
The Completeness Theorem For Typing Lambda-Terms
Theoretical Computer Science 22, pp 1-17
North Holland
January 1983
HIRA83a
Hirakawa H.
Chart Parsing in Concurrent Prolog
ICOT Research Center, Technical Report TR-007
May 1983
HOAR82a
Hoare C.A.R.
Structure of an Operating System
in BROY82a, pp 643-658
1982
HOAR85a
eds. Hoare C.A.R. & Shepherdson J.C.
Mathematical Logic and Programming Languages
Prentice Hall International Series in Computer Science,
1985.
First published in the Philosophical Transactions of the Royal Society,
Series A, Volume 312, 1984.
HOAR85b
Hoare C.A.R.
Programs are Predicates
in HOAR85a
1985
HOCK81a *
Hockney R.W. & Jesshope C.R.
Parallel Computers
Adam Hilger Ltd., Bristol
1981
HOFF82a *
Hoffmann C.M. & O'Donnell M.J.
Programming With Equations
ACM Transactions on Programming Languages and Systems, Vol 4, No 1
pp 83-112
January 1982
HOFF83a
Hoffmann C.M. & O'Donnell M.
Implementation of an interpreter for abstract equations
ACM Conference on Computer Science
1983
HOGG78a
Hogger C.J.
Program Synthesis in Predicate Logic
Proc. AISB/GI Conf. on Artif. Intell,
Hamburg, pp 18-20
1978
HOGG78b
Hogger C.J.
Goal Oriented Derivation of Logic Programs
Proc. MFCS Conf.,
Polish Academy of Sciences, Zakopane, pp 267-276
1978
HOGG81a
Hogger C.J.
Derivation of Logic Programs
J. Ass. Comput. Mach. 28, pp 372-422
1981
HOGG84a
Hogger C.J.
Introduction to Logic Programming
Academic Press
1984
HOLL80a
Holloway J. & Steele G.L.Jr. & Sussman G.J. & Bell A.
The Scheme-79 Chip
AI Memo 559, MIT Lab, Cambridge, 1980
HOLL80b
Holloway J. & Steele G. & Sussman G.J. & Bell A.
The Scheme 79 Chip
Proceedings LISP Conference, Stanford, 1980
HOLT86a *
Holt N.
Parallel Processing For Fifth Generation Systems
in BCS86a
1986
HOMM80a *
Hommes F. & Kluge W. & Schlutter H.
A Reduction Machine Architecture and Expression Oriented Editing
GMD ISF 80.04
1980
HOPK79a
Hopkins R.P. et al
A Computer Supporting Data Flow, Control Flow and Updatable Memory
Computing Laboratory, Univ of Newcastle upon Tyne
Tech Rep 144
1979
HORA85a *
Horacek H.
Semantic/Pragmatic Representation Language
Forschungsstelle fur Informationswissenschaft und Kunstliche Intelligenz
Universitat Hamburg
LOKI Report NLI - 2.1
December 1985
HSIA83a *
Hsiang J. & Dershowitz N.
Rewrite Methods For Clausal and Non-Clausal Theorem Proving
10th EATCS International Colloquium on Automata, Languages and Programming
pp 331-346
1983
HSIA84a
Hsiao D.K.
Advanced Database Machine Architectures
Prentice Hall 1984
HUD81a
Hudak P.
Call-Graph Reclamation: An Alternative Storage Reclamation Scheme
AMPS Technical Memorandum #4
August 1981
HUD81b
Hudak P.
Real-Time Mark-Scan Garbage Collection on a Distributed Applicative
Processing System
AMPS Technical Memorandum #5
October 1981
HUD84a
Hudak P. & Kranz D.
A Combinator Based Compiler For a Functional Language
11th Symposium on Principles of Programming Languages
pp 122-132
1984
HUDA84b
Hudak P. & Keller R.M.
Garbage Collection and Task Deletion in Distributed Applicative
Processing System
Proc. Conf. on LISP and Functional Programming, ACM,
August 1984
HUDA84c *
Hudak P.
ALFL Reference Manual and Programmers Guide
Dept of Computer Science, University of Yale, Technical Report YALEU/DCS/TR-322
Second Edition
October 1984
HUDA84d *
Hudak P.
Distributed Applicative Processing Systems : Project Goals, Motivation,
and Status Report
Dept of Computer Science, University of Yale, Technical Report YALEU/DCS/TR-317
May 1, 1984
HUDA85a *
Hudak P. & Goldberg B.
Distributed Execution of Functional Programs Using Serial Combinators
IEEE Transactions on Computers, Vol c-34, no 10, pp 881-891
October 1985
HUDA85b *
Hudak P. & Guzman J.C.
A Proof-Stream Semantics for Lazy Narrowing
Dept of Computer Science, University of Yale, Research Report YALEU/DCS/RR-446
December 1985
HUDA85c *
Hudak P. & Young J.
Higher-Order Strictness Analysis in Untyped Lambda Calculus
Dept of Comp Sci, Univ of Yale
October 1985
HUDA85d *
Hudak P. & Smith L.
Para-Functional Programming: A Paradigm for Programming Multiprocessor Systems
Dept of Comp Sci, Univ of Yale
October 1985
HUDA85e *
Hudak P.
Functional Programming on Multiprocessor Architectures
Dept of Comp Sci, Univ of Yale, Research Report YALEU/DCS/RR-447
December 1985
HUET73a
Huet G.P.
The Undecidability of Unification in Third Order Logic
Information and Control 22, pp 257-267
1973
HUET75a
Huet G.P.
Unification in the Typed Lambda Calculus
Proc. Symposium on the Lambda Calculus and Computer Science Theory,
Springer Verlag, LNCS 37, pp 192-212
1975
HUET80a
Huet G.P. & Oppen D.
Equations and Rewrite Rules: a Survey
Report CSL-111, SRI International
1980
HUGH82a *
Hughes R.J.M.
Super-Combinators: A New Implementation Method for Applicative Languages
Proc. ACM Symposium on LISP and Functional Languages (Aug 1982) p1-10
HUGH82b *
Hughes R.J.M.
Graph Reduction with Super-Combinators
Oxford University Programming Research Group Technical Monograph PRG-28
June 1982
HUGH83a *
Hughes R.J.M.
The Design and Implementation of Programming Languages
Oxford University Programming Research Group Technical Monograph PRG-40
(published as monograph September 1984)
July 1983
HUGH84a *
Hughes R.J.M.
Reference Counting with Circular Structures in Virtual Memory Applicative
Systems
Programming Research Group, Oxford University 1984
HUGH84b
Hughes R.J.M.
Parallel Functional Programs Use Less Space
Programming Research Group, Oxford University
1984
HUGH84c
Hughes G.E. & Cresswell M.J.
A Companion to Modal Logic
Methuen 1984
HUTC86a *
Hutchinson A.
A Data Structure and Algorithm for a Self-Augmenting Heuristic Program
Computer Journal, Vol 29, No 2, pp 135-150
April 1986
HWAN84a
Hwang K.
Computer Architecture and Parallel Processing
McGraw Hill 1984
------------------------------
End of AIList Digest
********************
∂28-Apr-86 1309 LAWS@SRI-AI.ARPA AIList Digest V4 #104
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 Apr 86 13:09:01 PDT
Date: Mon 28 Apr 1986 09:57-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #104
To: AIList@SRI-AI
AIList Digest Monday, 28 Apr 1986 Volume 4 : Issue 104
Today's Topics:
Seminars - Learning Representation by Backpropagation (GTE) &
Distributed Object-Oriented Programming (CMU) &
Prolog and Geometry (CSLI) &
Inferring Domain Plans in Question Answering (UPenn) &
Possible Worlds Planning (UPenn) &
Parallel Algorithms for Term Matching (MIT),
Conference - 19TH Hawaii Int. Conf. on Systems Sciences &
1st Australian AI Congress (Extended Deadline)
----------------------------------------------------------------------
Date: Thu, 24 Apr 86 10:08:52 EST
From: Bernard Silver <SILVER@AI.AI.MIT.EDU>
Subject: Seminar - Learning Representation by Backpropagation (GTE)
GTE Laboratories Inc
Machine Learning Seminar Series
Speaker: David E. Rumelhart
Institute of Cognitive Science
and
University of California San Diego
Title: Learning Representation by Backpropagation
Date: Monday April 28 9am
Place: GTE Laboratories
40 Sylvan Rd
Waltham MA 02254
Recent work will be presented on using the backpropagation learning
procedure for developing internal representations in parallel
distributed processing systems. The talk will include a brief
introduction to the backpropagation learning procedure followed by a
report on a number of new applications of the procedure to storage of
semantic information and the discovery of new features.
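For readers unfamiliar with the procedure, the gradient-descent core of
backpropagation can be sketched in a few lines. This is an illustrative toy
only; the network size, learning rate, and all names are mine, not taken from
the talk:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """A 2-2-1 feedforward net trained by gradient descent on squared error."""
    def __init__(self, rng):
        self.w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(2)]

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        y = sigmoid(sum(w * hj for w, hj in zip(self.w2, h)))
        return h, y

    def loss(self, x, target):
        _, y = self.forward(x)
        return 0.5 * (y - target) ** 2

    def train_step(self, x, target, lr=0.5):
        h, y = self.forward(x)
        # Backward pass: error signals computed via the chain rule.
        dy = (y - target) * y * (1 - y)                               # output unit
        dh = [dy * self.w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden units
        for j in range(2):
            self.w2[j] -= lr * dy * h[j]
            for i in range(2):
                self.w1[j][i] -= lr * dh[j] * x[i]

net = TinyNet(random.Random(0))
before = net.loss([1.0, 0.0], 1.0)
for _ in range(200):
    net.train_step([1.0, 0.0], 1.0)
assert net.loss([1.0, 0.0], 1.0) < before   # error drops with training
```

The hidden-unit activations h are the "internal representations" of the
abstract: they are not specified by the programmer but emerge from training.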
Visitors welcome: Contact Oliver Selfridge (617) 466-2855
or Bernard Silver (617) 466-2663
------------------------------
Date: 23 Apr 86 11:49:56 EST
From: David.Anderson@SPICE.CS.CMU.EDU
Subject: Seminar - Distributed Object-Oriented Programming (CMU)
Thesis Proposal:
Object Oriented Programming for Distributed Systems
David B. Anderson
Computer Science Department
Carnegie-Mellon University
dba@k.cs.cmu.edu
28 April 1986
3:30 pm
WeH 5409
ABSTRACT
Object oriented programming has often been advocated for a variety of
programming tasks, particularly interactive, graphical applications and
window managers. Software engineers are attracted to this
programming methodology because of the modularity, data abstraction and
information hiding that it promotes. On the other hand, object oriented
techniques have not generally been used in building distributed
systems and applications.
The difficulty in using object oriented programming techniques for
implementing distributed applications lies in the requirements that
object oriented languages and systems place on their runtime
environment. For example, the remote procedure call mechanisms
typically used in building distributed applications must be replaced with
a mechanism for remote method invocation. This means that a static
remote procedure call stub generator, such as Matchmaker, must be replaced
with a mechanism for dynamically locating the correct method to call
based on the runtime types of objects. Furthermore, mechanisms are needed
to allow objects, classes and methods to be created and destroyed as
the system is running. Other difficulties and issues that must be
addressed include the naming and scope of objects, garbage collection,
error recovery and protection.
The proposed dissertation research will develop a solution to these
problems in the form of an object manager for distributed systems.
This proposal looks at these issues in some detail, and discusses the
design of an object manager to meet these requirements. A prototype
system is planned, and will be used to implement a distributed,
object oriented user interface environment.
------------------------------
Date: Fri 25 Apr 86 19:31:48-PST
From: Fred Lakin <LAKIN@SU-CSLI.ARPA>
Subject: Seminar - Prolog and Geometry (CSLI)
Pixels and Predicates meeting: note ==> TUESDAY <==
PROLOG AND GEOMETRY
Who: Randolph Franklin, UC at Berkeley
wrf@degas.berkeley.edu
Where: CSLI trailers
When: 1:00pm - TUESDAY, April 29, 1986
Abstract:
The Prolog language is a useful tool for geometric and graphics
implementations because its primitives, such as unification, match
the requirements of many geometric algorithms. We have implemented
several problems in Prolog, including a subset of the Graphics Kernel
Standard, convex hull finding, planar graph traversal, recognizing
groupings of objects, and boolean combinations of polygons using
multiple precision rational numbers. Certain paradigms, or standard
forms, of geometric programming in Prolog are becoming evident. They
include applying a function to every element of a set, executing a
procedure so long as a certain geometric pattern exists, and using
unification to propagate a transitive function. Certain strengths and
weaknesses of Prolog for these applications are now apparent.
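One of the problems the abstract mentions, convex hull finding, is compact
enough to sketch outside Prolog as well. The following Python version uses
Andrew's monotone-chain method; the function names are mine and this is not
the speaker's implementation, only an illustration of the problem:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the convex hull in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

# The interior point (1, 1) is not part of the hull of the square's corners.
assert convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]) == \
       [(0, 0), (2, 0), (2, 2), (0, 2)]
```

A Prolog version would follow the "apply a function to every element of a set"
paradigm named in the abstract, with the pop loop expressed as recursion.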
------------------------------
Date: Sun, 27 Apr 86 21:36 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Inferring Domain Plans in Question Answering (UPenn)
Forwarded From: Bonnie Webber <Bonnie@UPenn> on Sun 27 Apr 1986 at 9:40
INFERRING DOMAIN PLANS IN QUESTION-ANSWERING
Martha E. Pollack
Thesis Defense
Monday, April 28, 1986
Noon-2pm, 216 Moore
The importance of plan inference (PI) in models of conversation has
been widely noted in the computational-linguistics literature, and its
incorporation in question-answering systems has enabled a range of
cooperative behaviors. The PI process in each of these systems,
however, has assumed that the questioner (Q) whose plan is being
inferred and the respondent (R) who is drawing the inference have
identical beliefs about the actions in the domain. I demonstrate that
this assumption is too strong, and often results in failure not only
of the PI process, but also of the communicative process that PI is
meant to support. In particular, it precludes the principled
generation of appropriate responses to queries that arise from invalid
plans. I present a model of PI in conversation that distinguishes
between the beliefs of the questioner and the beliefs of the
respondent. This model rests on an account of plans as mental
phenomena: "having a plan" is analyzed as having a particular
configuration of beliefs and intentions. Judgements that a plan is
invalid are associated with particular discrepancies between the
beliefs that R ascribes to Q, when R believes Q has some particular
plan, and the beliefs R herself holds. I define several types of
invalidities from which a plan may suffer, relating each to a
particular type of belief discrepancy, and show that the types of any
invalidities judged to be present in the plan underlying a query can
affect the content of a cooperative response. The PI model has been
implemented in SPIRIT -- a System for Plan Inference that Reasons
about Invalidities Too -- which reasons about plans underlying queries
in the domain of computer mail.
Advisor: Bonnie Webber
Committee: Aravind Joshi, chair
Tim Finin
Barbara Grosz
------------------------------
Date: Sun, 27 Apr 86 23:35 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Possible Worlds Planning (UPenn)
Forwarded From: Bonnie Webber <Bonnie@UPenn> on Thu 24 Apr 1986 at 14:24
In addition to his talk on Tuesday afternoon, 29 April, on Multi-Valued
Logic, Matt Ginsberg will also give a talk on Wednesday morning at
10:30 on Possible Worlds Planning.
Room to be announced.
POSSIBLE WORLDS PLANNING
Matt Ginsberg
Stanford University
The size of the search space is perhaps the most intractable
of all of the problems facing a general-purpose planner. Some
planning methods (means-ends analysis being typical) address this
problem by encouraging the system designer to give the planner domain-specific
information (perhaps in the form of a difference table) to help govern
this search.
This paper presents a domain-independent approach to this problem
based on the examination of possible worlds in which the planning
goal has been achieved. Although a weak method, the ideas presented
lead to considerable savings in many examples; in addition, the natural
implementation of this approach has the attractive property that
incremental efforts in controlling the search provide incremental
improvements in performance. This is in contrast to many other
approaches to the control of search or inference, which may require
large expenditures of effort before any benefits are realized.
------------------------------
Date: Thu 24 Apr 86 14:29:00-EST
From: Lisa F. Melcher <LISA@XX.LCS.MIT.EDU>
Subject: Seminar - Parallel Algorithms for Term Matching (MIT)
DATE: Thursday, May 1, 1986
TIME: 3:45 - Refreshments
4:00 - Lecture
PLACE: NE43 - 512A
"PARALLEL ALGORITHMS FOR TERM MATCHING"
CYNTHIA DWORK
IBM Almaden Research Center
San Jose, CA
Unification of terms is a well known problem with applications to a variety
of symbolic computation problems. Two terms s and t, involving function
symbols and variables, are unifiable if there is a substitution for the
variables which makes s and t syntactically identical. For example, f(x,x)
and f(g(y),g(g(c))) are unified by substituting g(c) for y and g(g(c)) for
x. A special case of unification is term matching where one of the terms
contains no variables. Previous work on parallel algorithms for unification
by Dwork, Kanellakis and Mitchell (DKM) showed that unification is P-complete
in general, even if terms are represented as trees so that common
subexpressions must be repeated. However, DKM give an NC^2 algorithm for term
matching using M(n^2) processors, where M(m) is the number of operations needed
to multiply m-by-m matrices. This algorithm allows a compact dag
representation of terms. These results have been tightened in two ways.
First, the processor bound for term matching of dags has been improved to
M(n), while retaining the O(log^2 n) running time, using a randomized
algorithm. There is also some evidence that improving the processor bound
further will be difficult, since there is an efficient parallel reduction from
the graph accessibility problem (GAP) to the term matching problem for dags,
so that any improvement in the processor bound for term matching (say, to n^2)
would imply the same for GAP. The second improvement is a sharper
P-completeness result which shows that unification of tree terms is
P-complete even for linear terms, where each variable can appear at most once
in each term.
This is joint work with Paris Kanellakis and Larry Stockmeyer.
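Though the talk concerns the parallel complexity of term matching, the
operation itself is simple to state sequentially. Below is a minimal sketch in
Python, mirroring the abstract's example; the tuple representation of terms
(head symbol first, variables as strings beginning with "?") and the function
name are my own conventions:

```python
def match(pattern, subject, subst=None):
    """One-sided unification: bind the variables in `pattern` (strings
    starting with '?') so that it becomes identical to the variable-free
    `subject`. Returns the substitution dict, or None on failure."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):   # a variable
        if pattern in subst:                                   # already bound:
            return subst if subst[pattern] == subject else None  # must agree
        subst[pattern] = subject
        return subst
    if isinstance(pattern, tuple) and isinstance(subject, tuple):
        if pattern[0] != subject[0] or len(pattern) != len(subject):
            return None                # different function symbols or arity
        for p, s in zip(pattern[1:], subject[1:]):
            if match(p, s, subst) is None:
                return None
        return subst
    return subst if pattern == subject else None               # constants

# f(x, x) matches f(g(g(c)), g(g(c))) with x bound to g(g(c)) ...
assert match(('f', '?x', '?x'),
             ('f', ('g', ('g', 'c')), ('g', ('g', 'c')))) == \
       {'?x': ('g', ('g', 'c'))}
# ... but fails against f(a, b), since x cannot be both a and b.
assert match(('f', '?x', '?x'), ('f', 'a', 'b')) is None
```

Full unification, where both terms may contain variables, additionally needs
an occurs check and substitution composition, which this sketch omits.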
Shafi Goldwasser
Host
------------------------------
Date: 25 Apr 1986 10:09:56-EST
From: Vasant.Dhar@ISL1.RI.CMU.EDU
Subject: Conference - 19TH Hawaii Int. Conf. on Systems Sciences
CALL FOR PAPERS: 19TH HAWAII INTERNATIONAL CONFERENCE ON SYSTEMS SCIENCES
(HICSS), Hawaii, January 1987.
Papers are invited for the KNOWLEDGE-BASED/DECISION-SUPPORT SYSTEMS track.
The following are representative areas:
1. Knowledge-Based approaches to large systems development
2. Knowledge-Based support systems in business organizations
3. Knowledge Engineering in Management Science Applications
4. Knowledge Engineering in Database Management/Intelligent Retrieval Systems
5. Decision Support Systems for Group decision making
6. Model Management in Decision Support Systems
7. User Interfaces in Decision Support Systems
Papers falling into area 1 above should be sent to:
Vasant Dhar
Department of Information Systems
New York University
90 Trinity Place
New York, NY 10006.
Papers in all other areas should be sent to Edward Stohr at the same
address. The deadline for submission is July 7 1986. Authors will be notified
of acceptance before September 8, 1986. Camera ready copies are due on
October 20, 1986. The conference is on the island of Oahu, January 6-9, 1987.
------------------------------
Date: 23 Apr 86 13:26:07 +1000 (Wed)
From: "ERIC Y.H. TSUI" <decvax!mulga!aragorn.oz!eric@decwrl.DEC.COM>
Subject: Conference - 1st Australian AI Congress (Extended Deadline)
1
11 st
111 AUSTRALIAN
11 ARTIFICIAL
11 INTELLIGENCE
11 CONGRESS
11
1111 Melbourne, November 18-20, 1986
CALL FOR PAPERS
========================================================================
DEADLINE EXTENDED...DEADLINE EXTENDED...DEADLINE EXTENDED...DEADLINE EXT
========================================================================
Abstract (300 words) of papers to be selected for presentation
to the 1st Australian Artificial Intelligence Congress are now invited.
The three-part program comprises:
i) AI in Education
- Intelligent tutors
- Computer-managed learning
- Course developers environment
- Learning models
- Course authoring software
ii) Expert System Applications
- Deductive databases
- Conceptual schema
- Expert system shells (applications and limitations)
- Interactive knowledge base systems
- Knowledge engineering environments
- Automated knowledge acquisition
iii) Office Knowledge Bases
- Document classification and retrieval
- Publishing systems
- Knowledge source systems
- Decision support systems
- Office information systems
Tutorial presenters are also sought. Specialists are required
in the areas of:
- CommonLoops
- Natural language processing
- Inference engines
- Building knowledge databases
- Search strategies
- Heuristics for AI solving
Reply to:
ACSnet address: brian!aragorn.oz
CSNET address: brian@aragorn.oz
UUCP address: seismo!munnari!aragorn.oz!brian
decvax!mulga!aragorn.oz!brian
ARPA address: munnari!aragorn.oz!brian@seismo.arpa
decvax!mulga!aragorn.oz!brian@Berkeley
or post to: Dr. B. Garner, Division of Computing and Mathematics,
Deakin University, Victoria 3217, Australia.
NEW DEADLINES: All submissions by May 31, 1986. Notification by July 14.
↑↑↑ ↑↑↑↑↑↑ ↑↑↑↑↑↑↑
||| |||||| |||||||
Inquiries: Stephen Moore, Director, 1AAIC86, tel: (02)439-5133.
Eric Tsui eric@aragorn.oz
------------------------------
End of AIList Digest
********************
∂29-Apr-86 0115 LAWS@SRI-AI.ARPA AIList Digest V4 #105
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Apr 86 01:15:28 PDT
Date: Mon 28 Apr 1986 22:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #105
To: AIList@SRI-AI
AIList Digest Tuesday, 29 Apr 1986 Volume 4 : Issue 105
Today's Topics:
Queries - Rapid Prototyping and Exploratory Programming &
Animal Behavior Simulation & Expert System in Simulation &
Prolog from Simtel20 & Survey of IBM-PC Expert Systems,
AI Tools - Expert System Software for MS-DOS,
Conference - Long Beach AI Conference,
Reports - Sources,
Networks - New Net Address Syntax,
Law & Linguistics - Trademarks
----------------------------------------------------------------------
Date: 24 Apr 86 13:29:29 GMT
From: ucdavis!lll-lcc!lll-crg!caip!seismo!umcp-cs!aplcen!jhunix!ins_amrh
@ucbvax.berkeley.edu (Martin R. Hall)
Subject: Rapid Prototyping & Exploratory Programming
The division here is building a Software Engineering Practices Manual,
and has been debating how to relate standards for conventional software
to Knowledge Based and other AI systems.
Could anyone point us to articles that directly address rapid prototyping
and exploratory programming, that is, pre-design and mid-design
experimental programming as a methodology for building applied AI systems?
Thanks!
-Marty Hall
Arpa (preferred) hall@hopkins
CSnet hall.hopkins@csnet-relay
UUCP seismo!umcp-cs!jhunix!ins_amrh
allegra!hopkins!jhunix!ins_amrh
AT&T (301) 682-0917
------------------------------
Date: 24 Apr 86 21:50:09 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!ll-xn!mit-amt!bc@ucbvax.berkeley.edu
(William H Coderre)
Subject: Query: Animal Behavior Simulation using rules
I am doing my bachelor's thesis here at MIT on simulating animal behavior
using rule-driven systems.
The aim is to develop a package that grade- and high-school students will
use to investigate behavior, similar to the commercial packages RobotWars,
ChipWits, and Rocky's Boots.
Does anyone care to recommend references that might be helpful?
Please reply direct to me and I will post a complete list as the demand
warrants.
"Biology of purpose keeps my nose above the surface"....................bc
------------------------------
Date: Sun, 27 Apr 86 14:20:45 est
From: munnari!csadfa.cs.adfa.oz!gyp@seismo.CSS.GOV (Patrick Tang)
Subject: Expert System in Simulation Model
Has anyone out there come across literature which describes the design
and implementation of "Artificial Intelligence" or "Expert System"
techniques in military simulation models, in particular Army wargaming
simulation?
I would appreciate it very much if you could let me know where I could
get hold of such material. (Unclassified only, of course!)
Thanks a million.
Tang Guan Yaw/PatricK ISD: +61 62 68 8170
Dept. Computer Science STD: (062) 68 8170
University College ACSNET: gyp@csadfa.oz
Uni. New South Wales UUCP: ...!seismo!munnari!csadfa.oz!gyp or
Aust. Defence Force Academy ...!{decvax,pesnta,vax135}!mulga!csadfa.oz!gyp
Canberra. ACT. 2600. ARPA: gyp%csadfa.oz@SEISMO.ARPA
AUSTRALIA CSNET: gyp@csadfa.oz
------------------------------
Date: 24 Apr 86 20:26:27 GMT
From: cbosgd!oucs!joe@ucbvax.berkeley.edu (Joseph Judge)
Subject: Prolog from Simtel20
I remember seeing a posting recently about prolog available from
Simtel20 thru an FTP. As I cannot FTP, is it possible to get this prolog
from a kind soul out there in NetLandia ??
From the friendly systems administrator,
Joseph Judge
ihnp4!{amc1,cbdkc1,cbosgd,cuuxb,}!oucs!joe
Nur mit dir.
------------------------------
Date: Thu, 24 Apr 86 17:42:01+0900
From: Sangki Han <skhan%cskaist%kaist.csnet@CSNET-RELAY.ARPA>
Subject: Survey of IBM-PC Expert Systems
We are searching for the commercial expert systems on IBM-PCs.
We want to know the application areas, prices, and vendors. If you send
responses, I'll accumulate and repost them on net.
Thanks in advance.
Sangki Han
Department of Computer Science
KAIST, P. O. Box, 150
Chongryang, Seoul 131
Korea
skhan%cskaist@kaist (Csnet)
..!seismo!kaist!cskaist!skhan (uucp)
------------------------------
Date: Fri, 25 Apr 86 05:08:15 EST
From: ihnp4!lzaz!psc@seismo.CSS.GOV
Subject: Expert system software for MS-DOS
[Forwarded from the IBMPC bboard by Paul Fishwick <Fishwick@UPenn>
and Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>]
Here's the promised list of expert systems for MS-DOS based personal
computers. The language name in brackets usually indicates what
languages you can use for special-purpose routines that work with
the system. (Sometimes, it may just be the language the expert system
was written in.) I threw in Prolog processors for grins (I suspect
Borland's going to make that a much bigger field!)
The names, addresses, phone numbers, and especially prices are not
guaranteed to be free from typos, line noise, or obsolescence. I have
no experience or further information on any of these packages; don't
call me, call the company. On the other hand, if *you* have used any
of these systems, please drop me a line; I'll be happy to summarize
and repost. I'd also like to hear of any products I'd forgotten, or
any errata to my list.
Aion Development System: expert system, $7000
Aion
101 University Ave., 4th floor
Palo Alto, CA 94301
415-328-9595
Arity Expert System Development Package: expert system, $295
Arity Standard Prolog: AI language, $95
Arity Prolog Interpreter V4: AI language, $350
Arity Prolog Compiler & Interpreter V4: AI language, $795
Arity Corp
358 Baker Ave.
Concord, MA 01742
617-371-1243
OPS5+: expert system [C]
Artelligence, Inc.
14902 Preston Rd., suite 212-252
Dallas, TX 75240
214-437-0361
A.D.A Educational Prolog: AI language, $29.95
VML Prolog: AI language, $300
Automata Design Associates
1570 Arran Way
Dresher, PA 19025
215-646-4894
Turbo Prolog: AI language, $99.95
Borland International
4585 Scotts Valley Dr.
Scotts Valley, CA 95066
408-438-8400
Xsys: expert system [Lisp], $995
California Intelligence
912 Powell St. #8
San Francisco, CA 94108
415-391-4846
Prolog V: AI language, $69.95/$99.95
Chalcedony Software, Inc.
5580 La Jolla Blvd, Suite 126B
La Jolla, CA 92037
617-483-8513
ES/P Advisor: expert system [Prolog], $895
Prolog-1: AI language, $395
Prolog-2 Interpreter and Compiler: AI Language, $1895
Expert Systems International
1150 First Ave.
King of Prussia, PA 19406
215-337-2300
Xi: expert system, $795
Expertech
Expertech House, 172 Bath Rd.
Slough, Berks SLI 3XE, ENGLAND
0753-821321; USA, 415-367-6264 or 617-470-2267
Exsys 3.0: expert system [C], $395
Exsys Inc.
PO Box 75158, Contract Sta. 14
Albuquerque, NM 87194
505-836-6676
TIMM-PC: expert system [Fortran 77], $9500
General Research
7655 Old Spring House Rd.
McLean, VA 22102
703-893-5900
Expert Ease: expert system [UCSD Pascal], $695
Expert Edge: expert system, $795
Human Edge Software
2445 Faber Pl.
Palo Alto, CA 94303
CA: 800-824-7325, elsewhere: 800-624-5227
Knowol: expert system, $39.95
Intelligent Machines Co.
3813 N. 14th St.
Arlington, VA 22201
703-528-9136
KEE: expert system
IntelliCorp
1975 El Camino Real W.
Mountain View, CA 94040
415-965-5500
Experteach: expert system [Lisp, Prolog, Pascal, dBase II], $475
Intelliware, Inc.
4676 Admiralty Way, Suite 401
Marina del Rey, CA 90291
213-305-9391
Ex-Tran: expert system, $3000
Jeffrey Perrone & Associates
415-431-9562
KDS: expert system [assembler], $795 (development), $150 (playback)
KDS II: expert system, $945
KDS Corp.
934 Hunter Rd.
Wilmette, IL 60091
312-251-2621
Trouble Shooter: expert system, $250
Kepner-Tregoe
609-921-2806
Insight: expert system [Turbo Pascal], $95/$485
Level 5 Research
4980 S A1A
Melbourne Beach, FL 32751
(moved to 503 Fifth Ave., Suite 201, Indialantic, FL 32903?)
305-729-9046
Daisy: expert system [muLisp-85]
Lithp Systems BV
Meervalweg 72
1121 JP Landsmeer
The Netherlands
Micro-Prolog: AI language, $395
Logic Programming Associates
31 Crescent Drive
Milford, CT 06460
203-872-7988
MProlog: AI language, $725
Logicware, Inc.
5000 Birch St., West Tower, suite 3000
Newport Beach, CA 92660
416-665-0022
(70 Walnut St.
Wellesley, MA 02181
617-237-2254?)
Reveal: expert system
McDonnell Douglas
Knowledge Engineering Products Division
20705 Valley Green Dr.
Cupertino, CA 95014
408-446-7406
MicroExpert: expert system [Turbo Pascal/Apple Pascal], $49.95
McGraw-Hill
PO Box 400
Hightstown, NJ 08520
or 1221 Avenue of the Americas
New York, NY 10020
NY: 212-512-2999, elsewhere 800-628-0004
Guru: integrated software with expert system, $3000
Micro Data Base Systems
PO Box 248
Lafayette, IN 47902
317-463-2581
Expert: expert system [Forth], $100
Mountain View Press
PO Box 4656
Mountain View, CA 94040
415-961-4103
XLISP: AI language, $6 (disk 148)
Expert System of Steel: expert system, $6 (disk 268)
Esie: expert system, $6 (disk 398)
Prolog: AI language, $6 (disk 405)
PC-SIG
1030 E. Duane Ave, Suite J
Sunnyvale, CA 94086
408-730-9291; CA 800-235-6647, elsewhere 800-235-6646
OPS83: expert system [C]
Production Systems Technologies, Inc.
642 Gettysburg St.
Pittsburgh, PA 15206
412-362-3117
Micro-Prolog Professional: AI Language?, $395
Programming Logic Systems
203-877-7988
1st-Class: expert system, $20/$495
Programs in Motion, Inc.
10 Sycamore Rd.
Wayland, MA 01778
617-653-5093
Rulemaster: expert system, $995
Radian Corp.
8501 Mo-Pac Blvd.
PO Box 9948
Austin, TX 78766
512-454-4797
Small-X: expert system, $125/$225
RK Software
PO Box 2085
West Chester, PA 19380
215-436-4570
Savvy PC: expert system, $139
Savvy
505-265-1273
Knowledge Engineering System II: expert system [C], $4000
Software Architecture & Engineering
1500 Wilson Blvd., suite 800
Arlington, VA 22209
(703)276-7910
Wizdom: expert system, $1250/$2050
Software Intelligence Lab
1593 Locust Ave.
Bohemia, NY 11716
212-747-9066/516-589-1676
Xper: expert system, $95
Softway
415-397-4666
Prolog-86: AI language, $125
Solution Systems
335-P Washington St.
Norwell, MA 02061
617-659-1571
Microdyn: expert system, $300
Stochos
518-372-5426
M1: expert system [Prolog], $5000
KS-300: expert system
Teknowledge Inc.
525 University Ave.
Palo Alto, CA 94301
415-327-6640
Personal Consultant: expert systems [IQ Lisp], $950
Personal Consultant Plus: expert systems [IQ Lisp], $2950
Texas Instruments
PO Box 80963
Dallas, TX 75380-9063
800-527-3500
Class
Texpert Systems, Inc.
12607 Aste
Houston, TX 77065
713-469-4068
-Paul S. R. Chisholm, UUCP {ihnp4,cbosgd,pegasus,mtgzz}!lznv!psc
AT&T Mail !psrchisholm, Internet mtgzz!lznv!psc@topaz.rutgers.edu
The above opinions may not be shared by any telecomm company.
------------------------------
Date: Thu 24 Apr 86 17:09:32-CST
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Re: Long Beach AI Conference
The conference to which Girish Kumthekar alluded is presumably
AI 1986
AI & Advanced Computer Technology Conference & Exhibition
April 29 - May 1, 1986
Long Beach Convention Center
Long Beach, CA
There are sessions on Strategic Defense, Medicine, Office Automation, Printing/
Publishing, Expert Systems, Image Processing, Automated Guided Vehicles,
Knowledge Information Processing Systems, Microcomputers, Machine Translation,
The Investment Community, Engineering Design, Automated Manufacturing Systems,
Banking/Finance, Cognitive Modeling, Business, Aerospace, Speech Processing,
AI Languages, Graphics and User Interface, Expert System Development Systems,
Natural Language Interfaces and Training. There are three tutorials, *An
Executive Primer to AI*, *Understanding Expert Systems* and *Understanding
Natural Languages* (the first of which is on April 28). Finally, there is a
workshop on expert systems for manufacturing and process engineers, and about
75 exhibitors.
It's too late for pre-registration, of course, but additional information can
be obtained from
Tower Conference Management Co.
331 W. Wesley St.
Wheaton, IL 60187
(312) 668-8100 Telex: 350427
I have no connection with this conference, other than being on their mailing
list.
Dallas Webster
Burroughs Austin Research Center
------------------------------
Date: 25 Apr 86 10:11:14 +1000 (Fri)
From: "ERIC Y.H. TSUI" <decvax!mulga!aragorn.oz!eric@decwrl.DEC.COM>
Subject: Reply to Daniel Davison
In article <12199933765.28.DAVISON@SUMEX-AIM.ARPA> DAVISON@SUMEX-AIM.ARPA
(Daniel Davison) writes:
>
>There were several technical reports mentioned in a recent AIlist that I'd
>like to get...but I don't know how. Would some kind soul send me a note
To Louisiana State University (LSU):
cindy@lsu.csnet
decvax!ihnp4!cmucspt!avie
I have received quite a few reports from LSU (by post) using the above address.
Eric Tsui eric@aragorn.oz
[AIList ran an extensive list of report sources during the first
year, mostly taken from the SIGART Newsletter. I can send reprints
on request, but see the next message. -- KIL]
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Sources for Technical Reports
Here are the report sources Daniel Davison requested:
LSU
Requests for copies should be addressed to Cindy Hathaway, Technical Reports
Secretary, Computer Science Department, Louisiana State University,
Baton Rouge, Louisiana 70803; or cindy@lsu on CSNET.
CMU
Technical reports are available from
Information Services
The Robotics Institute
Carnegie-Mellon University
Pittsburgh, PA 15213.
or
Serviou@H.CS.CMU.EDU
Please direct such requests to me instead of AILIST.
[Lawrence Leff maintains a distribution service for abstracts and
reports. He is also the source of the BIB-formatted bibliographies
AIList carried a few weeks ago. -- KIL]
------------------------------
Date: Thu 24 Apr 86 09:12:47-PST
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: Net addresses
Eswaran, [AIList, AI-ED] :
A lot of us on the arpanet now seem to have at least two equivalent addresses,
either ending in .arpa or .edu. The latter form, .edu, is the newer one.
In your message to AIList you listed your address as
eswaran@h.cs.cmu.edu.arpa.
[It is improper and ineffective to use both, and any mailer which
constructs such a path name should be fixed. I believe the CMU
mailer that had this problem was fixed a couple of months ago. -- KIL]
Without thinking, I typed the address in like that in the TO field of
my mail program which rejected the host name. It wasn't immediately
obvious to me what was wrong, so I thought I'd bring this type of
problem to everyone's attention because it's bound to come up again.
mark
------------------------------
Date: 24 Apr 1986 15:06:36-EST
From: kushnier@NADC
Subject: Re: Compuscan
Dear John,
With reference to the verb "to XEROX", I will remain forever humbled under
the mighty name of your company, and will never again be so bold as to use
it as a part of speech. By the way, the typed originals were COPIED once on
a XEROX Model 1048. Would you like to comment on why the Compuscan Page
Reader had so much trouble?
Ron Kushnier
kushnier@nadc.arpa
------------------------------
Date: 24 APR 86 15:07-N
From: DESMEDT%HNYKUN52.BITNET@WISCVM.WISC.EDU
Subject: do not "xerox" this message
Reply to the person from Xerox objecting to the use of the verb "xerox":
In your recent contribution to Ailist-Digest, you object to the use of
the word "xerox" as a verb (and in lowercase). Your argument seems to be
that Xerox Corp. makes more office equipment than just copiers, and that
the verb "xerox" could therefore be ambiguous. Moreover, if "xerox"
stands for copying on just any copier, the word doesn't quite cover its
original meaning.
Although I tend to avoid the use of trade marks for generic concepts, I
would like to point out that one can take different stands with respect
to linguistic rules vs. linguistic creativity, and you might be arguing
from the wrong stand.
One view of language is "prescriptive linguistics": to see the rules as
laws that one individual or group of individuals tries to impose on
others.
Another view is "descriptive linguistics": to see the rules as something
that defines what is generally agreed upon by a linguistic community.
As an individual, you seem to defend a rule that from the original
meaning of "photocopy on Xerox equipment", the verb "xerox" can be
extended only to "process on any Xerox equipment". You're taking a
prescriptive viewpoint there, and it's not going to work, because the
linguistic community has already decided long ago that "xerox" means
"photocopy on any copier".
Once the majority of a linguistic community has agreed upon a change, it
is usually hard or impossible to undo that change, even if your arguments
against it are well motivated. Therefore I want to discourage you from
taking a prescriptive viewpoint, and advise you to go by the majority, or
at least, not to pretend you don't understand what "xerox" means for the
majority.
The use of the word "xerox" is not an isolated case. Some more examples?
In Belgium, most people use the word "bic" to mean "any ballpoint pen".
The fact that Bic now also makes disposable razors and lighters does not
affect this use at all. If I ask any Belgian "do you have a bic I can
use?" nobody will think I mean a razor or a lighter. The same holds for
"kodak", which means "any camera". If I ask any Belgian whether he owns a
kodak and he doesn't own a camera, he will probably answer "no" even if
he has a Kodak copier back in his office (on which his secretary is
xeroxing his documents).
The use of a trade name for a more generic concept is a particular case
of metonymy, and instead of crusading against it, the subscribers of this
newsletter would probably be more interested in a computer simulation of
metonymic processes to see whether it is able to come up with "xerox" in
the way criticized by you.
Koenraad De Smedt
Psychological Lab.
University of Nijmegen
The Netherlands
[Personal preferences and customary usage aside, the laws relating
to trademark usage are fairly clear; Xerox must insist on proper
use of their name or they will lose the right to use it. -- KIL]
------------------------------
End of AIList Digest
********************
∂29-Apr-86 0357 LAWS@SRI-AI.ARPA AIList Digest V4 #106
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 29 Apr 86 03:56:55 PDT
Date: Mon 28 Apr 1986 22:59-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #106
To: AIList@SRI-AI
AIList Digest Tuesday, 29 Apr 1986 Volume 4 : Issue 106
Today's Topics:
AI Tools - Common LISP Coding Standards & String Reduction &
PARLOG for Unix,
Representation - Shape,
Philosophy - Computer Consciousness
----------------------------------------------------------------------
Date: Thu 24 Apr 86 07:50:46-PST
From: George Cole <GCOLE@su-sushi.arpa>
Subject: Hooks, Rings, Shapes & Background Processes
The knowledge about the individual items and their interactions must contain
the knowledge about their common environment, either as an unstated assumption
or perhaps as common knowledge. A hook and ring will not hold together (even
if they start together) unless the ring is "hanging" from the hook, because
of gravity or magnetism or a strong wind blowing past in the correct direction.
Nor will it stay hanging if the balance of forces (gravity down, wind blowing
past the plastic hook up) is upset beyond the stable limit. (If gravity is
increased 100-fold, will the tensile strength of the hook suffice to support
the ring?) And for a last concern, is there any motion of the hook or ring
that will cause the degradation of either, such as friction wearing away at
the material and thus lowering the tensile capacity?
These environmental and process contextual aspects do not seem to
yield easily to expression in a stable or fixed-point language.
George S. Cole
GCole@SU-SUSHI.ARPA
------------------------------
Date: Fri, 25 Apr 86 12:37:52 EST
From: mcguire@harvard.HARVARD.EDU (Hugh McGuire)
Subject: Re: Common LISP coding standards
Perhaps Marty Hall was seeking some guide to LISP style, similar to
Ledgard's (et al.'s) *Pascal with Style*; I certainly would find such
useful, and perhaps others would also. Steele's (et al.'s) *Common
LISP*, while it completely specifies the language, mentions style only
occasionally. For example, consider the following simple questions:
Under Lexical Scoping, how much should a programmer use variables with
identical names? Should one use "#'" (the abbreviation for special
form FUNCTION) whenever possible? When is a short COND-construct more
appropriate than an IF-construct? How should one decide between
iteration and recursion? Will asterisked global variables or constants
(e.g. "*visible-windows*") be confused with the system's asterisked
symbols?
--Hugh
(mcguire@harvard.HARVARD.EDU)
------------------------------
Date: 24 Apr 86 16:03:06 GMT
From: hplabs!hao!noao!terak!doug@ucbvax.berkeley.edu (Doug Pardee)
Subject: Re: String reduction
> TRAC is pretty easy to implement; I have an incomplete version written in
> C that I did some years back. I also have a paper on TRAC which is probably
> long out of print by now.
If anyone cares, TRAC stands for Text Reckoner And Compiler, and is
trademarked.
It is discussed at some length in Peter Wegner's book, "Programming
Languages, Information Structures, and Machine Organization" (the title
may be off a bit, the book is at home and it's hard to remember such a
lengthy title :-)
Stanford used to have a version they called WYMPI. The main differences
were the use of "*" instead of "#" and -- more significantly -- they
permitted string (macro) names to be specified as the operator, rather
than requiring as TRAC does that strings be specifically called with
the "cl" operator. In other words, you could say *(macro,...) instead
of #(cl,macro,...). Wegner leaves it as an exercise to the reader to
show why the "cl" was an important architectural feature of TRAC which
shouldn't have been tampered with. Something about trying to make
#(cl,macro,...) == #(macro,...) and at the same time making
##(cl,macro,...) == ##(macro,...)
--
Doug Pardee -- CalComp -- {elrond,seismo,decvax,ihnp4}!terak!doug
------------------------------
Date: 24 Apr 86 12:32:50 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!mcvax!ukc!ptb@ucbvax.berkeley.edu
(P.T.Breuer)
Subject: Re: String reduction
In article <1031@eagle.ukc.ac.uk> sjl@ukc.ac.uk (S.J.Leviseur) writes:
>Does anybody have any references to articles on string reduction
>as a reduction technique for applicative languages (or anything
>else)? They seem to be almost impossible to find! Anything welcome.
John Horton Conway (the Prince of Games, memorably Life) of Cambridge
University (UK) Pure Maths. Dept. some years ago invented a computing
language that seems to me to proceed by Markovian string reduction.
It is extremely sneaky at recognising substrings for substitution -
obviously the major cost in any such approach - and does this task
efficiently. The trick is to make up your strings as the product of
integer primes instead of by alphanumeric concatenation. The production
rules of a program script consist of single fractions. To apply the
rules to an incoming 'string' you choose the first fraction in the script
that gives an integer result on multiplication with the integer 'string'
and take the result as the outgoing string, then go to the top of the
script with the new string and start again. The indices of prime powers
in the string serve as memory cells 'x'. The denominators of the fractions
serve as 'if x> ..' statements, with the numerators as 'then x=x+-..'
components. J.H.C.'s (the middle initial is to help him remain incognito)
interest was in the fact that the Godel numbers of programs written in this
language are easily calculable. Conway has written out on a single sheet of
paper the Godel number of the program that simulates any given program from its
Godel number. The G-No. of the prime number program is relatively short.
I will intervene with J.C. to obtain more info, if requested.
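[The run-a-fraction-list loop described above is compact enough to sketch.
The fraction list below is Conway's published 14-fraction prime-generating
program; the surrounding code and names are this example's own, not
Conway's notation. -- Ed.]

```python
from fractions import Fraction

# Conway's 14-fraction prime-generating program: a "script" is just an
# ordered list of fractions, and the "string" is a single integer.
PRIMEGAME = [Fraction(n, d) for n, d in
             [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
              (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 14),
              (15, 2), (55, 1)]]

def run(program, n, max_steps):
    """At each step, apply the first fraction whose product with the
    current integer 'string' is again an integer; halt if none applies."""
    for _ in range(max_steps):
        yield n
        for f in program:
            m = n * f
            if m.denominator == 1:
                n = m.numerator
                break
        else:
            return  # no rule fired: the program halts

# Powers of two (above 2) appearing in the run have prime exponents,
# in increasing order: 2**2, 2**3, 2**5, 2**7, ...
primes = []
for state in run(PRIMEGAME, 2, 10000):
    if state > 2 and state & (state - 1) == 0:
        primes.append(state.bit_length() - 1)
        if len(primes) == 4:
            break
print(primes)  # → [2, 3, 5, 7]
```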
U.No.Hoo advises generic statement here.
------------------------------
Date: 23 Apr 86 18:08:55 GMT
From: ucdavis!lll-lcc!lll-crg!caip!seismo!mcvax!ukc!icdoc!sg@ucbvax.
berkeley.edu (Steve Gregory)
Subject: PARLOG for Unix
SEQUENTIAL PARLOG MACHINE
We are now distributing the first release of our sequential PARLOG
system, to run on Unix machines. This system is based on an abstract
instruction set -- the SPM (Sequential PARLOG Machine) -- designed for
the sequential implementation of PARLOG. The system comprises an SPM
emulator, written in C; a PARLOG-SPM compiler, written in PARLOG; and a
query interpreter also written in PARLOG. An environment allows users to
create, compile, edit and run programs.
The system is a fairly complete implementation of the PARLOG language.
Unlike previous implementations of PARLOG, and of other parallel logic
programming languages, there is no "flat" requirement for guards; guards
may contain any "safe" PARLOG conjunction. A powerful metacall facility is
provided.
The SPM instruction set was designed by Steve Gregory. The system has
been implemented by Alastair Burt, Ian Foster, Graem Ringwood and Ken
Satoh, with contributions by Tony Kusalik. The work has been supported
by the SERC, ICL and Fujitsu.
The SPM system is currently available, in object form, for the Sun
and Vax under Unix 4.2; it is distributed on a tar format tape, which
includes all documentation. Anyone interested in obtaining a copy should
first contact me at the following address, to request a copy of the licence
agreement. The software will then be shipped on receipt of the completed
licence and prepayment of the handling fee.
Steve Gregory Telephone: +44 1 589 5111
Dept. of Computing Telex: 261503 IMPCOL G
Imperial College JANET: sg@uk.ac.ic.doc
London SW7 2BZ ARPANET: sg%icdoc@ucl-cs.arpa
England uucp: ...!mcvax!ukc!icdoc!sg
------------------------------
Date: Thu, 24 Apr 86 12:20:50 gmt
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Re: Lucas on AI & Computer Consciousness
Tom Schutz says in Vol 4 # 80 :-
> But I hope that these researchers and their fans do not delude themselves
> into thinking that the only aspect of the universe which exists is the
> aspect that science can deal with.
One aspect of human behavior is "politics". Can there really ever be
political *science*? How would you *model* many minds acting as one?
Tom also says :-
> 2) There is a dualism of the mental and the physical with
> mysterious interactions between the two realms, and
Mysterious indeed! Consider "I *feel* ill", and the interactions
between mind and body, such as "butterflies in the stomach".
He adds :-
> 3) Other possibilities which no one has thought of yet.
of which there are an infinity? Is there a *real* example of the
result that "sigma 2**(-n)" is 2? We bootstrap our consciousness
from the cradle, 0, to awareness, 1. Do we "multiply by infinity"
to get there?
Gordon Joly
ARPA: gcj%uk.ac.qmc.maths%uk.ac.qmc.cs@ucl-cs.arpa
UUCP: ...!seismo!mcvax!ukc!qmc-cs!qmc-ori!gcj
"I have a pain in the diodes, all the way down my left side."
-- Marvin the Paranoid Android.
------------------------------
Date: Thu 24 Apr 86 01:08:46-PST
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: Machine emotion, cat emotion?
My perspective on the possibilities of machines having emotion
stems from my experience with animals and other life. I am concerned
about whether animals suffer from slaughtering them, because eating meat
is morally unacceptable if they do. But there is no a priori reason to
exclude plants from the question, too. How does one know if another is
suffering? I think we "know" in the case of people because they act in
ways that we would only act if WE were suffering. When I've
accidentally stepped on a cat's foot, it has made a noise that sounds
horrible to me, and run out of the way. This is close enough to what I
would do, were I in pain, so that I feel "oh no, I hurt the cat." But
sometimes I have accidentally stepped on a dog's foot and it didn't make
such sounds, and I don't know what it felt. I can't imagine that it
didn't hurt (based on what I would feel if that had been my foot) but
who knows?
Now, when we get to plants, there is nothing they could do that
would resemble what I would do were I in pain. Cartoons with
anthropomorphised plants show them doing such things as wilting or
erecting in response to events. Here there is a fortuitous parallel
between human body language and the health of the plant's water balance.
But in general?
My point is that the question of emotion separates into two
issues, the question of one's own emotions, and the question of others'.
I am claiming that operationally, the question of emotions in other
people, animals, plants, and machines is the same in this latter category.
Consider the dynamics of how people perceive each other's
emotions from an evolutionary standpoint. The display of emotion to
others, and the recognition of emotion in others, plays a central role in
human relations, which strongly impact human Darwinian fitness. Now,
machines can be designed to mimic human expression of emotions, through
icons and the use of emotion expressing language or sounds. So the
question regarding machine emotions I would emphasize is, what sort of
emotional relationships do we WANT between the human user and the machine?
I would guess that there is some stuff of practical relevance in this
question, to the extent that a computer user's performance is
affected by his or her emotional reactions to what happens during
their sessions. Suppose after five consecutive run-time errors, the
machine posted the message,
"I'm sorry to say, but we've hit a run-time error AGAIN! Keep
working on trying to figure out the problem, though. There's got to be
a solution!"
Well, it's a bit contrived, but you get my point. It could be
an area to develop.
-Lee Altenberg
------------------------------
Date: 25 Apr 86 09:47:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Some replies on computer consciousness
> ...consciousness is an emergent phenomenon. The more
> complex the nervous system of an organism, the more likely one is to
> ascribe consciousness to it. Computers, at present, are too simple,
> regardless of performance. I would have no problem believing a
> massively parallel system with size and connectivity of biological
> proportions to be conscious, provided it did interesting things.
1. Note that we've gone from a purely external criterion to a combined
one which asks about both performance and internal structure.
I quite agree that both these are relevant.
2. The assumption is that it's the connectivity per se (ie structure),
that consciousness emerges from. This may be true, but it's not
a given. Eg suppose we had a "massively parallel system with
size [I assume "logical size" is meant here] and connectivity
of biological proportions" which was implemented in wooden parts
of normal macroscopic physical size, with a switching speed of about
1 second. It's not just obvious to me that such a (large, slow,
wooden) thing, though structurally identical to a brain, would
generate consciousness (nor that it wouldn't).
> From: Mark Ellison
>
> Mechanism M [brain] causes C [consciousness] ? You know many
> people who (may) have brains, and you have no DIRECT evidence
> that they are conscious.
Right, but I have strong circumstantial evidence - eg they have a
brain, (like me) and they can do long division (like me).
> You only have direct evidence of one
> case of C (barring ESP, etc.), and no DIRECT evidence of that
> person's brain. Except for the performances in each case.
Huh? Surely I have other grounds for believing that I, and other
people, have brains besides their performance. Like analogy,
biology, etc.
> We only know of their ability to feel pain, experience shapes, colors,
> sounds, etc., by their reactions to those stimuli. In other words,
> by their performance. But on the other hand their performance might
> not involve abstract statements. ....I would argue that "raw
> feelings" in others are known only by their performance.
Well, I think this simply isn't so - do you mean to claim that
the fact that they have brains in no way supports the hypothesis
of their ability to feel pain, etc??? Especially given the
neurological evidence we have that brain activity seems to
directly cause experiences (like the neurosurgeon pokes your
cortex and you say "I see a red flash")? It seems just obvious
to me that we rationally attribute consciousness to others
because of both criteria, ie performance and brains.
> One criterion that I have not seen yet proposed is the following.
> It is more useful to pretend that people are conscious than not.
> They tend to cause you less pain, and are more likely to do what you want.
> So I'll believe someone's 8600 or Cray is conscious if it works better,
> according to whatever criteria I have for that at the moment, when I so
> believe.
Well, I was speaking of Truth, not pragmatics. It may be that I
play a better game of chess against a metallic opponent if I
attribute to it motives of greed, revenge, etc. That hardly
seems to settle the question of whether it really has these
features.
BTW, I think most of these claims about computer consciousness
are mis-spoken - I think what people mean to say (or should) is
that the Wonderful Futuristic computer would be really
*intelligent.* Since the concept of intelligence is essentially
one of performance, I agree with such claims. A computer that
could hold a general, intelligent, English conversation is, ipso
facto, intelligent. It does *not* follow, either conceptually
or practically, that such a machine would be conscious (nor that
it wouldn't, of course), in the normal "seeing-yellow,
feeling-pain" sense of the word, although everyone seems just to
assume this. To put it another way, just because intelligent
behavior in a human is decisive evidence for consciousness (where
we have the underlying fact of brain-hood), it does not follow
that it is decisive evidence in the case of a computer.
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: Fri, 25 Apr 86 10:51:56 mst
From: crs%f@LANL.ARPA (Charlie Sorsby)
Subject: Re: performance considered insufficient
References: <VAX-MM(186)+TOPSLIB(117)+PONY(0).18-Apr-86.09:53:30.SRI-IU.ARPA>
> Are viruses conscious? How about protozoa, mollusks, insects, fish,
> reptiles, and birds? Certainly some mammals are conscious. How about
> cats, dogs, and chimpanzees? Does anyone maintain that homo sapiens
> is the only species with consciousness?
>
> My point is that consciousness is an emergent phenomenon. The more
> complex the nervous system of an organism, the more likely one is to
> ascribe consciousness to it. Computers, at present, are too simple,
> regardless of performance. I would have no problem believing a
> massively parallel system with size and connectivity of biological
> proportions to be conscious, provided it did interesting things.
I've been following, with interest, the debate about the possibility of
machine consciousness. I have a question:
Do you consider (each of you) consciousness a binary phenomenon? Does one
(or something) either have, or not have, consciousness?
Or, is there a continuum of consciousness, with some entities in possession
of just a *little* consciousness while others have more?
I suspect, based on what I have read here, that there is no consensus
opinion, that some believe it is binary while others subscribe to the
continuum idea (with, perhaps, others believing some intermediate theory).
Is there a prevailing view among AI researchers?
Use your own judgment as to whether to post or mail your reply. If I
receive many mail replies, I'll try to summarize and post.
Charlie Sorsby
...{cmcl2, ihnp4, ..}!lanl!crs
crs@lanl.arpa
------------------------------
End of AIList Digest
********************
∂01-May-86 0320 LAWS@SRI-AI.ARPA AIList Digest V4 #107
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 May 86 03:20:19 PDT
Date: Wed 30 Apr 1986 21:42-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #107
To: AIList@SRI-AI
AIList Digest Thursday, 1 May 1986 Volume 4 : Issue 107
Today's Topics:
Seminars - Multivalued Logics (UPenn) &
Mechanisms of Analogy (UCB) &
Recursive Self-Control for Rational Action (SU) &
Reasoning about Multiple Faults (SU) &
Knowledge in Shape Representation (MIT) &
GRAPHOIDS: A Logical Basis for Dependency Nets (SU) &
Decentralized Naming in Distributed Computer Systems (SU) &
Learning in Time (Northeastern) &
Characterization and Structure of Events (SRI)
----------------------------------------------------------------------
Date: Mon, 28 Apr 86 14:29 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Multivalued Logics (UPenn)
Colloquium - University of Pennsylvania
3:00 pm 4/29 - 216 Moore School
MULTI-VALUED LOGICS
Matt Ginsberg - Stanford University
A great deal of recent theoretical work in inference has involved extending
classical logic in some way. I argue that these extensions share two
properties: firstly, the formal addition of truth values encoding intermediate
levels of validity between true (i.e., valid) and false (i.e., invalid) and,
secondly, the addition of truth values encoding intermediate levels of
certainty between true or false on the one hand (complete information) and
unknown (no information) on the other. Each of these properties can be
described by associating lattice structures to the collection of truth values
involved; this observation leads us to describe a general framework of which
both default logics and truth maintenance systems are special cases.
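[One concrete instance of the two-lattice idea is Belnap's four-valued
logic, in which each value records evidence for and evidence against a
proposition. The sketch below is an editorial illustration of the two
orderings, not Ginsberg's framework itself; all names are this
example's own. -- Ed.]

```python
# Each truth value is a pair of flags: (evidence for, evidence against).
NONE, TRUE, FALSE, BOTH = (0, 0), (1, 0), (0, 1), (1, 1)

def join_k(a, b):
    """Join in the knowledge ordering: pool the evidence from both sides.
    NONE (no information) is the bottom; BOTH (contradiction) is the top."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def join_t(a, b):
    """Join in the truth ordering (a disjunction): keep any evidence for,
    keep evidence against only if both sides have it."""
    return (max(a[0], b[0]), min(a[1], b[1]))

print(join_k(TRUE, FALSE) == BOTH)   # → True  (contradictory evidence)
print(join_t(TRUE, FALSE) == TRUE)   # → True  (true OR false is true)
print(join_k(NONE, FALSE) == FALSE)  # → True  (unknown plus a denial)
```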
------------------------------
Date: Tue, 29 Apr 86 08:46:36 PDT
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Subject: Seminar - Mechanisms of Analogy (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, April 29, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Mechanisms of Analogy''
Dedre Gentner
Psychology, University of Illinois at Champaign-Urbana
Analogy is a key process in learning and reasoning. This
research decomposes analogy into separable subprocesses and
charts dependencies. Evidence is presented that (1) once an
analogy is given, people map predicates and judge soundness
chiefly on the basis of common relational structure, as
predicted by the structure-mapping theory; (2) in contrast,
access to a potential analogue depends heavily on common surface
features.
Accessibility and inferential power appear to be governed
by different kinds of similarity. This finer-grained analysis
of similarity helps resolve conflicting evidence concerning the
role of similarity in transfer.
------------------------------
Date: Mon 28 Apr 86 14:31:52-PDT
From: Anne Richardson <RICHARDSON@SU-SCORE.ARPA>
Subject: Seminar - Recursive Self-Control for Rational Action (SU)
DAY: May 5
EVENT: AI Seminar
PLACE: Jordan 050
TIME: 4:15
TITLE: Recursive Self-Control:
A Computational Groundwork for Rational Action
PERSON: John Batali
FROM: MIT AI Lab
Human activity must be understood in terms of agents interacting with
the world, those interactions subject to the details of the situation
and the limited abilities of the agents. Rationality involves an
agent's deliberating about and choosing actions to perform. I suggest
that deliberation and choice are themselves best viewed as activities of
the agent. This leads to a view of rationality based on "recursive
self-control" wherein the agent controls the activity of its body in
much the same way as a programmer controls a computational mechanism.
To prove that this view is really recursive, rather than just
meaninglessly circular, I describe a computer program whose architecture
illustrates how recursive self-control could work.
------------------------------
Date: Tue 29 Apr 86 13:12:59-PDT
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Reasoning about Multiple Faults (SU)
CS529 - AI In Design & Manufacturing
Stanford University
Instructor: Dr. J. M. Tenenbaum
Title: Reasoning About Multiple Faults
Speaker: Johan de Kleer
From: XEROX Palo Alto Research Center
Date: Wednesday, April 30, 1986
Time: 4:00 - 5:30
Place: Terman 556
Diagnostic tasks require determining the differences between a model
of an artifact and the artifact itself. The differences between the
manifested behavior of the artifact and the predicted behavior of the
model guide the search for the differences between the artifact and
its model. The diagnostic procedure presented in this paper reasons
from first principles, inferring the behavior of the composite device
from knowledge of the structure and function of the individual
components comprising the device. The system has been implemented
and tested on examples in the domain of troubleshooting digital
circuits.
This research makes several novel contributions: First, the system
diagnoses failures due to multiple faults. Second, failure candidates
are represented and manipulated in terms of minimal sets of violated
assumptions, resulting in an efficient diagnostic procedure. Third,
the diagnostic procedure is incremental, reflecting the interactive
nature of diagnosis. Finally, a clear separation is drawn between
diagnosis and behavior prediction, resulting in a domain (and
inference) independent diagnostic procedure which can be incorporated
into a wide range of inference procedures.
Visitors welcome!
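[The "minimal sets of violated assumptions" idea in the abstract can be
illustrated with a toy candidate generator: a diagnosis is a minimal set
of components that intersects every conflict set. The data and code
below are an editorial sketch in the spirit of the talk, not de Kleer's
implementation. -- Ed.]

```python
from itertools import combinations

# Conflict sets: sets of component assumptions that cannot all hold
# given the observations (illustrative data only).
conflicts = [{'A1', 'A2'}, {'A2', 'A3'}]

def diagnoses(conflicts, components):
    """Return the minimal candidate sets: smallest sets of components
    whose failure explains every conflict (each conflict must contain
    at least one failed component)."""
    found = []
    for r in range(len(components) + 1):
        for cand in combinations(sorted(components), r):
            s = set(cand)
            # Keep cand if it hits every conflict and no smaller
            # already-found candidate is contained in it.
            if (all(s & c for c in conflicts)
                    and not any(set(f) <= s for f in found)):
                found.append(cand)
    return found

comps = {'A1', 'A2', 'A3'}
print(diagnoses(conflicts, comps))  # → [('A2',), ('A1', 'A3')]
```

Note that a single fault (A2 alone) and a double fault (A1 and A3
together) both survive as candidates, which is the point of dropping the
single-fault assumption.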
------------------------------
Date: Tue, 29 Apr 86 19:13 EDT
From: Eric Saund <SAUND@OZ.AI.MIT.EDU>
Subject: Seminar - Knowledge in Shape Representation (MIT)
Thursday, 1 May 4:00pm Room: NE43-8th floor playroom
--- AI Revolving Seminar ---
KNOWLEDGE, ABSTRACTION, AND CONSTRAINT
IN SHAPE REPRESENTATION
Eric Saund
MIT AI Lab
What can make a profile look like an apple?
Early Vision teaches that you must know something if you are going to
see. The physics of light, the geometry of the eyes, the smoothness
of surfaces, all impose >constraint< on images. It is only through
the application of >knowledge< about this structure in the visual
world that early vision may invert the imaging process and recover
surface orientations, light sources, and reflectance properties. We
should take this lesson seriously in attempting intermediate and later
vision such as shape understanding.
Key to using knowledge in vision is building representations to
reflect the structure of the visual world. The mathematically
expressed laws of early vision do not help for later vision. How is
one to express the constraint on a profile that might qualify it as an
apple? In this talk I will discuss steps toward construction of a
vocabulary for shape representation rich enough to express the complex
and subtle relationships between locations and sizes of boundaries and
regions that give rise to object parts and shape categories. I will
describe three computational tools, "scale-space", "dimensionality-
reduction", and "functional role abstraction", for building symbolic
descriptors to capture constraint in shape information. Examples of
their use will be shown in a one-dimensional model shape domain.
------------------------------
Date: Tue 29 Apr 86 15:51:12-PDT
From: Benjamin N. Grosof <GROSOF@SU-SCORE.ARPA>
Subject: Seminar - GRAPHOIDS: A Logical Basis for Dependency Nets (SU)
JUDEA PEARL of the UCLA Computer Science Department will be speaking on
probabilistic reasoning
FRIDAY MAY 2 2:15pm JORDAN 040
GRAPHOIDS: A Logical Basis
for Dependency Nets
or
When would x tell you more about y
if you already know z
ABSTRACT:
We consider statements of the type:
I(x,z,y) = "Knowing z renders x independent of y",
where x and y and z are three sets of propositions.
We give sufficient conditions on I for the existence
of a (minimal) graph G such that I(x,z,y) can be validated
by testing whether z separates x from y in G. These
conditions define a GRAPHOID.
The theory of graphoids uncovers the axiomatic basis of
probabilistic dependencies and extends it as a formal
definition of informational dependencies. Given an
initial set of dependency relations, the axioms
established permit us to infer new dependencies by
non-numeric, logical manipulations, thus identifying
which propositions are relevant to each other in a
given state of knowledge. Additionally,
the axioms may be used to test the legitimacy of
using networks to represent various types of
data dependency, not necessarily probabilistic.
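[The validation test in the abstract -- checking whether z separates x from y in a graph G -- amounts to a reachability check once the vertices of z are removed. A minimal sketch; the adjacency-dict representation and function name are assumptions, not from the talk.]

```python
from collections import deque

def separates(graph, x, z, y):
    # graph: adjacency dict over vertices.  Returns True iff every
    # path from a vertex in x to a vertex in y passes through z,
    # i.e. removing z's vertices disconnects x from y.
    blocked = set(z)
    frontier = deque(v for v in x if v not in blocked)
    seen = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in y:
            return False          # found an x-y path avoiding z
        for w in graph.get(v, ()):
            if w not in blocked and w not in seen:
                seen.add(w)
                frontier.append(w)
    return True
```

On the chain a--b--c, knowing b renders a independent of c, and the test agrees: `separates(g, {'a'}, {'b'}, {'c'})` holds, while with z empty it does not.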
------------------------------
Date: 29 Apr 1986 1915-PDT (Tuesday)
From: Tim Mann <mann@su-pescadero.ARPA>
Subject: Seminar - Decentralized Naming in Distributed Computer Systems (SU)
This is to announce my PhD oral, scheduled for Tuesday May 6, 2:15 pm,
building 160, room 163B.
Decentralized Naming in Distributed Computer Systems
Timothy P. Mann
A key component in distributed computer systems is the naming facility:
the means by which global, user-assignable names are bound to objects,
and by which objects are located given only their names. This work
proposes a new approach to the construction of such a naming facility,
called \decentralized naming/. In systems that follow this approach,
the global name space and name mapping mechanism are implemented by the
managers of named objects, cooperating as peers with no central
authority. I develop the decentralized naming model in detail and
characterize its fault tolerance, efficiency, and security. I also
describe the design, implementation, and measured performance of a
decentralized naming facility that I have constructed as a part of the
V distributed operating system.
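[As a toy illustration of the decentralized model -- object managers jointly implementing the name space with no central authority -- one can picture resolution as a query put to each peer, answered only by the manager of the named object. This sketch is invented for illustration; the actual V facility is of course far more elaborate.]

```python
class ObjectManager:
    # Each manager implements part of the global name space;
    # there is no central name server.
    def __init__(self, names):
        self.names = dict(names)   # name -> object

    def lookup(self, name):
        return self.names.get(name)

def resolve(managers, name):
    # Decentralized lookup: ask every peer manager; only the
    # manager that implements the name replies with the object.
    for m in managers:
        obj = m.lookup(name)
        if obj is not None:
            return obj
    return None
```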
------------------------------
Date: Wed, 30 Apr 86 16:30 EST
From: SIG%northeastern.edu@CSNET-RELAY.ARPA
Subject: Seminar - Learning in Time (Northeastern)
Learning in Time
Richard Sutton (rich@gte-labs.csnet)
GTE Labs, Waltham, Ma.
Most machine learning procedures apply to learning problems in which
time does not play a role. Typically, training consists of the
presentation of a sequence of pairs of the form (Pattern, Category),
where each pattern is supposed to be mapped to the associated category,
and the ordering of the pairs is incidental. In real human learning, of
course, the situation is very different: successive input patterns are
causally related to each other, and only gradually do we become sure of
how to categorize past patterns. In recognizing spoken words, for
example, we may be sure of the word halfway through it, after it has
all been heard, or not until several words later; our confidence in the
correct classification changes and grows over time. In
learning to make such classifications, is it sufficient to just
correlate pattern and category, and ignore the role of time? In this
talk, I claim that the answer is NO. A new kind of learning is
introduced, called Bootstrap Learning, which can take advantage of
temporal structure in learning problems. Examples and results are
presented showing that bootstrap learning methods require significantly
less memory and communication, and yet make better use of their
experience than conventional learning procedures. Surprisingly, this
seems to be a case where consideration of an additional complication --
the temporal nature of most real-world problems -- results in BOTH
better performance AND better implementations. These advantages appear
to make bootstrap learning the method of choice for a wide range of
learning problems, from predicting the weather to learning evaluation
functions for heuristic search, from understanding classical
conditioning to constructing internal models of the world, and, yes,
even to routing telephone calls.
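[A guess at the flavor of the method described: instead of waiting for the final classification, each prediction is adjusted toward the prediction made at the next time step, so credit flows backward through time without storing the whole sequence. A minimal sketch; the names and the exact update rule are my own, not necessarily Sutton's.]

```python
def td_update(values, states, outcome, alpha=0.1):
    # Move each state's prediction toward the prediction of the
    # *next* state, and the final state's toward the actual outcome.
    for s, s_next in zip(states, states[1:]):
        values[s] += alpha * (values[s_next] - values[s])
    last = states[-1]
    values[last] += alpha * (outcome - values[last])
    return values
```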
Wednesday, May 14, 12:00 noon
161 Cullinane Hall
Sponsored by:
College of Computer Science
Northeastern University
360 Huntington Ave.
Boston, Ma
Host: Steve Gallant
(sig@northeastern.csnet)
------------------------------
Date: Wed 30 Apr 86 17:08:07-PDT
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Characterization and Structure of Events (SRI)
THE CHARACTERIZATION AND STRUCTURE OF EVENTS
Douglas D. Edwards (EDWARDS@SRI-AI)
SRI International, Artificial Intelligence Center
11:00 AM, MONDAY, May 5
SRI International, Building E, Room EJ228 (new conference room)
Events were raised to prominence as a basic ontological category in
philosophy by Davidson, who used quantified variables ranging over
events in the logical analysis of assertions about causality and
action, and of sentences with adverbial modifiers. Drew McDermott
used the category of events in AI planning research to model changes
more complex than state transformations.
Despite the common use of events as an ontological category in
philosophy, linguistics, planning research, and ordinary language,
there is no standard characterization of events. Sometimes, as in
Davidson, they are taken to be concrete individuals. Other authors
think of them as types or abstract entities akin to facts,
propositions, or conditions; as such they are often subjected to
truth-functional logical operations, which Davidson considers to be
inapplicable. McDermott, following Montague in broad outline, thinks
of them as classes of time intervals selected from various possible
histories of the world. Other authors emphasize individuation of
events not just by time but also by spatial location, by the objects
or persons participating, or (Davidson) by their location in a web of
causes and effects.
In this talk I sketch a scheme for characterizing types of events
which illuminates the relationship between type and token events, the
internal structure and criteria of individuation of events, and the
relationship of events to other categories of entities such as
objects, facts, and propositions. Events turn out to be structured
entities like complex objects, not simple temporal or spatiotemporal
regions or classes of such.
------------------------------
End of AIList Digest
********************
∂01-May-86 0513 LAWS@SRI-AI.ARPA AIList Digest V4 #108
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 May 86 05:13:26 PDT
Date: Wed 30 Apr 1986 22:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #108
To: AIList@SRI-AI
AIList Digest Thursday, 1 May 1986 Volume 4 : Issue 108
Today's Topics:
Queries - Connection Machine Articles,
Project Description - Personality Modeling,
Techniques - String Reduction,
Expert Systems - SeRIES-PC for MS-DOS,
Representation - Shape Simulation and Recognition,
Linguistics - Xerox vs. xerox
----------------------------------------------------------------------
Date: 29 Apr 86 01:24:43 GMT
From: hplabs!sdcrdcf!usc-oberon!bacall!iketani@ucbvax.berkeley.edu
(Dana Todd Iketani)
Subject: connection machine articles
I am looking for some information about the Connection
Machine from MIT/Thinking Machines Inc. Could someone
send me some pointers for some real articles? Or an
article with a good bibliography? I've found plenty
of popular literature articles, but the only technical
paper I've found is the MIT memo by Hillis. Thanks
in advance.
d. todd Iketani
USENET: usc-cse!iketani
ARPANET: IKETANI@USC-ECL
------------------------------
Date: 29 APR 86 1404 UT
From: MCLOUH85%IRLEARN.BITNET@WISCVM.WISC.EDU
Subject: RESEARCH IN PROGRESS AT UCD IRELAND
I am doing research in AI at University College Dublin in Ireland.
In the research project we are working on we have set ourselves the
goal of developing a system which will play the part of a telephone
receptionist in "typed" telephone conversations. In our research we
have had to address ourselves to a number of problems in the areas
of planning and user modelling.
In this note I would like to say something about our work on user
modelling.
It became clear to us from the live data which we collected
as part of the research that the receptionist seemed to have a
very rich model of the people she spoke to. These models were
much richer than any we have read about in the AI literature.
We discussed a number of the transcripts of conversations with
the receptionist and she confirmed that she was using quite detailed
models of the people she spoke to. In the vast majority of the cases
she had never met these people or spoken to them before the conversations.
It appears that quite early on in the conversation she would classify
the caller as being of a particular "type" and thereafter she would
apply any knowledge she had about that "type" of person to build a
basic model of the caller.
We have been working on applying this knowledge about stereotypes to
the system we are building and we have developed a knowledge structure
which we call a Persona. A Persona can contain knowledge about
the goals, plans, obligations, beliefs, and props which we associate with
a typical member of a class of people. It can also contain knowledge
about the types of situations in which we would normally find such people,
and the props which we might associate with them.
So far we have been working on developing Personae which model
common occupations, such as the Salesman, the Receptionist, and the
Telephone Operator.
However, it is our intention in the future to try and develop Personae
which model particular attributes such as Friendly, Aggressive etc and to
see if we can develop ways of combining these to produce personality
models.
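[A Persona as described might be sketched as a slotted record whose slots can be merged when stereotypes are combined. The slot names follow the posting; the record layout and the union-based merge rule are assumptions of mine, not the UCD design.]

```python
class Persona:
    # Hypothetical stereotype record: slots hold sets of items
    # (goals, plans, beliefs, props, situations).
    def __init__(self, name, **slots):
        self.name = name
        self.slots = {k: set(v) for k, v in slots.items()}

    def combine(self, other):
        # Merge two stereotypes (e.g. Salesman + Aggressive) by
        # unioning their slots.
        merged = {}
        for key in set(self.slots) | set(other.slots):
            merged[key] = (self.slots.get(key, set())
                           | other.slots.get(key, set()))
        p = Persona(self.name + "+" + other.name)
        p.slots = merged
        return p
```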
Well, having given an overview of the work we are doing, I would be
grateful to hear from any readers who share our interest in personality
modelling and stereotypes, whether with comments or with
recommendations of particular papers in the area which might be of
interest to us.
Henry B McLoughlin.
Department of Computer Science
University College Dublin
MCLOUH85@IRLEARN
------------------------------
Date: 24 Apr 86 13:26:37 GMT
From: uwvax!harvard!cmcl2!philabs!linus!security!jkm@ucbvax.berkeley.edu
(Jonathan K.Millen)
Subject: Re: String reduction
If you are interested in an applicative Lisp-like language
based on string substitution and reduction, you might want to look
at "TRAC, A Procedure-Describing Language for the Reactive Typewriter",
by Calvin N. Mooers, Comm ACM, Vol. 9, No. 3, March, 1966.
Jon Millen
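[TRAC itself is a macro-expansion language; as a generic sketch of string reduction in the same spirit, one can repeatedly rewrite the leftmost match from a rule set until a normal form is reached. The rule format here is invented, not Mooers' syntax.]

```python
def reduce_string(s, rules, limit=1000):
    # Rewrite the leftmost occurrence of any rule's left-hand side,
    # restarting from the first rule, until no rule applies
    # (normal form) or the step limit is hit.
    for _ in range(limit):
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i >= 0:
                s = s[:i] + rhs + s[i + len(lhs):]
                break
        else:
            return s
    raise RuntimeError("no normal form within step limit")
```

For example, unary addition can be expressed as the two rules below: "+1" bubbles the plus sign rightward, and a trailing "+" is erased, reducing "11+111" to "11111".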
------------------------------
Date: Tue 29 Apr 86 09:41:32-PDT
From: Lou Fried <FRIED@SRI-KL.ARPA>
Subject: Expert Systems for MS-DOS
Please include:
SeRIES-PC, language IQLISP, $5,000
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
Contact: Bob Wohlsen, x 4408
------------------------------
Date: Tue, 29 Apr 86 15:47 EDT
From: Seth Steinberg <sas@BBN-VAX.ARPA>
Subject: Shape -- Simulation and Recognition
This hooks, rings, and shapes discussion points out that AI contains a
lot of simulation. One useful model of AI programming is to view a
program as an intertwining of simulation and recognition, especially if
you are willing to think of these concepts a bit more generally than
ordinarily.
- A game playing program will play the game forwards (simulation) and
then choose a course of play to follow (recognition).
- A logic programming system will follow the ramifications of an
assertion by forward chaining (simulation) and then seek a particular
fact in the rule base (recognition).
- A robotics program will examine its goals and its model of the world
(recognition) and then test if a particular motion is useful or
possible (simulation). (Who was it who wrote in saying that all he
needed was an AT function and everything else would be easy?)
[That was a quote from Peter Cheeseman's early days. -- KIL]
This is not a strict breakdown, but rather a useful insight which
explains why certain problems are solved in certain ways. Thinking of
part of the program as the simulator and part as the recognizer can
reveal two of the conflicting forces in the resolution of the problem.
No matter how they are described, there are two fundamental dynamic
elements which force the tradeoffs required to engineer a working
program.
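[The game-playing case above can be made concrete with a minimax-style sketch: the simulator plays moves forward, and the recognizer picks the move whose simulated future scores best. The function signatures are assumptions for illustration.]

```python
def best_move(state, moves, result, score, depth):
    # Simulation: play each legal move forward to see what follows.
    # Recognition: pick the move whose simulated future looks best.
    def value(s, d, maximizing):
        ms = moves(s)
        if d == 0 or not ms:
            return score(s)
        vals = (value(result(s, m), d - 1, not maximizing) for m in ms)
        return max(vals) if maximizing else min(vals)
    return max(moves(state),
               key=lambda m: value(result(state, m), depth - 1, False))
```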
- Anoop Gupta, who is working on implementing OPS5-like rule resolution
systems on parallel machines, noted that programs break into Problem
Space Search, which grows exponentially (simulation) and is where the
parallelism comes from, and Knowledge Space Search, which applies the
program's knowledge of the domain to restrict the growth rate of
parallelism (recognition).
- A number of other researchers have broken problem solving into the
marshalling of alternatives (simulation) followed by the focussing of
attention on the most promising (recognition). Phil Agre argues that
one purpose of consciousness is to control the direction of attention
to the alternatives.
My personal feeling is that AI isn't going to tackle everyday
knowledge until it starts simulating everyday things. Steamer, the
large steam engine expert, contained a gigantic Fortran program
(rewritten in Lisp) to simulate the engine. I don't know of any
programs that simulate a kitchen. AI has already borrowed a lot of
object oriented programming from Simula which is a simulation language.
Maybe AI programmers, being forced to deal with the problems of
simulation, will find other as-yet-neglected tools.
Seth Steinberg
------------------------------
Date: Tue, 29 Apr 86 11:12:48 cdt
From: bulko@SALLY.UTEXAS.EDU (Bill Bulko)
Reply-to: bulko@sally.UUCP (Bill Bulko)
Subject: Re: do not "xerox" this message
I don't see what the big deal is about using "xerox" as a verb. I often
hear "Coke" used to mean "[virtually any] carbonated soft drink". How
often do you order a Coke at a fast-food place and get Pepsi or RC?
If anything, the use of "xerox" as a verb is a tribute to the
contribution Xerox has made to photocopying.
Bill
_______________________________________________________________________________
"In the knowledge lies the power." -- Edward A. Feigenbaum
"Knowledge is good." -- Emil Faber
Bill Bulko The University of Texas
bulko@sally.UTEXAS.EDU Department of Computer Sciences
------------------------------
Date: Tue 29 Apr 86 11:40:41-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Trademarks
Laws concerning trademark usage aside, De Smedt is perfectly correct
in pointing out that the verb 'to xerox', meaning to copy on a
dry-xerographic copier, and associated constructions ( a xerox copy,
etc. ), are now in fact part of the language. The distinction between
'xerox' and 'Xerox' seems quite clear, and it might be more sensible
for the company to insist on 'correct' usage of the latter rather than
the former. It's no use, guys, you can't stop people using the word in
the way they want to. A dictionary which omitted 'to xerox' would not
be accurate. It's an inevitable consequence of the fact that for many
years, all the copiers WERE Xerox machines, just like Bics and Kodaks.
One can't have no competitors while a new technology is entering the
marketplace, and expect not to be identified with it. Especially if
one has also invented a neat, original, snappy name for it ( like Bic
and Kodak ). It's the price of success.
Pat Hayes
Schlumberger ( a word which will never enter the language )
------------------------------
End of AIList Digest
********************
∂02-May-86 0214 LAWS@SRI-AI.ARPA AIList Digest V4 #109
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 May 86 02:13:47 PDT
Date: Thu 1 May 1986 22:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #109
To: AIList@SRI-AI
AIList Digest Friday, 2 May 1986 Volume 4 : Issue 109
Today's Topics:
Bibliography - References #4
----------------------------------------------------------------------
Date: 9 Apr 86 13:21:26 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Bibliography - References #4
IDA83a *
Ida T. & Sato M. & Hayashi S. & Hagiya M. & Kurokawa T. & Hikita T. &
Futatsugi K. & Sakai K. & Toyama Y. & Matsuda T.
Higher Order: Its Implications to Programming Languages and Computational
Models
ICOT Research Center, Technical Memorandum TM-0029
October 1983
IDA84a *
Ida T. & Konagaya A.
Comparison of Closure Reduction and Combinatory Reduction Schemes.
ICOT Technical Report TR-072
August 1984
INGA78a
Ingalls D.
The Smalltalk-76 Programming System Design and Implementation
Proc. SIGPLAN Conf. on Princ. of Prog. Langs., pp 9-15
1978
INMO84a
INMOS
IMS T424 Transputer Data Card
INMOS , Jan 1984
INMO84b
INMOS
OCCAM Data Card
INMOS , June 1984
INMO84c
INMOS
OCCAM User Group Newsletter No.1
INMOS , Summer 1984
INMO84d
INMOS
IMS T424 Transputer : Preliminary Data
INMOS , August 1984
INMO84e
INMOS ltd
Occam Programming Manual
Prentice Hall International Series in Computer Science
January 1984
ISL81a
Islam N. & Myers T.J. & Broome P.
A Simple Optimiser for FP-like Languages
Proc. ACM Conf. on Functional Programming Languages and Computer
Architecture, New Hampshire, pp 33-40
October 1981
ITO83a *
Ito N. & Masuda K.
Parallel Inference Machine Based on the Data Flow Model
( Also in "Proceedings of Int'l Workshop on High-Level Computer Architecture",
Los Angeles, 1984 )
ICOT Research Center, Technical Report TR-033
December 1983
ITO83b
Ito N. & Masuda K. & Shimizu H.
Parallel Prolog Machine Based on the Data Flow Model
ICOT Research Center, Technical report TR-035
September 1983
ITO83c *
Ito N. & Onai R. & Masuda K. & Shimizu H.
Prolog Machine Based on the Data Flow Mechanism
ICOT Research Center, Technical Memorandum TM-0007
May 1983
IWAT84a
Iwata K. & Kamiya S. & Sakai H. & Matsuda S. & Shibayama S. & Murukami K.
Design and Implementation of a Two-Way Merge Sorter and its Application to
Relational Database Processing
ICOT Research Center, Technical Report TR-066
May 1986
JARO86a *
Jarosz J. & Jawarowski J.R.
Computer Tree - The Power of Parallel Computations
Computer Journal, Vol 29, No 2, pp 103-108
April 1986
JAYA80a
Jayaraman B. & Keller R.M.
Resource Control in a Demand-Driven Data-Flow Model
In Proc. International Conf. on Parallel Processing, IEEE
1980
JAYA82a
Jayaraman B. & Keller R.M.
Resource Expressions For Applicative Languages
International Conf. on Parallel Processing, IEEE
August 1982
JEFF85a *
Jeffrey T.
The "mu"PD7281 Processor
Byte Magazine, Vol 10, no 12,
November 1985
JESS86a *
Jesshope C.
VLSI and Beyond
in BCS86a
1986
JESS86b *
Jesshope C.
The Transputer - Microprocessor or Systems Building Block
in BCS86a
1986
JOHN77a
Johnson S.D.
An Interpretive Model For A Language Based On Suspended Construction
M.S. Thesis, Indiana Univ, Bloomington, In.
1977
JOHN81a
Johnsson T.
Detecting When Call By Value can be Used Instead of Call By Need
LPM Memo 14, Chalmers Inst., Sweden
October 1981
JOHN83a
Johnsson T.
The G-Machine. An Abstract Machine for Graph Reduction
Declarative Programming Workshop , University College London, April 1983
JOHN86a *
Johnsson T.
Attribute Grammars and Functional Programming
86-02-20
JONE82a *
Jones N.D. & Muchnick S.S.
A Fixed-Program Machine for Combinator Expression Evaluation
Proc. of ACM LISP Conf 1982 p11-20
JONE83a *
Jones S.B.
Abstract Machine Support For Purely Functional Operating Systems
Technical Monograph PRG-34, Programming Research Group, Oxford Univ.
August 1983
JONK81a *
Jonkers H.B.M.
Abstract Storage Structures
Mathematisch Centrum iw 158/81
1981
KAHN77a *
Kahn G. & MacQueen D.B.
Coroutines and Networks of Parallel Processes
IFIP 77, ed. Gilchrist B. , pp 993-998
North Holland
1977
KAMI84a *
Kamimura T. & Tang A.
Total Objects of Domains
Theoretical Computer Science, pp 275-288
1984
KARI84a *
Karia R.J.
Compiling a Functional Language into Combinators for a Reduction Machine
Computer Science Research Memo no 111
G.E.C. Hirst Research Centre, Computer Science Research Group
6th March 1984
KARP66a *
Karp R.M. & Miller R.E.
Properties of a Model for Parallel Computations : Determinacy, Termination,
Queueing
SIAM Journal on Applied Mathematics, Vol 14, no 6
pp 1390-1411
November 1966
KATE84a
Katevenis M.G.H. & Sherburne R.W. & Patterson D.A. & Sequin C.H.
The RISC II Micro-Architecture
Journal of VLSI and Computer Systems, 1(2)
1984
KATU83a
Katuta T. & Miyazaki N. & Shibayama S. & Yokota H. & Murukami K.
A Relational Database Machine "Delta" - IPSJ Translation
ICOT Research Center, Technical Memorandum TM-0008
May 1983
KAUB82a
Kaubisch W.H. & Hoare C.A.R.
Discrete Event Simulation Based on Communicating Sequential Processes
in BROY82a, pp 625-642
1982
KELL79a
Keller R.M. & Lindstrom G. & Patil S.
A Loosely Coupled Applicative Multi-Processing System
AFIPS Conference Proceedings
June 1979
KELL80a
Keller R.M. & Lindstrom G. & Patil S.
Data-Flow Concepts for Hardware Design
In IEEE Compcon (VLSI - New Architectural Horizons)
February 1980
KELL80b
Keller R.M. & Lindstrom G.
Hierarchical Analysis of a Distributed Evaluator
1980 Int. Conf. on Parallel Processing, IEEE
August 1980
KELL80c
Keller R.M.
Some Theoretical Aspects of Applicative Multiprocessing
In LNCS Proc. Mathematical Foundations of Computer Science
1980
KELL80d
Keller R.M. & Lindstrom G.
Parallelism in Functional Programming Through Applicative Loops
Document 1980
KELL80e
Keller R.M.
Data Structuring in Applicative Multiprocessing Systems
Proc. 1980 LISP Conf. p196-202
KELL81a
Keller R.M. & Yen W.J.
A Graphical Approach to Software Development Using Function Graphs
Proc. Compcon 1981, IEEE
1981
KELL81b
Keller R.M. & Lindstrom G.
Applications of Feedback in Functional Programming
Symp. on Functional Langs. and Computer Arch., Chalmers Univ.
June 1981
KELL82a *
Keller R.M. & Sleep R.S.
Applicative Caching
Document, Dept. of Computer Science, University of Utah, July 1982
KELL84a
Keller R.M. & Lin F.C.H.
Simulated Performance of a Reduction-Based Multiprocessor
Computer 17(7), July 1984
KELL85a
Keller R.M.
FEL (Function-Equation Language) Programmers Guide
AMPS Technical Memorandum No 7, April 1985
KELL85b
Keller R.M. & Lin F.C.H. & Badovinatz
The Rediflow Simulator
Internal Memorandum, April 1985
KELL85c
Keller R.M. & Lindstrom G.
Approaching Distributed Database Implementations Through Functional
Programming Concepts
Proc. 5th Int. Conf. on Distributed Computing Systems, IEEE, Denver
May 1985
KELL85d
Keller R.M.
Distributed Computation by Graph Reduction
Systems Research
Pergamon Press
March 1985
KENN82a *
Kennaway J.R. & Sleep M.R.
Expressions as Processes
Proc. of ACM LISP Conf 1982 p21-28 1982
KENN82b *
Kennaway J.R. & Sleep M.R.
Applicative Objects as Processes
3rd Int. Conf. on Distributed Computing Systems, Miami, Oct 1982
KENN82c *
Kennaway J.R. & Sleep M.R.
Director Strings as Combinators
Document, Computer Studies Centre, University of East Anglia, July 1982
KENN82d *
Kennaway J.R.
The Complexity of a Translation of Lambda-Calculus to Combinators
Document, Computer Studies Centre, University of East Anglia, June 1982
KENN83a *
Kennaway J.R. & Sleep M.R.
Novel Architectures for Declarative Languages
Software & MicroSystems Vol 2 No 3 p59-70 June 1983
KENN83b *
Kennaway J.R. & Sleep M.R.
Syntax and Informal Semantics of DyNe, a Parallel Language
Document, Computer Studies Centre, University of East Anglia, Nov 1983
KENN84a
Kennaway J.R.
An Outline of Some Results of Staples on Optimal Reduction orders
in Replacement Systems
Internal Report CSA/19/1984
Declarative Systems Architecture Group-4
Univ. of East Anglia
20 March 1984
KENN84b
Kennaway J.R. & Sleep M.R.
Efficiency of Counting Director Strings
Internal Report CSA/14/1984
Declarative Systems Architecture Group-2
Univ of East Anglia
May 17 1983
1984
KENN84c
Kennaway J.R. & Sleep M.R.
Counting Director Strings (Draft)
unpublished
Univ of East Anglia
October 31, 1984
KENN85a
Kennaway J.R. & Sleep M.R.
A Denotational Semantics for First-Class Processes (Draft)
Univ of East Anglia
Submitted for publication
August 1985
KENN86a *
Kennaway J.R.
Recursive Normalising L-Strategies For Combinatory Reduction Systems
School of Information Systems, University of East Anglia
March 21, 1986
KIEB81a
Kieburtz R.B. & Shultis J.
Transformations of FP Program Schemes
Proc. ACM Conf. on Functional Programming Languages and Computer
Architecture, New Hampshire, pp 41-48
October 1981
KIEB85a
Kieburtz R.B.
The G-Machine: A Fast Graph-Reduction Evaluator
Oregon Graduate Center Technical Report CS/E-85-002
1985
KITA83a
Kitakami H. & Furukawa K. & Takeuchi A. & Yokota H. & Miyachi T. & Kunifuji S.
A Knowledge Assimilation Method for Logic Databases
( Also in "Proceedings of International Symposium on Logic Programming",
Atlantic City, U.S.A., 1984, IEEE Computer Society Press )
( Also in New Generation Computing, Vol 2, No 4, 1984 )
ICOT Research Center, Technical Report TR-025
September 1985
KITA83b
Kitakami H. & Kunifuji S. & Miyachi T. & Furukawa K.
A Methodology for Implementation of a Knowledge Acquisition System
( Also in "Proceedings of International Symposium on Logic Programming",
Atlantic City, U.S.A., 1984, IEEE Computer Society Press )
ICOT Research Center, Technical Report TR-037
December 1983
KITA83c
Kitakami H. & Kunifuji S. & Miyachi T. & Furukawa K.
A Methodology for Implementation of a Knowledge Acquisition System
ICOT Research Center, Technical Memorandum TM-0024
August 1983
KITS84a
Kitsuregawa M. & Tanaka H. & Moto-oka T.
Relational Algebra Machine GRACE
Faculty of Eng, Dept of Information Eng, Univ of Tokyo
KLEE50a
Kleene S.C.
Introduction to Metamathematics
Van Nostrand, Princeton, 1950
KLUG79a
Kluge W.E.
The Architecture of A Reduction Machine Hardware Model
Gesellschaft fur Mathematik und Datenverarbeitung mbH, Bonn
Tech Rep ISF-Report 79.03,
August 1979
KLUG83a *
Kluge W.E.
Cooperating Reduction Machines
IEEE Transactions on Computers, vol c-32, no 11
pp 1002-1012
November 1983
KNUT70a
Knuth D.E. & Bendix P.B.
Simple Word Problems in Universal Algebra
in "Computational Problems in Abstract Algebra" (ed. Leech J.)
pp 263-297
Pergamon Press
1970
KOND84a
Kondou H.
Plan for Constructing Knowledge Architecture
ICOT Research Center, Technical Report TR-078
September 1984
KOTT75a
Kott L.
About a Transformation System - A Theoretical Study
Proc. 3rd Symposium on Programming, Paris
1975
KOWA74a
Kowalski R.
Predicate logic as a Programming Language
Proc. IFIP, pp 569-574
1974
North Holland
KOWA79a
Kowalski R.
Logic for Problem Solving
North Holland 1979
KOWA79b
Kowalski R.
Algorithm=Logic+Control
CACM Vol 22 No 7 p424-436 July 1979
KOWA80a
Kowalski R.
Logic as a Computer Language
Infotech State of the Art Conference on Software Development Techniques
1980
KOWA82a
Kowalski R.
Logic Programming
Department of Computing, Imperial College
(Later Presented at IFIP 83, pp 133-145, North Holland)
KOWA85a
Kowalski R.
The Relationship Between Logic Programming and Logic Specification
in HOA85a
1985
KUCH84a *
Kucherov G.A.
An Algorithm To Recognize Sufficient Completeness Of Algebraic
Specification Of An Abstract Data Type
Programming and Computer Software, 10, pp 161-168
1984
KULK86a *
Kulkarni K.G. & Atkinson M.P.
EFDM: Extended Functional Data Model
Computer Journal, Vol 29, no 1, pp 38-46
1986
KUNI82a
Kunifuji S. & Yokota H.
PROLOG and Relational Data Bases For Fifth Generation Computer Systems
( Also in "Proceedings of CERT Workshop on Logical Bases for Databases",
France, 1982 )
ICOT Research Center, Technical Report TR-002
September 1982
KUNI83a
Kunifuji S. & Enomoto H. & Yonezaki N. & Saeki M.
Paradigms of Knowledge Based Software System and Its Service Image
( Also in Third Seminar on Software Engineering, Florence, 1983 )
ICOT Research Center, Technical Report TR-030
November 1983
KUNI84a
ed. Kunii T.L.
VLSI Engineering
Springer Verlag 1984
KURO84a
Kurokawa T. & Tojyo S.
Coordinator - the Kernel of the Programming System for the Personal
Sequential Inference Machine (PSI)
ICOT Research Center, Technical report TR-061
April 1984
KUSA84a *
Kusalik A.J.
Serialization of Process Reduction In Concurrent Prolog
New Generation Computing 2, pp 289-298
Springer-Verlag
1984
LAMP81a
Lampson B.W. & Pier K.A.
A Processor for a High-Performance Personal Computer
CSL-81-1 , Xerox PARC, Jan 1981
LAMP81b
Lampson B.W. & McDaniel G.A. & Ornstein S.M.
An Instruction Fetch Unit for a High-Performance Personal Computer
CSL-81-1 , Xerox PARC, Jan 1981
LAND63
Landin P.J.
The Mechanical Evaluation of Expressions
Computer Journal 6(4), pp 308-320
(also see A Lambda Calculus Approach in
"Advances in Programming and Non-Numerical Computation"
ed. Fox L., p 67, Pergamon Press, Oxford, 1966)
1963
LAND65
Landin P.J.
A Correspondence Between Algol 60 and Church's Notation
CACM 8, p89
1965
LAND66
Landin P.J.
The Next 700 Programming Languages
CACM Vol 9, no 3, pp 157-166, March
1966
LASS85a *
Lassez J. -L. & Maher M.J.
Optimal Fixedpoints of Logic Programs
Theoretical Computer Science 39, pp 15-25
1985
LEHM81
Lehman M.M.
The Environment of Program Development and Maintenance-
Programs, Programming and Programming Support
Report 81/2
Dept of Computing, Imperial College
1981
LESC83a *
Lescanne P.
Behavioural Categoricity of Abstract Data Type Specifications
Computer Journal, Vol 26, no 4, pp 289-292
1983
LI84a
Li Deyi J.
A Prolog Database System
New York : John Wiley and Sons 1984
LIEB80
Lieberman H. & Hewitt C.E.
A Real Time Garbage Collector That Can Recover Temporary Storage Quickly
AI Memo 569, MIT Lab, Cambridge, 1980
LIEB83
Lieberman H. & Hewitt C.
A Real-Time Garbage Collector Based on the Lifetimes of Objects
CACM Vol 26 No. 6 p419-429 ,June 1983
LIND81
Lindstrom G. & Wagner R.
Incremental Recomputation on Data-Flow Graphs
Symposium on Functional Langs and Comp. Arch., Goteborg Univ.
1981
LIND83
Lindstrom G. & Hunt F.E.
Consistency and Currency in Functional Databases
Proc. Infocom 83, IEEE, San Diego
April 1983
LIND84a
Lindstrom G.
OR-Parallelism on Applicative Architectures
In Proc. 2nd International Logic Programming Conf., Uppsala Univ.
July 1984
LIND84b
Lindstrom G. & Panangaden P.
Stream-Based Execution of Logic Programs
Proc. 1984 Int'l Symp. on Logic Programming
February 1984
LIND85a
Lindstrom G.
Functional Programming and The Logical Variable
In Symposium on Principles of Programming Languages, ACM
January 1985
LIND85b
Lindstrom G.
Rule-Based Programming on Fifth Generation Computers
Proc. A.I. and Advanced Computer Technology Conf., Long Beach
June 1985
LINS85a *
Lins R.D.
The Complexity of a Translation of Lambda-Calculus to Categorical Combinators
University of Kent Computing Laboratory Report No 27
April 1985
LINS85b *
Lins R.D.
A New Formula For The Execution of Categorical Combinators
University of Kent Computing Laboratory Report No 33
November 1985
LINS85c *
Lins R.D.
On The Efficiency of Categorical Combinators as a Rewriting System
University of Kent Computing Laboratory No 34
November 1985
LINS85d
Lins R.D.
A New Way of Introducing Constants in Categorical Combinators
Privately Circulated
Computer Laboratory, Univ of Kent
1985
LOEC84a *
Loeckx J. & Sieber K.
The Foundations of Program Verification
Wiley-Teubner Series in Computer Science
John Wiley and Sons
1984
LONG76a *
Longo G. & Venturini Zilli M.
A Theory of Computation With an Identity Discriminator
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 147-167
Edinburgh University Press, 1976
MACQ84a *
MacQueen D.
Modules for Standard ML
Proceedings of 1984 ACM Symposium on Lisp and Functional Programming
Austin, Texas
pp 198-207
1984
MAGO79a
Mago G.A.
A Network of R Microprocessors To Execute Reduction Languages
Int. Journal of Computer and Information Sciences
Vol 8 no 5 and Vol 8 no 6
1979
MAGO80a
Mago G.A.
A Cellular Computer Architecture For Functional Programming
Proc. IEEE Compcon , pp 179-187
1980
MAIB85a *
Maibaum T.S.E.
Database Instances, Abstract Data Types and Database Specification
Computer Journal, Vol 28, no 2, pp 154-161
1985
MAL84a
Malpas John & O'Leary Kathy
Declarative Languages Under Unix
Microsystems, August 1984, page 94
MAL85a
Malpas John
Prolog as a UNIX System Tool
UNIX/World, Vol II, No 6, July 1985, pp 48-53
MANNA73a
Manna Z. & Ness S. & Vuillemin J.
Inductive Methods For Proving Properties Of Programs
CACM , August, 1973
MANNA82a
Manna Z.
Verification of Sequential Programs: Temporal Axiomatization
in BROY82a, pp 53-101
1982
MANNI85a *
Mannila H. & Mehlhorn K.
A Fast Algorithm For Renaming a Set of Clauses as a Horn Set
Information Processing Letters, Vol 21, No 5, pp 269-272
November 1985
MANU83a *
Manuel T. & Evanczuk S.
Commercial Products Begin to Emerge From Decades of Research
Electronics, November 3, 1983, pp 127-131
1983
MANU83b *
Manuel T.
Lisp and Prolog Machines are Proliferating
Electronics, November 3, 1983, pp 132-137
1983
MARK77a
Markusz Z.
How to Design Variants of Flats Using Programming Logic-PROLOG based
on Mathematical Logic
Information Processing 77, North Holland 1977
MARK85a *
Markusz Z. & Kaposi A.A.
Complexity Control in Logic-Based Programming
Computer Journal, Vol 28, no 5, pp 487-495
1985
MART85
Martin-Lof P.
Constructive Mathematics and Computer Programming
in HOA85a
1985
MARU84a
Maruyama F. & Mano I. & Hayashi K. & Kakuta T. & Kawado N. & Uehara T.
Prolog-Based Expert System For Logic Design
ICOT Research Center, Technical Report TR-058
April 1984
MATT70
Mattson R.L. & Gecsei J. & Slutz D.R. & Traiger I.L.
Evaluation Techniques for Storage Hierarchies
IBM Syst. J. Vol 9, 1970, pp 78-117
MAY83a
May D. & Taylor R.
OCCAM
INMOS Limited, 1983
MAY83b
May D.
OCCAM
ACM Sigplan Notices Vol 18 no 4, pp 69-79
1983
McCAB83
McCabe F.G.
Abstract PROLOG Machine- a Specification
Document Jan. 1983
McCAB85a
McCabe F.G.
Lambda PROLOG
Internal Report, Department of Computing, Imperial College
1985
McCAR60
McCarthy J.
Recursive Functions Of Symbolic Expressions And Their Computation By
Machine, Part 1
CACM, April, 1960
McCAR62
McCarthy J. et al
LISP 1.5 Programmer's Manual
MIT Press, 1962
McCAR63
McCarthy J.
A Basis For A Mathematical Theory of Computation
In Computer Programming and Formal Systems
(eds. Braffort P. & Hirschberg D.)
North Holland
1963
McCAR78
McCarthy J.
The History of LISP
Proceedings of SIGPLAN History of Programming Languages Conference, 1978
MEIR82a *
Meira S.L.
Sorting Algorithms in KRC Implemented in a Functional Programming System
University of Kent Computing Laboratory Report No 14
August 1982
MEIR83
Meira S.R.L.
Sorting Algorithms in KRC: Implementation, Proof and Performance
Computing Laboratory Rep no. 14, Univ. of Kent at Canterbury,
1983
MEIR84a
Meira S.R.L.
A Linear Applicative Solution To The Set Union Problem
Computing Laboratory Tech Rep. no 23,
Univ. of Kent at Canterbury
1984
MEIR84b *
Meira S.R.L.
Optimized Combinatoric Code for Applicative Language Implementation
Computing Laboratory Tech Rep. no 20,
Univ. of Kent at Canterbury
April 1984
MEIR85a *
Meira S.R.L.
On the Efficiency of Applicative Algorithms
PhD Thesis
Univ. of Kent at Canterbury
March 1985
MEIR85b *
Meira S.L.
A Linear Applicative Solution for The Set Union Problem
University of Kent Computing Laboratory Report No 28
May 1985
MEIR85c *
Meira S.L.
A Linear Applicative Solution for the Set Union Problem
Information Processing Letters 20, pp 43-45
2 January 1985
MESE85a *
Meseguer J. & Goguen J.A.
Deduction with Many-Sorted Rewrite
Center for the Study of Language and Information, Stanford University
Report No CSLI-85-42
December 1985
MILN78
Milner R.
A Theory of Type Polymorphism in Programming
J. Computer System Sci 17, pp 348-375
1978
MILN84a *
Milner R.
A Proposal For Standard ML
Proceedings of 1984 ACM Symposium on Lisp and Functional Programming
Austin, Texas
pp 184-197
1984
MILN85a
Milner R.
The Use of Machines to Assist in Rigorous Proof
in HOA85a
1985
MISH84a
Mishra P. & Reddy U.S.
Static Inference of Properties of Applicative Programs
In 11th Annual Symp. on Principles of Programming Languages, ACM
January 1984
MISH85a
Mishra P. & Reddy U.S.
Declaration-Free Type Checking
In Symp. on Principles of Programming Languages, ACM
January 1985
MIYA82a
Miyazaki N.
A Data Sublanguage Approach to Interfacing Predicate Logic Languages
and Relational Databases
ICOT TM-001
MIYA84a
Miyazaki N. & Kakuta T. & Shibayama S. & Yokota H. & Murakami K.
An Overview of Relational Database Machine Delta
ICOT Research Center, Technical Report TR-074
August 1984
MIZO85a
Mizoguchi F. & Furukawa K.
Guest Editors' Preface
New Generation Computing Vol 3 No 4, pp 341-344
1985
MOIT85a *
Moitra A.
Automatic Construction of CSP Programs From Sequential Non-Deterministic
Programs
Science of Computer Programming 5 , p277-307
1985
MOKH84a
Mokhoff N.
Parallelism Makes a Strong Bid for Next Generation Computers
Computer Design Vol 23 No 10 Sept 1984
MOON84a *
Moon D. A.
Garbage Collection in a Large LISP System
Proc. of 1984 ACM Conf. on Lisp and Functional Programming
Austin, Texas
pp 235-246
1984
MOOR80a
Moor I. & Darlington J.
A Formal Synthesis Of An Efficient Implementation For An Abstract Data Type
Internal Report, Dept of Computing, Imperial College
1980
MOOR85a *
Moore R.C.
Possible-World Semantics for Autoepistemic Logic
Center for the Study of Language and Information, Stanford University
Report No CSLI-85-41
December 1985
MORR73
Morris J.H.
Types are not Sets
Proc. ACM Symp. on Princ. of Prog. Langs., pp 120-124, October
1973
MORR80a
Morris J.H. & Schmidt E. & Wadler P.
Experience with an Applicative String Processing Language
Proc. 7th Annual SIGACT-SIGSOFT Symp. on Princ. of Prog. Langs,
Las Vegas, Nevada, 1980
1980
MORR80b
Morris F.L. & Schwarz J.S.
Computing Cyclic List Structures
Proc. of 1980 LISP Conf., pp 144-153
MORR82a
Morris J.H.
Real Programming in Functional Languages
in DARL82a
1982
MORR82b
Morris J.M.
A General Axiom of Assignment
in BROY82a, pp 25-34
1982
MORR82c
Morris J.M.
Assignment and Linked Data Structures
in BROY82a, pp 35-41
1982
MORR82d
Morris J.M.
A Proof of the Schorr-Waite Algorithm
in BROY82a, pp 43-51
1982
MOTO83a
Moto-oka T.
Overview of the Fifth Generation Computer System Project
Proc. 10th Annual International Symposium on Computer Architecture,
SIGARCH 11(3), pp 417-422
1983
MUKA83a
Mukai K. & Furukawa K.
An Ordered Linear Resolution Theorem Proving Program in Prolog
( Also in "Proceedings of IPSJ National Conference", 1983 )
ICOT Research Center, Technical Memorandum TM-0027
September 1983
MUKA85a
Mukai K. & Yasukawa H.
Complex Indeterminates in Prolog and its Application to Discourse Models
New Generation Computing, Vol 3, No 4, pp 441-466
1985
MURA83
Murakami K.
A Relational Data Base Machine: First Step to Knowledge Base Machine
Proc. 10th Annual International Symposium on Computer Architecture,
SIGARCH 11(3), pp 423-425, Sweden
1983
Also ICOT Research Center, Technical Report TR-012
( co authors cited here : Katuta T. & Miyazaki N. & Shibayama S. & Yokota H. )
May 1985
MUSS77
Musser D.R.
A Data Type Verification System based on Rewrite Rules
Proc. of 6th Texas Conference on Computing Systems, Austin, Texas, November
1977
MYCR81a *
Mycroft A.
Abstract Interpretation and Optimising Transformations for
Applicative Programs
PhD Thesis, Univ of Edinburgh
1981
MYCR83
Mycroft A. & Nielson F.
Strong Abstract Interpretation Using Power Domains (Extended Abstract)
Proc 10th Int. Colloq. on Automata, Languages and Programming
Barcelona, Spain, 18-22 July, 1983, pp 536-547
Springer Verlag LNCS no 154 (ed. Diaz J.)
1983
------------------------------
End of AIList Digest
********************
∂02-May-86 0446 LAWS@SRI-AI.ARPA AIList Digest V4 #110
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 2 May 86 04:46:30 PDT
Date: Thu 1 May 1986 22:45-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #110
To: AIList@SRI-AI
AIList Digest Friday, 2 May 1986 Volume 4 : Issue 110
Today's Topics:
Queries - Common LISP Coding Standards & Neural Networks,
Literature - Connection Machine Article,
AI Tools - Expert Systems Software for MS-DOS,
Anthology - Genetic Algorithms and Simulated Annealing,
Linguistics - OpEd & Italo Calvino AI Project & Trademarks
----------------------------------------------------------------------
Date: Thu, 1 May 86 8:49:14 EDT
From: Marty Hall <hall@hopkins-eecs-bravo.ARPA>
Subject: RE: Common LISP coding standards
Hugh Mcguire writes:
> Perhaps Marty Hall was seeking some guide to LISP style, similar to
> Legard's (et al.'s) *Pascal with Style*; I certainly would find such
> useful, and perhaps others would also...
Yes! That is exactly what I am looking for, and so far have received
only meager replies. The points Hugh mentioned are exactly the
kinds of questions we want to have standards on.
Anyone have anything?
-Marty Hall
Arpa: hall@hopkins
uucp: ...seismo!umcp-cs!jhunix!ins_amrh
------------------------------
Date: 29 Apr 86 23:59:35 GMT
From: ihnp4!mhuxt!js2j@ucbvax.berkeley.edu (sonntag)
Subject: neural networks
A recent issue of 'Science' had an article on 'neural networks', which
apparently consist of a highly interconnected repetition of some sort of
simple 'nodes' with an overall positive feedback and some sort of
randomness thrown in for good measure. When these networks are 'powered up',
the positive feedback quickly forces the system into a stable state, with
each node either 'on' or 'off'. The article claimed that some
simulations of moderate sized (10K nodes?) networks had been done, and
reported some rather amazing results. For one thing, it was discovered
that if just 50 out of 10k nodes are preset to a particular value, the
network has just ~100 very similar stable states, out of 10**1000 possibilities.
They also claimed that one such system was able to arrive at a 'very good'
solution to arbitrary 'traveling salesman' problems! And that another
network (hooked to a piece of equipment which could produce phonemes, and
presumably some kind of feedback) had been 'trained' to read English text
reasonably well. They said incredibly little about the actual details of
how each node operates, unfortunately.
So how about it? Has anybody else heard of these things? Is this
really a way of going about AI in a way which *may* be similar to what
brains do? Just exactly what algorithms are the nodes implementing, and
how do you provide input and get output from them? Does anyone know
where I could get more information about them?
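[The behavior described above matches what are now called Hopfield networks:
symmetric weights, binary nodes, and asynchronous updates that settle into
stable states near stored patterns. As a purely illustrative sketch (the
article gives no code, and the update rule below is the standard Hopfield
rule, not necessarily the one simulated), a tiny network in Python might
look like this:

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product learning; diagonal zeroed (no self-feedback)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / patterns.shape[0]

def recall(w, state, sweeps=10):
    """Asynchronous +1/-1 updates; the feedback drives the net to a stable state."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # "preset" one node to the wrong value
recovered = recall(w, noisy)  # settles back onto the stored pattern
```

Presetting a few nodes and letting the rest settle, as in the article's
50-of-10k experiment, is exactly this kind of content-addressable recall,
just at a much larger scale. -- Ed.]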
Jeff Sonntag
ihnp4!mhuxt!js2j
[I will send Jeff a copy of our January discussion on the
connectionist speech learning project. -- KIL ]
------------------------------
Date: Thu 1 May 86 14:29:06-CDT
From: Jonathan Slocum <AI.Slocum@MCC.ARPA>
Subject: Connection Machine article
Hillis has written a book entitled "The Connection Machine."
It's generally available: I purchased a copy recently.
------------------------------
Date: Thu 1 May 86 14:49:50-CDT
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Re: Expert systems software for MS-DOS
I hope my Macintosh forgives me for this, but here goes!
I have a few additions to Paul Chisholm's list of expert systems products
for IBM PC's. Since there were so many items, I decided to send them
directly to AIList, as well as to Mr. Chisholm. I have included a number
of AI language implementations, including Lisps, since he isolated xlisp
for some reason. Also I've included a few decision support systems,
which aren't really AI or expert system by-products. But they often do
just as much as expert system shells, and some of the vendors are even
marketing them as AI, so what the heck.
I haven't read Mr. Chisholm's list all that carefully, but I did notice
some minor errors: Personal Consultant Plus is written in PC Scheme
(not in IQLisp); Mountain View Press's Expert is now known as Expert-2;
what he refers to as expert systems are in fact expert system shells or
development tools. Some specific expert systems are being marketed,
however (a couple of which are on my list below).
The names, addresses, phone numbers, and especially prices are not
guaranteed to be free from typos, line noise, or obsolescence. I have
little experience or further information on any of these packages. So
please don't address questions to me -- call the companies.
Now, what you've all been waiting for:
AL/X: Expert system shell
ALCS: Expert system shell
Inference Manager: Expert system shell, 500 pounds
Intelligent Terminals Ltd
15 Canal St.
Oxford, UK OX26BH
Also:
George House
36 North Hanover St.
Glasgow, Scotland G1 2AD
041-552-1353
(These might be available from Jeffrey Perrone & Associates)
apes: Expert system shell [micro-Prolog], $250
Programming Logic Systems
312 Crescent Dr.
Milford, CT 06460
(203) 877-7988
ERS: Expert system shell
PAR Technology Corp.
220 Seneca Turnpike
New Hartford, NY 13413
GEN-X: Expert system shell
General Electric Research and Development Center
Schenectady, NY 12345
K:base: Expert system shell
GCLisp (Golden Common Lisp), $495
Gold Hill Computers
163 Harvard St.
Cambridge, MA 02139
(404) 565-0771
M.1A: Expert system shell, $2000
Teknowledge Inc.
525 University Ave.
Palo Alto, CA 94301
415-327-6640
Savior: Expert System Shell, 3000 pounds
ISI Limited
11 Oakdene Road
Redhill, Surrey, UK RH16BT
(0737)71327
SeRIes PC: Expert system shell, $15,000
SRI International
Advanced Computer Systems Division
333 Ravenswood Avenue
Menlo Park, CA 94025
(415) 859-2859
TOPSCI: Expert system shell, $75/$175
Dynamic Master Systyems Inc.
PO Box 566456
Atlanta, GA 30356
(404) 565-0771
Micro In-Ate: Expert system shell for fault diagnosis, $5000
Automated Reasoning Corporation
290 West 12th St., Suite 212-252
New York, NY 10014
(212) 206-6331
TK!Solver: Symbolic math expert, $399
Lotus/Software Arts
27 Mica Lane
Wellesley, MA 02181
(617) 237-4000
Comprehension: Expert system for thought analysis, $75
Thunderstone Corp.
PO Box 839
Chesterland, OH 44026
(216) 729-1132
Arborist: Decision support, $595
PC Scheme: Lisp, $95
Texas Instruments
PO Box 809063
Dallas, TX 75380-9063
(800) 527-3500
Expert Choice: Decision support, $495
Decision Support Software Inc.
1300 Vincent Place
McLean, VA 22101
(703) 442-7900
Lightyear: Decision support, $495
Lightyear, Inc.
1333 Lawrence Expwy., Bldg. 210
Santa Clara, CA 95051
(408) 985-8811
Byso Lisp, $125
Levien Instrument Co.
Sittlington Hill
PO Box 31
McDowell, VA 24458
(703) 396-3345
Q'NIAL: Nested Interactive Array Language, $395/$995
Starwood Corporation
PO Box 160849
San Antonio, TX 78280
(512) 496-8037
Methods: SmallTalk, $250
Digitalk Inc.
5200 West Century Blvd.
Los Angeles, CA 90045
(213) 645-1082
IQLisp, $175
Integral Quality
6265 Twentieth Avenue (or POB 31970)
Seattle, WA 98115
(206) 527-2918
LISP/80, $40
Software Toolworks
15233 Ventura Blvd., Suite 1118
Sherman Oaks, CA 91403
(818) 986-4885
LISP/88, $50
Norell Data Systems
PO Box 70127
3400 Wilshire Blvd
Los Angeles, CA 90010
(213) 748-5978
muLisp-85, $250
Microsoft Corp.
10700 Northup Way
Box 97200
Bellevue, WA 98004
(206) 828-8080
PSL (Portable Standard Lisp), Distribution costs ($75?)
The Utah Symbolic Computation Group
Department of Computer Science
University of Utah
Salt Lake City, UT 84112
TLC-Lisp, $250
The Lisp Co.
PO Box 487
Redwood Estates, CA 95044
(408) 426-9400
UO-Lisp, $150
Northwest Computer Algorithms
PO Box 90995
Long Beach, CA 90809
(213) 426-1893
Waltz Lisp, $169
ProCode International
15930 SW Colony Place
Portland, OR 97224
(503) 684-3000
There are some reasonable reviews of AI tools and languages for the
IBM PC in "Computer Language", July and August, 1985. The October 1985
issue of "Expert Systems" contains surveys and descriptions of expert
system shells and languages on micros. The books "Understanding AI"
(H.C. Mishkoff) and "Expert Systems: AI in Business" (P. Harmon and
D. King) also have useful information about expert system products on
the IBM PC.
Dallas Webster
CMP.BARC@R20.UTexas.Edu
{ihnp4 | seismo | ctvax}!ut-sally!batman!dallas
------------------------------
Date: Thu 1 May 86 11:54:01-PDT
From: Matt Heffron <BEC.HEFFRON@USC-ECL.ARPA>
Subject: Another PC Expert System Application
SpinPro (tm)
$2500
written in GCLISP
Plans Ultracentrifugation experiments for bio-tech lab
Beckman Instruments, Inc.
Spinco Division
(415)-857-1150 (sales info)
(714)-961-3728 (technical info) Matt Heffron
------------------------------
Date: Thu 1 May 86 15:50:52-EDT
From: DDAVIS@G.BBN.COM
Subject: Anthology - Genetic Algorithms and Simulated Annealing
GENETIC ALGORITHMS AND SIMULATED ANNEALING,
Call For Papers
The Pitman Series of Research Notes in Artificial Intelligence
(Derek Sleeman and N.S. Sridharan, Senior Editors) will publish a
volume of papers entitled "Genetic Algorithms and Simulated
Annealing" early in 1987. The volume will be edited by David
Davis of Bolt Beranek and Newman and will be refereed by experts
in the fields of genetic algorithms and simulated annealing.
Submissions to the volume are invited. Papers should be no more
than 20 pages in length, should be primarily concerned with one
or both of the two fields of research, and should conform to
accepted editorial standards. In order to submit a paper,
mail four copies to
(Lawrence) David Davis
BBN Laboratories Incorporated
10 Moulton Street
Cambridge, MA 02238.
In order to prepare and publish the volume on time, we will not
be able to consider papers postmarked after September 30, 1986.
For further information, contact David Davis at (617) 497-3120,
or send electronic mail to ddavis@bbng.
------------------------------
Date: Thu, 1 May 86 08:30:38 cdt
From: porter@fall.cs.UTEXAS.EDU
Subject: Colloq at UTexas
It appears to be necessary to present an alternate opinion of the
colloquium presented by Sergio Alvarado at UTexas on April 22.
We do NOT agree with the evaluation written by Aaron Temin and
posted to this bulletin board on April 24. Mr. Temin is a
graduate student in the computer sciences department. We believe
that his critical review was inaccurate.
Alvarado's colloquium reviewed his PhD dissertation research at UCLA
(under Michael Dyer). His research presents a computational model for
comprehension of arguments, such as those in letters to the editor
of a newspaper.
Alvarado's system, called OpEd, recognizes the structure of arguments
as a critical first step in their comprehension. For example,
Alvarado reviewed an example of an editorial by Milton Friedman which
argues that restriction of foreign imports will have negative consequences
for employment. From natural language input, OpEd recognizes
this editorial as an instance of the "plan achieves the opposite
of the desired effect" argument structure. OpEd uses bottom-up processing
to instantiate argument structures and top-down processing to
disambiguate interpretation using a (partially) instantiated structure.
The important contributions of Alvarado's research thus far
include a collection of general argument structures and a computational
model for recognition of a particular argument structure in text.
Alvarado is investigating the extension of this research to include
argument evaluation and teaching of argumentation skills.
In summary, Alvarado's research is an extension of "knowledge-rich"
NLP into a challenging domain. Significant results were obtained
and promising research directions illuminated.
Robert Simmons
Ben Kuipers
Bruce Porter
------------------------------
Date: 2 May 86 04:44:05 GMT
From: brahms!gsmith@ucbvax.berkeley.edu (Gene Ward Smith)
Subject: Italo Calvino AI project
I must apologize to Bandy for posting a genuine rumor to net.rumor, but
this is a real rumor I found on net.followup:
>I have it on good authority (although second-hand) that an entire
>*novel* was generated by computer. It was the result of a research
>project which aimed to "parameterize" an author's writing style. The
>study concentrated primarily on one author, Italo Calvino, and I have
>heard that the novel, "If on a winter's night a traveller", was actually
>published and marketed with Calvino's blessing.
[Jack Orenstein]
ucbvax!brahms!gsmith Gene Ward Smith/UCB Math Dept/Berkeley CA 94720
ucbvax!weyl!gsmith The Josh McDowell of the Net
------------------------------
Date: Thu, 1 May 86 12:10 EDT
From: Seth Steinberg <sas@BBN-VAX.ARPA>
Subject: Trademarks
Under trademark law Xerox is obligated to point out misuses of their
company name or they stand the chance of it legally falling into the
public domain. If this happens Canon will be able to advertise their
copier as the Canon(TM) Xerox Machine. I have been corrected at
restaurants when I order a Coke and they only have Pepsi or RC. There
was a particular consent agreement with Brigham's a little while ago.
Trademarks, as Shakespeare pointed out in Othello, have an intrinsic
worth and the value of something like Xerox runs in the hundreds of
millions. How much did Standard Oil spend on ads to tell you about
Exxon? Some trademarks such as zipper and aspirin have fallen into
common usage. When Aspirin fell into common usage, the company
(Sterling?) was given the trademark Bayer which originally belonged to
Bayer AG but had been seized during World War I as enemy property.
Seth
I would have signed this Bill Bulko but I know how people feel about
names.
------------------------------
Date: Thu 1 May 86 12:23:50-PDT
From: Rich Alderson <ALDERSON@SU-SCORE.ARPA>
Subject: "Xerox" vs. "xerox"?
Laws concerning trademark usage aside, De Smedt is perfectly
correct in pointing out that the verb 'to xerox', meaning to copy
on a dry-xerographic copier, and associated constructions (a
xerox copy, etc.), are now in fact part of the language. [...]
It's no use, guys, you can't stop people using the word in the way
they want to. A dictionary which omitted 'to xerox' would not be
accurate.
It's interesting to note that at one time, "frigidaire" (no caps) was
considered to be a synonym for "refrigerator." Frigidaire, the
company, fought this in order not to lose trademark status. How often
does one hear this usage these days?
(Not to mention those not in the various computer-related fields who
STILL use "IBM" to mean "computer"...)
Rich Alderson
Alderson@Score.Stanford.EDU (=SU-SCORE.ARPA)
------------------------------
End of AIList Digest
********************
∂04-May-86 0035 LAWS@SRI-AI.ARPA AIList Digest V4 #111
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 May 86 00:35:38 PDT
Date: Sat 3 May 1986 22:15-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #111
To: AIList@SRI-AI
AIList Digest Sunday, 4 May 1986 Volume 4 : Issue 111
Today's Topics:
Bibliography - References #5
----------------------------------------------------------------------
Date: 9 Apr 86 13:23:16 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Bibliography - References #5
NAKA85a
Nakamura K.
Book Review
"Introduction to Logic Programming" by C.J. Hogger, Academic Press, 290 pages,
1984
New Generation Computing, Vol 3, No 4, p 487
1985
NAKA85b
Nakashima H. & DeGroot D.
Conference Report
A Report on 1985 International Symposium on Logic Programming
New Generation Computing, Vol 3, No 4, pp 488-489
1985
NATA86a *
Natarajan N.
A Distributed Synchronisation Scheme for Communicating Processes
Computer Journal, Vol 29, No 2, pp 109-117
April 1986
NIEL84a *
Nielson F.
Abstract Interpretation Using Domain Theory
Department of Computer Science, University of Edinburgh
PhD Thesis, CST-31-84
October 1984
NIEL *
Nielson F.
A Bibliography On Abstract Interpretation
NIPK85a *
Nipkow T.
Non-Deterministic Data Types: Models and Implementations
Dept of Comp Sci, Univ of Manchester, Technical Report UMCS-85-10-1
October 1985
NISH83a
Nishikawa H. & Yokota M. & Yamamoto A. & Taki K. & Uchida S.
The Personal Inference Machine (PSI): Its Design Philosophy and Machine
Architecture
( Also in "Proceedings of Logic Programming Workshop, '83", Portugal 1983 )
ICOT Research Center, Technical Report TR-013
June 1983
NIVA82a
Nivat M.
Behaviours of Processes and Synchronised Systems of Processes
in BROY82a, pp 473-550
1982
NORM80a
Norman A. et al
SKIM- The S,K,I Reduction Machine
Proc. LISP Conf.
1980
OHSU85a
Ohsuga S. & Yamauchi H.
Multi-Layer Logic - A Predicate Logic Including Data Structure as Knowledge
Representation Language
New Generation Computing, Vol 3, No 4, pp 403-439
1985
ONAI84a
Onai R. & Aso M. & Takeuchi A.
An Approach to a Parallel Inference Machine Based on Control-Driven and
Data-Driven Mechanisms
ICOT Research Center, Technical Report TR-042
January 1984
OSH84
eds. O'Shea T. & Eisenstadt M.
Artificial Intelligence Tools, Techniques and Applications
Harper & Row, Publishers, New York
1984
PATT81a *
Patterson D.A. & Sequin C.H.
RISC 1 : A Reduced Instruction Set VLSI Computer
Proc 8th International Symposium on Computer Architecture
SIGARCH News vol 9, no 3
pp 443-457
1981
PATT82a
Patterson D.A. & Sequin C.H.
A VLSI RISC
Computer Vol 15 No 9, pp 8-21, Sept 1982
PATT84a
Patterson D.A.
VLSI Systems Building: A Berkeley Perspective
Proc. Conf. on Advanced Research in VLSI, MIT
January 1984
PATT85
Patterson D.A.
Reduced Instruction Set Computers
CACM Vol 28, No 1
January 1985
PAUL84a *
Paulson L.C.
Constructing Recursion Operators in Intuitionistic Type Theory
Computing Laboratory, University of Cambridge
Technical Report no 57
October 1984
PAUL85a *
Paulson L.C.
Lessons Learned From LCF: A Survey of Natural Deduction Proofs
Computer Journal, Vol 28, no 5, pp 474-479
1985
PERE79a *
Pereira L.M. & Porto A.
Intelligent Backtracking and Sidetracking in Horn Clause Programs - The Theory
Universidade Nova de Lisboa, Report no 2/79
1979
PERE79b *
Pereira L.M.
Backtracking Intelligently in AND/OR Trees
Universidade Nova de Lisboa, report no 1/79
1979
PEYT82
Peyton Jones S.L.
An Investigation of the Relative Efficiencies of Combinators and
Lambda Expressions
Proc. of ACM LISP Conf 1982 p150-158
PEYT84
Peyton Jones S.L.
Directions in Functional Programming Research
in DUCE84
1984
PEYT85a
Peyton Jones S.L.
GRIP-a parallel graph reduction machine
Dept. of Computer Science, Internal Note 1665, grm.design v1.0, Jan 1985
PEYT85b *
Peyton Jones S.L.
Functional Programming Languages as a Software Engineering Tool
2nd December 1985
PEYT86a *
Peyton Jones S.L.
Parsing Distfix Operators
CACM, Vol 29, no 2, pp 118-122
February 1986
PIER83
Pier K.A.
A Retrospective on the Dorado, A High Performance Personal Computer
ISL-83-1, Xerox PARC, 1983
PELE84a *
Peleg D.
Communication in Concurrent Dynamic Logic
CS84-15
Dept of Applied Mathematics, Weizmann Institute of Science, Israel
July 1984
PELE84b *
Peleg D.
Concurrent Dynamic Logic
CS84-14
Dept of Applied Mathematics, Weizmann Institute of Science, Israel
July 1984
PING84a
Pingali K. & Arvind
Efficient Demand-Driven Evaluation (I)
Lab. For Computer Science Technical Memo 242
September 1984
PING84b
Pingali K. & Arvind
Efficient Demand-Driven Evaluation (II)
Lab For Computer Science Technical Memo 243
November 1984
PLAI85a *
Plaisted D.A.
The Undecidability of Self-Embedding For Term Rewriting Systems
Information Processing Letters 20, pp 61-64
15 February 1985
PLES85a *
Pless E.
Die Übersetzung von LISP in die Reduktionssprache BRL
[The translation of LISP into the reduction language BRL]
GMD 142
March 1985
PLOT76
Plotkin G.D.
A Powerdomain Construction
SIAM J. Comput. 5 3 pp 452-487
September 1976
PLOT82
Plotkin G.D.
A Power Domain For Countable Non-Determinism (Extended Abstract)
Proc 9th Int. Colloq. on Automata, Languages and Programming
Springer Verlag LNCS no 140, pp 418-428
(ed. Nielson M. & Schmidt E.M.)
1982
POON85
Poon E.K. & Peyton Jones S.L.
Cache Memories in a Functional Programming Environment
Dept. of Computer Science, Univ. College London, Internal Note 1680, Jan 1985
PPRG9
Persistent Programming Research Group
Procedures as Persistent Data Objects
Persistent Programming Research Report 9
PPRG11
Persistent Programming Research Group
PS-Algol Abstract Machine Manual
Persistent Programming Research Report 11
PPRG12
Persistent Programming Research Group
PS-Algol Reference Manual Second Edition
Persistent Programming Research Report 12
PRAM85a *
Pramanik S. & King C-T
Computer Journal, Vol 28, no 3, pp 264-269
1985
PRO84a
Prolog: A Tutorial/Review
Microsystems, January 1984, page 104
1984
PULL84a
Pull H.
A HOPE in HOPE Interpreter
BSc. Undergraduate Thesis, Department of Computing, Imperial College
1984
PYKA85a *
Pyka C.
Syntactic Analysis
Forschungsstelle für Informationswissenschaft und Intelligenz,
Universität Hamburg
LOKI Report NLI - 4.1
November 1985
QUI60a
Quine W.V.O.
Word and Object
MIT Press, Cambridge, 1960
REDD84a
Reddy U.S.
Transformation of Logic Programs into Functional Programs
Proc. 1984 Int'l Symp. on Logic Programming
Feb 1984
REDD85
Reddy U.S.
On The Relationship between Logic and Functional Languages
In "Functional and Logic Programming"
(eds. DeGroot D. & Lindstrom G.)
Prentice-Hall
1985
REEV81a
Reeve M.
The ALICE Compiler Target Language
Document, Dept of Computing, Imperial College, May 1981
REEV81b
Reeve M.
An Introduction to the ALICE Compiler Target Language
Research Report, Dept of Computing, Imperial College ,July 1981
REEV85a *
Reeve M.
A BNF Description Of The Alice Compiler Target Language
1985
REYN72
Reynolds J.C.
Definitional Interpreters For Higher Order Programming Languages
Proc 25th ACM National Conf, pp 717-740
1972
RICH82
Richmond G.
A Dataflow Implementation of SASL
Msc Thesis, Dept of Comp Sci, Univ. of Manchester, October 1982.
ROBI65a
Robinson J.A.
A Machine Oriented Logic Based on The Resolution Principle
J. Ass. Comput. Mach. 12, pp 23-41
1965
ROBI77
Robinson J.A.
Logic: Form and Function
Edinburgh University Press
1979
ROBI83a *
Robinson J.A.
Logic Programming - Past, Present and Future
( Also in New Generation Computing, Vol 1, No 2, 1983 )
ICOT Research Center, Technical Report TR-015
June 1983
ROSE85a
Rosenschein S.J.
Formal Theories of Knowledge in AI and Robotics
New Generation Computing, Vol 3, No 4, pp 345-357
1985
RUSS10,RUSS25a
Russell B. & Whitehead A.N.
Principia Mathematica
Cambridge University Press, 1910 & 1925
RYDE81a *
Rydeheard D.E.
Applications of Category Theory to Programming and Program Specification
Department of Computer Science, University of Edinburgh
PhD Thesis, CST-14-81
December 1981
RYDE85a *
Rydeheard D.E. & Burstall R.M.
The Unification of Terms: A Category-Theoretic Algorithm
Dept of Comp Sci, Univ of Manchester, Technical Report UMCS-85-8-1
August 1985
SAIN84a *
Saint-James E.
Recursion is More Efficient than Iteration
Proceedings of 1984 ACM Symposium on Lisp and Functional Programming
Austin, Texas
pp 228-234
1984
SAKA83a *
Sakai K. & Miyachi T.
Incorporating Native Negation into PROLOG
( Also in "Proceedings of RIMS Symposia on Software Science and Engineering",
1984, Springer-Verlag )
( Also in "Proceedings of Logic and Conference", Monash Univ., 1984 )
ICOT Research Center, Technical Report TR-028
October 1983
SAKA84a
Sakai K.
An Ordering for Term Rewriting Systems
ICOT Research Center, Technical Report TR-062
April 1984
SAKA84b
Sakai H. & Iwata K. & Kamiya S. & Abe K. & Tanaka T. & Shibayama S &
Murakami K.
Design and Implementation of the Relational Database Engine
( Also in "Proceedings of FGCS 84", Tokyo, 1984 )
ICOT Research Center, Technical Report TR-063
April 1984
SAKA85a
Sakai T.
Intelligent Sensor
Preface for New Generation Computing Vol 3 No 4, 1985, pp 339-340
1985
SAME84a
Samet H.
The Quadtree and Related Hierarchical Data Structures
ACM Comp. Surveys Vol 16, No 2, June 1984, pp 187-260
SAR82a
Sargeant J.
Implementation of Structured LUCID on a Data Flow Computer
MSc Thesis, Dept of Comp Sci, Univ. of Manchester, October 1982
SATO83a *
Sato M. & Sakurai T.
Qute: A Prolog/Lisp Type Language for Logic Programming
( Also in "Proceedings of 8th IJCAI", Karlsruhe, 1983 )
ICOT Research Center, Technical Report TR-016
August 1983
SATO84a
Sato M. & Sakurai T.
Qute Users Manual
Dept. of Information Science, Faculty of Science, University of Tokyo
SATO84b *
Sato T. & Tamaki H.
Enumeration of Success Patterns In Logic Programs
Theoretical Computer Science, pp 227-240
1984
SCHM78a
Schmitz L.
An Exercise in Program Synthesis: Algorithms For Computing The
Transitive Closure of A Relation
Internal Report, Hochschule der Bundeswehr, Munich
1978
SCHM85a *
Schmittgen C. & Gerdts A. & Haumann & Kluge W. & Woitass
A System-Supported Workload Balancing Scheme for Cooperating Reduction
Machines
GMD Tech Rep
June 1985
SCHM85b *
Schmittgen C.
A Data Type Architecture for Reduction Machines
GMD 152
May 1985
SCHM85c *
Schmidt D.A.
Detecting Global Variables in Denotational Specifications
ACM Transactions on Programming Languages and Systems, Vol 7, no 2
pp 299-310
April 1985
SCHWA76a *
Schwartz J.
Event Based Reasoning - A System For Proving Correct Termination of Programs
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 131-146
Edinburgh University Press, 1976
SCHWA77a
Schwartz J.
Using Annotations To Make Recursion Equations Behave
Report No 43, Dept of A.I., Univ of Edinburgh
1977
SCHWE84a
Schweppe H.
Some Comments on Sequential Disk Cache Management for Knowledge Base Systems
ICOT Research Center, Technical Report TR-040
January 1984
SCOT70a
Scott D.S.
Outline of a Mathematical Theory of Computation
Oxford University Programming Research Group
Tech Monograph no 2
1970
SCOT71a
Scott D. & Strachey C.
Towards a Mathematical Semantics for Computer Languages
1971 Symposium on Computers and Automata
Microwave Research Institute Proceedings, Vol 21
Polytechnic Institute of Brooklyn
1972
SCOT76
Scott D.S.
Data Types as Lattices
SIAM J. Computing, Vol 5, pp 522-587
1976
SCOT81
Scott D.
Lectures on a Mathematical Theory of Computation
Tech. Monograph PRG-19
Oxford Univ, Computing Lab, Programming Research Group
1981
SCOT82a
Scott D.
Domains for Denotational Semantics
Automata, Languages and Programming, Proc 10th Int. Colloq.
(ed. Nielsen M. & Schmidt E.M.)
Springer Verlag LNCS no 140, pp 577-613
1982
SCOT82b
Scott D.S.
Lectures on a Mathematical Theory of Computation
in BROY82a, pp 145-292
1982
SEIT85
Seitz C.L.
The Cosmic Cube
CACM Vol 28, no 1, January 1985
SERG82
Sergot M.
A Query-the-User Facility for Logic Programming
Proc. ECICS, Stresa, Italy, (eds. P. Degano & E. Sandwall)
pp 27-41, 1982
North Holland
SHAN85a *
Shanahan M.
The Execution of Logic Programs Considered as the Reduction of Set Expressions
Computer Laboratory, University of Cambridge
October 1985
SHAR85a
Sharp J.A.
Data Flow Computing
Ellis Horwood, March 1985
SHAP83a *
Shapiro E.Y.
A Subset of Concurrent Prolog and its Interpreter, 2nd Version
ICOT Research Center, Technical Report TR-003
January 1983
SHAP83b *
Shapiro E.Y. & Takeuchi A.
Object Oriented Programming in Concurrent Prolog
( Also in New Generation Computing, Springer Verlag, Vol 1, No 1, 1983 )
ICOT Research Center, Technical Report TR-004
April 1983
SHAP83c *
Shapiro E.Y.
Systems Programming in Concurrent Prolog
( Also in "Proceedings of the 11th Annual ACM Symposium on Principles of
Programming Languages" )
ICOT Research Center, Technical Report TR-034
November 1983
SHAP84a *
Shapiro E. & Mierowsky C.
Fair, Biased, and Self-Balancing Merge Operators :
Their Specification and Implementation in Concurrent Prolog
CS84-07
Dept of Applied Mathematics, Weizmann Institute of Science, Israel
1984
SHAP84b *
Shapiro E.Y.
Alternation and the Computational Complexity of Logic Programs
CS84-06
Dept of Applied Mathematics, Weizmann Institute of Science, Israel
January 1984
SHAW85a *
Shaw D.E. & Sabety T.M.
The Multiple-Processor PPS Chip of the NON-VON 3 Supercomputer
Integration, the VLSI Journal, 3, pp 161-174
1985
SHEI83a *
Sheil B.
Family of Personal Lisp Machines Speeds AI Program Development
Electronics, November 3, 1983, pp 153-156
1983
SHIB82a
Shibayama S. & Kakuta T. & Miyazaki N. & Yokota H. & Murakami K.
A Relational Database Machine "Delta"
ICOT Research Center, Technical Memorandum TM-0002
November 1982
SHIB84a
Shibayama S. & Kakuta T. & Miyazaki N. & Yokota H. & Murakami K.
A Relational Database Machine with Large Semiconductor Disk and Hardware
Relational Algebra Processor
( Also in New Generation Computing, Vol 2, No 2, 1984 )
ICOT Research Center, Technical Report TR-053
March 1984
SHIB84b
Shibayama S. & Kakuta T. & Miyazaki N. & Yokota H. & Murakami K.
Query Processing Flow on RDBM Delta's Functionally-Distributed Architecture
ICOT Research Center, Technical Report TR-064
April 1984
SHIE85a *
Shields M.W.
Concurrent Machines
Computer Journal, Vol 28, no 5, pp 449-465
1985
SHIM83a
Shimizu H.
GP-PRO Graphic Display Control Library Written in Prolog
ICOT Research Center, Technical Memorandum TM-0025
August 1983
SHIP81a
Shipman D.W.
The functional data model and the data language DAPLEX
ACM TODS, Vol 6, No 1, pp 140-173, 1981
SHOH85a
Shoham Y.
Ten Requirements for a Theory of Change
New Generation Computing, Vol 3, No 4, pp 467-477
1985
SICK82a
Sickel S.
Specification and Derivation of Programs
in BROY82a, pp 103-132
1982
SIVI85a *
Sivilotti M. & Emerling M. & Mead C.
A Novel Associative Memory Implemented Using Collective Computation
1985 Chapel Hill Conference on VLSI, pp 329-342
1985
SLEE80
Sleep M.R.
Applicative Languages, Dataflow and Pure Combinatory Code
Proc IEEE Compcon 80, pp 112-115
February 1980
SLEE82
Sleep M.R. & Holmstrom S.
A Short Note Concerning Lazy Reduction Rules of Append
Document, Computer Studies Centre, University of East Anglia,May 1982
SLEE83
Sleep M.R.
Novel Architectures
Distributed Computing- A Review for Industry, SERC, Manchester 1983
SLEE84
Sleep M.R. and Kennaway J.R.
The Zero Assignment Parallel Processor (ZAPP) Project
in DUCE84
1984
SLEE86a *
Sleep M.R.
Directions in Parallel Architecture
in BCS86a
1986
SLOM83a *
Sloman A. & Hardy S.
Poplog : A Multi-Purpose Multi-Language Program Development Environment
AISB Quarterly, vol 47, pp 26-34
1983
SMYT78a *
Smyth M.B.
Power Domains
Journal of Computer and System Sciences, Vol 16, pp 23-36
1978
SNYD79
Snyder A.
A Machine Architecture to Support an Object-Oriented Language
MIT Laboratory for Computer Science, MIT/LCS/TR-209, March 1979
SOLE85
Soley M.S.
Generic Software for Emulating Multiprocessor Architectures
Draft of MSc Thesis to be submitted May 1985
SPIV84
Spivey M.
University of York Portable Prolog System Users Guide
University of York 1984
SRIN86a *
Srini V.P.
An Architectural Comparison of Dataflow Systems
IEEE Computer, March 1986, pp 68-88
1986
STAL85a *
Stallard R.P.
Occam - A Brief Introduction
Occam - The Loughborough Implementation
Computer Studies Laboratory Report
Dept of Computer Studies, Loughborough University of Technology.
November 1985
STAM85a *
Stammers R.A.
Report to the Alvey Directorate on a Short Survey of The Industrial
Applications of Logic and Functional Programming in the United
Kingdom and United States
27 August 1985
STAP77a *
Staples J.
A Class of Replacement Systems With Simple Optimality Theory
Bull. Aust. Math. Soc., Vol 17, pp 335-350
1977
STAP80a
Staples J.
Computation on Graph-Like Expressions
Th. Comp. Sci., Vol 10, pp 171-185
1980
STAP80b
Staples J.
Optimal Evaluations Of Graph-Like Expressions
Th. Comp. Sci., Vol 10, pp 297-316
1980
STAR84a *
Stark W.R.
A Glimpse Into The Paradise of Combinatory Algebra
International Journal of Computer and Information Sciences
Vol 13, No 3, pp 219-236
1984
STEE76
Steele G.L.Jr. & Sussman G.J.
LAMBDA: The Ultimate Imperative
AI Memo no 353
Artificial Intelligence Laboratory, MIT
1976
STEE77a
Steele G.L.Jr.
Compiler Optimization Based on Viewing LAMBDA as Rename Plus Goto
S.M. Thesis, MIT EE&CS, Cambridge.
Published as RABBIT: A Compiler for SCHEME (A Study in Compiler Optimization),
AI TR 474, MIT Lab, Cambridge
STEE77b
Steele G.L.Jr.
Debunking The 'Expensive Procedure Call' Myth
Proc. ACM National Conference, pp 153-162, 1977
Also revised as AI Memo 443, MIT Lab, Cambridge
STEE78
Steele G.L.Jr. & Sussman G.J.
The Art Of The Interpreter; or, The Modularity Complex
(parts zero,one and two)
AI Memo 453, MIT AI Lab, Cambridge, 1978
STEE79a
Steele G.L.Jr. & Sussman G.J.
Design of LISP-Based Processors; or, SCHEME: A Dielectric LISP; or,
Finite Memories Considered Harmful; or, LAMBDA The Ultimate Opcode
AI Memo 514, MIT AI Lab, Cambridge, 1979
Summarized in CACM 23 no 11, pp 629-645
STEE79b
Steele G.L.Jr. & Sussman G.J.
The Dream Of A Lifetime: A Lazy Scoping Mechanism
AI Memo 527, MIT Lab, Cambridge, 1979
STEP86a *
Stephenson B.K.
Computer Architectures for Image Processing
in BCS86a
1986
STIR85a *
Stirling C.
Modal Logics for Communicating Systems
Internal report, CSR-193-85
Department of Computer Science, University of Edinburgh
October 1985
STIR86a *
Stirling C.
A Compositional Reformulation of Owicki-Gries's Partial Correctness Logic For
A Concurrent While Language
To appear in ICALP 1986
1986
STOY77a
Stoy J.E.
Denotational Semantics: The Scott-Strachey Approach to Programming Language
Theory
MIT Press, Cambridge Massachusetts
1977
STOY82a
Stoy J.
Some Mathematical Aspects Of Functional Programming
in DARL82a
1982
STOY82b
Stoy J.E.
Semantic Models
in BROY82a, pp 293-324
1982
STOY83a *
Stoye W.
The SKIM Microprogrammer's Guide
Computer Laboratory, University of Cambridge
Technical Report no 40
October 1983
STOYE84a *
Stoye W.
A New Scheme for Writing Functional Operating Systems
Computer Laboratory, University of Cambridge
Technical Report no 56
1984
STOYE84b *
Stoye W.R. & Clarke T.J.W. & Norman A.C.
Some Practical Methods for Rapid Combinator Reduction
Proceedings of 1984 ACM Symposium on Lisp and Functional Programming
Austin, Texas
pp 159-166
1984
SUGI83a
Sugiyama K. & Kameda M. & Akiyama K. & Makinouchi A.
A Knowledge Representation System in Prolog
ICOT Research Center, Technical Report TR-024
August 1983
SUGI84a
Sugimoto M. & Kato H. & Yoshida H.
Design Concept for a Software Development Consultation System
( Also in Second Japanese Swedish Workshop on Logic Programming and
Functional Programming, Uppsala, 1984 )
ICOT Research Center, Technical Report TR-071
August 1984
SUSS75a
Sussman G.J. & Steele G.L.Jr.
SCHEME: An Interpreter for Extended Lambda Calculus
AI Memo 349, MIT AI Lab, Cambridge, 1975
SUSS82a
Sussman G.J.
LISP, Programming and Implementation
in DARL82a
1982
SUZU82a
Suzuki N. & Kurihara K. & Tanaka H. & Moto-oka T.
Procedure Level Data Flow Processing on Dynamic Structure Multimicroprocessors
Journal of Information Processing, Vol 5, No 1, pp 11-16, March 1982
SUZU82b
Suzuki N.
Experience with Specification and Verification of Hardware using PROLOG
Document, Presented at Working Conference on VLSI Engineering, Oct 1982
SYRE77a
Syre J.C. et al
Pipelining, Parallelism and Asynchronism in The LAU System
Proc. 1977 Int. Conf. on Parallel Processing, pp 87-92
August 1977
------------------------------
End of AIList Digest
********************
∂04-May-86 0219 LAWS@SRI-AI.ARPA AIList Digest V4 #112
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 May 86 02:19:02 PDT
Date: Sat 3 May 1986 22:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #112
To: AIList@SRI-AI
AIList Digest Sunday, 4 May 1986 Volume 4 : Issue 112
Today's Topics:
Bibliography - References #6
----------------------------------------------------------------------
Date: 9 Apr 86 13:24:10 GMT
From: allegra!mit-eddie!think!harvard!seismo!mcvax!ukc!dcl-cs!nott-cs!
abc@ucbvax.berkeley.edu (Andy Cheese)
Subject: Bibliography - References #6
TAKA84a
Takagi S. & Chikayama T. & Hattori T. & Tsuji J. & Yokoi T. & Uchida S. &
Kurokawa T. & Sakai K.
Overall Design of SIMPOS
( Also in "Proceedings of 2nd Int'l Conference of Logic Programming", Uppsala,
1984 )
ICOT Research Center, Technical Report TR-057
April 1984
TAKE82a
Takeuchi A. & Shapiro E.Y.
Object Oriented Programming in Relational Language
ICOT Document
TAKE82b
Takeuchi A.
Let's Talk Concurrent Prolog
ICOT Research Center, Technical Memorandum TM-0003
December 1982
TAKE83a *
Takeuchi A.
Interprocess Communication in Concurrent Prolog
( Also in "Proceedings of Logic Programming Workshop '83", Portugal )
ICOT Research Center, Technical Report TR-006
May 1983
TAKI84a
Taki K.
Hardware Design and Implementation of the Personal Sequential Inference Machine
(PSI)
( Also in "Proceedings of FGCS 84", Tokyo, 1984 )
ICOT Research Center, Technical Report TR-075
August 1984
TAMA83a *
Tamaki H.
A Transformation System for Logic Programs Which Preserves Equivalence
ICOT Research Center, Technical Report TR-018
August 1983
TANA82a
Tanaka J. & Keller R.M.
Code Optimisation in a Functional Language
In Workshop on Functional Programming, Japan Inf. Processing Soc.
December 1982
TANI81a *
Tanimoto S.L.
Towards Hierarchical Cellular Logic: Design Considerations for Pyramid
Machines
Dept of Comp Sci, Univ of Washington, Technical Report #81-02-01
February 1981
TARJ72
Tarjan R.
Depth-First Search & Linear Graph Algorithms
SIAM Journal on Computing, Vol 1, No 2, pp 146-160, 1972
TARN77a *
Tarnlund S.-A.
Horn Clause Computability
BIT 17, 1977, pp 215-226
1977
THOM85a *
Thompson S.J.
Laws in Miranda
University of Kent Computing Laboratory Report No 35
December 1985
TIB84
ed. Tiberghien J.
New Computer Architectures
International Series in Computer Science
Academic Press
1984
TICK84
Tick E. & Warren D.H.D.
Towards a Pipelined Prolog Processor
Proc. 1984 Int. Symp. on Logic Programming
pp 29-40
1984
TILL85a *
Tillotson M.
Introduction to the Functional Programming Language "Ponder"
Computer Laboratory, University of Cambridge, Tech Rep no 65
1985
TOGG86a *
Togai M. & Watanabe H.
An Inference Engine For Real-Time Fuzzy Control: VLSI Design and Implementation
To appear in Proc. of Japan-USA Symp. on Flexible Automation, July 14-15, 1986,
Osaka, Japan
1986
TREL78
Treleaven P.C.
Principal Components of a Data Flow Computer
Proc. 1978 Euromicro Symp. , pp 366-374
October 1978
TREL80a
Treleaven P.C. & Mole G.F.
A Multi-Processor Reduction Machine For User-Defined Reduction Languages
Proc. 7th Int. Symp. on Comp. Arch., pp 121-129
April 1980
TREL80b
ed. Treleaven P.C.
VLSI: Machine Architecture and Very High Level Languages
Proc of the joint SRC/Univ of Newcastle upon Tyne Workshop,
Computing Laboratory, Univ. of Newcastle Upon Tyne,
Tech Rep 156
December 1980
TREL81a
Treleaven P.C. & Hopkins R.P.
Decentralised Computation
Proc 8th Int Symp on Comp Arch, pp 279-290
May 1981
TREL81b
Treleaven P.C. & Hopkins R.P.
A Recursive (VLSI) Computer Architecture
Computing Laboratory, Univ of Newcastle Upon Tyne
Tech Rep 161
March 1981
TREL81
Treleaven P.C. et al
Data Driven and Demand Driven Computer Architecture
Computer Lab, Univ of Newcastle Upon Tyne
Tech Rep 168,
July 1981
TREL82a
Treleaven P.C.
Computer Architecture For Functional Programming
in DARL82a
1982
TREL82b
Treleaven P.C. Brownbridge D.R. & Hopkins R.P.
Data Driven and Demand Driven Computer Architecture
ACM Computing Surveys Vol 14 No. 1 Jan 1982
TSUJ84a
Tsuji J. & Kurokawa T. & Tojyo S. & Iima Y. & Nakazawa O. & Enomoto S.
Dialog Management in the Personal Sequential Inference Machine (PSI)
( Also in "Proceedings of ACM 84", San Francisco, 1984 )
ICOT Research Center, Technical Report TR-046
March 1984
TURN76
Turner D.A.
SASL Language Manual
CS/79/3 Dept. of Computational Science, University of St. Andrews ,1976
(CS/75/1)
TURN79a
Turner D.A.
A New Implementation Technique for Applicative Languages
Software Practice & Experience, Vol 9, pp 31-49, 1979
TURN79b
Turner D.A.
Another Algorithm for Bracket Abstraction
Journal of Symbolic Logic, Vol 44, no. 2,
June 1979
TURN80
Turner D.A.
Programming Languages- Current and Future Developments
Infotech State of the Art Conference on Software Development Techniques
1980
TURN81a
Turner D.A.
The Semantic Elegance of Applicative Languages
Proc. 1981 ACM Conf on Functional Programming Languages & Computer
Architecture, pp 85-92
TURN81b
Turner D.A.
Aspects of the Implementation of Programming Languages
D.Phil Thesis, Oxford University
1981
TURN82a
Turner D.A.
Recursion Equations As A Programming Language
in DARL82a
1982
TURN82b
Turner D.A.
Functional Programming and Proofs of Program Correctness
In "Tools and Notions For Program Correctness"
(ed. D. Neel), pp 187-209
Cambridge University Press
1982
TURN85a
Turner D.A.
Functional Programs as Executable Specifications
in HOA85a
1985
TURN85b *
Turner R. & Lowden B.G.T.
An Introduction to the Formal Specification of Relational Query Languages
Computer Journal, vol 28, no 2, pp 162-169
1985
UCHI82a
Uchida S. & Yokota M. & Yamamoto A. & Taki K. & Nishikawa H. &
Chikayama T. & Hattori T.
The Personal Sequential Inference Machine: Outline of Its Architecture and
Hardware System
ICOT Research Center, Technical Memorandum TM-0001
November 1982
UCHI82b
Uchida S.
Towards A New Generation Computer Architecture
( Also in "VLSI Architecture", Prentice Hall, 1984 )
ICOT Research Center, Technical Report TR-001
July 1982
UCHI83a
Uchida S.
Inference Machine: From Sequential to Parallel
( Also in "Proceedings of 10th International Symposium on Computer
Architecture", Sweden, 1983, IEEE Computer Society Press )
ICOT Research Center, Technical Report TR-011
May 1983
UCHI83b
Uchida S. & Yokota M. & Yamamoto A. & Taki K. & Nishikawa H.
Outline of the Personal Sequential Inference Machine:PSI
( Also in New Generation Computing, Vol 1, No 1, 1983 )
ICOT Research Center, Technical Memorandum TM-0005
April 1983
UEDA84a
Ueda K. & Chikayama T.
Efficient Stream/Array Processing in Logic Programming Language
( Also in "Proceedings of FGCS 84", Tokyo, 1984 )
ICOT Research Center, Technical Report TR-065
April 1984
ULLM85a *
Ullmann J.R. & Haralick R.M. & Shapiro L.G.
Computer Architecture for Solving Consistent Labelling Problems
Computer Journal, Vol 28, no 2, pp 105-111
1985
UMEY84a
Umeyama S. & Tamura K.
Parallel Execution of Logic Programs
Electrotechnical Lab., MITI, Ibaraki, Japan
UNGA82
Ungar D.M. & Patterson D.A.
Berkeley Smalltalk: Who Knows Where the Time Goes ?
In Smalltalk-80, Bits of History, Words of Advice, Glenn Krasner
1982
UNGA84
Ungar D. & Blau R. & Foley P. & Samples D. & Patterson D.A.
Architecture of SOAR: Smalltalk on a RISC
11th Symp. on Comp. Arch., Ann Arbor
June 1984
VALI85
Valiant L.G.
Deductive Learning
in HOA85a
1985
VANE76a *
Van Emden M.
Verification Conditions As Programs
Proceedings 3rd International Colloquium on Automata Languages and Programming
pp 99-119
Edinburgh University Press, 1976
VASS85a *
Access to Specific Declarative Knowledge by Expert Systems : The Impact of
Logic Programming
Decision Support Systems 1, pp 123-141
April 1985
VEGD84
Vegdahl S.R.
A Survey of Proposed Architectures for the Execution of Functional Languages
IEEE TOC, Vol C-33, No 12, Dec 1984, pp 1050-1071
VUIL74a *
Vuillemin J.
Correct and Optimal Implementation of Recursion In A Simple Programming
Language
Journal of Computer and System Sciences, Vol 9, no 3, pp 332-354
1974
WADG85
Wadge W.W. & Ashcroft E.A.
Lucid, The Dataflow Programming Language
Apic Studies in Data Processing no. 22
Academic Press, 1985
WADL76a
Wadler P.L.
Analysis of an Algorithm for Real Time Garbage Collection
Comm. ACM, Vol 19, No 9, pp 491-500, Sept 1976
WADL84a *
Wadler P.
Listlessness is Better Than Laziness: Lazy Evaluation and Garbage Collection
at Compile-Time
Proceedings ACM Symposium on LISP and Functional Programming, Austin, Texas
August 1984
WADL84b *
Wadler P.
Listlessness is Better Than Laziness
PhD Dissertation, Carnegie-Mellon University
August 1984
WADL85a *
Wadler P.
A Splitting Headache : Strict vs Lazy Semantics for Pattern Matching in
Lazy Languages
Oxford University, Computing Laboratory
January 1985
Addenda November 1985
WADL85b *
Wadler P.
An Introduction to Orwell (DRAFT)
Oxford University, Computing Laboratory
1 April 1985
revised December 1985
WADL85c
Wadler P.
Listlessness is Better Than Laziness II: Composing Listless Functions
Workshop on Programs as Data Objects, Copenhagen
October 1985
( To be published as LNCS by Springer-Verlag )
WADL86a *
Wadler P.
Plumbers and dustmen: Fixing a space leak with a garbage collector
posted to fp@uea.sp
1986
WADS71a
Wadsworth C.P.
Semantics and Pragmatics of The Lambda Calculus
D.Phil Thesis, Univ. of Oxford
1971
WADS84a *
Wadsworth C.P.
Report on the IOTA Programming System and other Japanese Advanced Research
Rutherford Appleton Laboratory, RAL-84-090, August 1984
WARR77a
Warren D.H.D. & Pereira L.M. & Pereira F.
PROLOG-The Language and its Implementation Compared to LISP
Proc. Symp. on AI and Programming Languages, 1977
SIGPLAN Notices 12(8) or SIGART Newsletter 64, pp 109-115
WARR77b
Warren D.H.D.
Applied Logic - Its Use and Implementation as a Programming Tool
PhD Dissertation
Dept of AI, Univ of Edinburgh
1977
WARR82a *
Warren D.H.D.
Higher Order Extensions to PROLOG: Are They Needed ?
in Machine Intelligence 10
(eds Hayes J.E. & Michie D. & Pao Y-H )
pp 441-454
Ellis Horwood Ltd
1982
WARR83a *
Warren D.H.D.
An Abstract Prolog Instruction Set
Technical Note 309, SRI International
31 August 1983
WATP84
Watson P.
A Functional Language Computer
Conversion Report Univ. of Manchester Sept 1984
WATP85a *
Watson P.
A Reference Count Garbage Collection Scheme For Distributed Computers
Draft Document, Dept of Computer Science, Univ. of Manchester, March 1985
WATP85b *
Watson P.
Report on Visit to the U.S.A.
Document, Dept of Computer Science, Univ of Manchester, April 1985
WATP85c *
Watson P.
Higher Order Functions in EFL
Document, Dept of Computer science, Univ. of Manchester, 24 May 1985
WATS79
Watson I. & Gurd J.
A Prototype Data Flow Computer With Token Labeling
Proc. Nat. Comp. Conf., Vol 48, pp 623-628
1979
WATS83a *
Watson I.
Functional Logic Programming
Document, Dept of Computer Science, Univ. of Manchester, April 1983
WATS84a *
Watson I. (& Ashcroft A.)
A Demand Driven Dataflow Machine/Tagged Data-Driven Reduction Machine
Document, Dept of Computer Science, Univ. of Manchester ,March 1984
WATS84b *
Watson I.
Another Model (And Machine)
Document, Dept of Computer Science, Univ. of Manchester ,May 1983
WATS84c *
Watson I.
Higher Order Functions
Document, Dept of Computer Science, Univ. of Manchester ,Aug 1984
WATS85a *
Watson I.
A Parallel SKI(BC) Combinators Model
Document, PMP/MU/IW/00005, Dept of Computer Science, Univ. of Manchester,
March 1985
WATS85b *
Watson Ian & Watson Paul & Woods Viv
Parallel Data-Driven Graph Reduction
Document, Dept. of Computer Science, Univ. of Manchester
WEIH85a *
Weihrauch K.
Type 2 Recursion Theory
Theoretical Computer Science 38, pp 17-33
May 1985
WHIT80a
White J.L.
Address/Memory Management for a Gigantic LISP Environment or, GC
Considered Harmful
Proc. 1980 LISP Conf., pp 119-127
WHITE78
Whitelock P.J.
A Conventional Language for Data Flow Computing
MSc Dissertation, Dept of Comp Sci, Univ. of Manchester, October 1978
WILL80
Williams J.H.
On The Development Of The Algebra Of Functional Programs
Report No RJ2983, IBM Research Laboratory, San Jose, California, October 1980
WILL81
Williams J.H.
Formal Representations For Recursively Defined Functional Programs
in "Formalization Of Programming Concepts",
Lecture Notes in Computer Science, no 107, Springer Verlag, April 1981
WILL82
Williams J.H.
Notes on The FP Style Of Functional Programming
in DARL82a
1982
WILN80
Wilner W.
Recursive Machines
Xerox Parc Internal Report
1980
WINS84
Winston P.H. & Horn B.K.P.
Lisp
Second Edition
Addison Wesley Publishing Company
1984
WINS? *
Winskel G.
Categories of Models for Concurrency
Computer Laboratory, University of Cambridge
Technical Report no 58
WINT80
Winterstein G. & Dausmann M. & Persch G.
Deriving Different Unification Algorithms From a Specification in Logic
Proc. of Logic Programming workshop, Debrecen, Hungary
(ed S. -A. Tarnlund), pp 274-285
1980
WIRS82a
Wirsing M. & Broy M.
An Analysis of Semantic Models For Algebraic Specifications
in BROY82a, pp 351-412
1982
WISE79a
Wise D.S.
Morris's Garbage Compaction Algorithm Restores Reference Counts
ACM Trans. on Programming Languages and Systems, 1, no 1, pp 115-122
1979
WISE82a
Wise D.S.
Interpreters For Functional Programming
in DARL82a
1982
WORL85a
Worley J. & Arabe J. & Tu K.G.
The Architecture and Design of the Functional Programming Machine
Document, Computer Sci. Dept. , Univ. of California, Los Angeles
YAO82
Yao S.B. Waddle V.E. & Housel B.C.
View Modeling and Integration Using the Functional Data Model
IEEE TOSE, Vol SE-8, No 6, pp 544-553, Nov 1982
YASU83a *
Yasukawa H.
LFG in Prolog - Toward A Formal System for Representing Grammatical
Relations
ICOT Research Center, Technical Report TR-019
August 1983
YASU83b *
Yasuura H.
On The Parallel Computational Complexity of Unification
ICOT Research Center, Technical Report TR-027
October 1983
YOKOI83a
Yokoi T.
A Perspective of the Japanese FGCS Project
( Presented to IJCAI, F.R.G., 1983 )
ICOT Research Center, Technical Memorandum TM-0026
September 1983
YOKOM84a *
Yokomori T.
A Note on the Set Abstraction in Logic Programming Language
( Also in "Proceedings of FGCS 84", Tokyo, 1984 )
ICOT Research Center, Technical Report TR-060
April 1984
YOKOT83a
Yokota H. & Kunifuji S. & Kakuta T. & Miyazaki N. & Shibayama S. & Murakami K.
An Enhanced Inference Mechanism for Generating Relational Algebra Queries
( Also in "Proceedings of Third ACM SIGACT-SIGMOD Symp. on Principles of
Database Systems", Waterloo, Canada, 1984 )
ICOT Research Center, Technical Report TR-026
October 1983
YOKOT84a
Yokota M. & Yamamoto A. & Taki K. & Nishikawa H. & Uchida S.
The Design and Implementation of a Personal Sequential Inference Machine: PSI
( Also in New Generation Computing, Vol 1, No 2, 1984 )
ICOT Research Center, Technical Report TR-045
February 1984
------------------------------
End of AIList Digest
********************
∂05-May-86 0008 LAWS@SRI-AI.ARPA AIList Digest V4 #113
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 May 86 00:08:30 PDT
Date: Sun 4 May 1986 21:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #113
To: AIList@SRI-AI
AIList Digest Monday, 5 May 1986 Volume 4 : Issue 113
Today's Topics:
Seminars - Use of AI in Project Management and Scheduling (Ames) &
Procedural Abstraction in Soar (UCB) &
Artificial Organisms (CMU) &
Multisensor Robot Systems (MIT),
Conference - 3rd. Int. Logic Programming
----------------------------------------------------------------------
Date: Thu, 1 May 86 09:50:21 pdt
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - Use of AI in Project Management and Scheduling (Ames)
National Aeronautics and Space Administration
Ames Research Center
AMES AI FORUM
SEMINAR ANNOUNCEMENT
The Use of AI in Project Management and Scheduling
John C. Kunz, Ph.D.
IntelliCorp Corporate Offices
Tuesday, May 13, 1986
10:00 - 11:00 am (NOTE TIME)
N245 Space Sciences Auditorium
NASA Ames Research Center
The discipline of Project Management can potentially contribute both to
the planning and the control of large projects. Knowledge-based systems
can be used to help project and operations managers to identify the
problems they must solve and to consider various alternative approaches
to their problems. This talk will discuss results of using some
prototype systems that support project management and scheduling. The
discussion will consider both the management issues and the design
of the AI analysis systems.
point of contact: Alison Andrews (415)694-6741
mer.andrews@ames-vmsb.ARPA
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. Do not
use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
Date: Thu, 1 May 86 04:44:11 PDT
From: admin%cogsci@berkeley.edu (Cognitive Science Program)
Subject: Seminar - Procedural Abstraction in Soar (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Spring 1986
Cognitive Science Seminar - IDS 237B
Tuesday, May 6, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
3105 Tolman (Beach Room)
``Procedural Abstraction in the Soar Cognitive Architecture''
Paul S. Rosenbloom
Departments of Computer Science and Psychology,
Stanford University
The Soar project is an attempt to build a system capable
of general intelligent behavior -- a cognitive architecture.
It is to be capable of working on the full range of tasks, from
highly routine to extremely difficult open-ended problems;
capable of employing the full range of problem solving methods
and representations required for these tasks; and capable of
learning about all aspects of the tasks and its performance on
them. In this talk I will present an overview of the current
system, which is an approximation to this ideal, and some new
results on the integration of abstraction planning capabilities
into R1-Soar -- the implementation in Soar of an expert system
for computer configuration. Abstraction planning in R1-Soar is
based on the partial execution of procedurally encoded opera-
tors and on Soar's general problem solving and learning capa-
bilities.
------------------------------
Date: 1 May 86 15:56:44 EDT
From: Gregory.Hood@ML.RI.CMU.EDU
Subject: Seminar - Artificial Organisms (CMU)
I will be presenting the last thesis proposal of this semester on Friday,
May 9, at 10:30am (yes, that's Black Friday) in Wean 5409. A copy of the
proposal is in the lounge; I have additional copies available in my office
(8126) for anyone who wants one.
Title: Artificial Organisms: A Neural Modeling Approach
Abstract:
The proposed thesis will investigate autonomous goal-based learning at the
neural modeling level. To support this study, a series of artificial
organisms will be developed within the context of the World Modeling System,
which is a realistic simulated environment. Each organism will be
controlled by an artificially designed nervous system based on
organizational principles found in simple natural organisms such as the
marine snail. The organisms will exhibit several simple forms of learning
such as habituation, sensitization, classical conditioning, and operant
conditioning. Emphasis will be placed on the development of robust
organisms which are capable of prolonged existence within the environment
rather than isolated neural networks which are only capable of single
one-shot learning tasks. It is expected that insights into the relationship
of machine learning to learning in natural organisms will emerge from the
study of these artificial organisms.
------------------------------
Date: 2 May 1986 15:19 EDT (Fri)
From: Claudia Smith <CLAUDIA%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Seminar - Multisensor Robot Systems (MIT)
INTEGRATION AND COORDINATION OF
MULTI-SENSOR ROBOT SYSTEMS
Hugh F. Durrant-Whyte
University of Pennsylvania
Philadelphia
A multi-sensor robot system comprises many diverse non-homogeneous sources
of information. The sensors of these systems take a variety of disparate
observations of features in the robot environment. The measurements supplied
by the sensor systems are uncertain, partial, occasionally spurious or
incorrect, and often incomparable with other sensors' views. It is the goal of
the robot system to coordinate and integrate sensor observations into a
consensus best view of the environment which can be used to plan and guide
the execution of tasks.
We will present a methodology for the integration of uncertain sensor
information and the coordination of multi-sensor observation strategies. We
first build a probabilistic model of a sensor's information gathering
characteristics and show how its views can be extracted from a Gaussian
(stochastic) environment. A methodology is developed for the consistent
integration of disparate sensor views into a consensus world model. We show
how the information model can be used to obtain maximal-information sensing
strategies and to coordinate the actions of multiple sensor agents. We
demonstrate the utility of these techniques by application to a
robot-mounted tactile array and an active stereo vision system.
Date: Wednesday, May 7th
Time: 4pm
Place: NE43-8th floor playroom
------------------------------
Date: 23 Apr 86 14:55:07 GMT
From: ucdavis!lll-lcc!lll-crg!caip!seismo!mcvax!ukc!icdoc!csa@ucbvax.
berkeley.edu (Cheryl S Anderson)
Subject: Conference - 3rd. Int. Logic Programming
THIRD INTERNATIONAL CONFERENCE ON LOGIC PROGRAMMING
July 14-18, 1986
FINAL PROGRAM
Monday, July 14
All Day Tutorial: Logic programming and its applications by Robert
Kowalski and Frank Kriwaczek.
Half Day Tutorials:
A.M. Prolog implementation and architecture. David Warren
or Techniques for natural language processing in Prolog. Michael McCord
P.M. Parallel logic programming. Keith Clark and Steve Gregory
or Japanese Fifth Generation Applications Research. Koichi Furukawa
Tuesday, July 15
KEYNOTE ADDRESS: K. Fuchi, ICOT
1a. Parallel implementations
An abstract machine for restricted AND-parallel execution of logic
programs.
Manuel V. Hermenegildo, University of Texas at Austin.
Efficient management of backtracking in AND-Parallelism.
Manuel V. Hermenegildo, University of Texas at Austin & Roger I. Nasr, MCC.
An intelligent backtracking algorithm for parallel execution of logic
programs.
Vipin Kumar, University of Texas at Austin.
Delta Prolog: a distributed backtracking extension with events.
Luis Moniz Pereira, Luis Monteiro, Jose Cunha & Joaquim N. Aparicio,
Universidade Nova de Lisboa.
1b. Theory and complexity
OLD resolution with tabulation.
Hisao Tamaki, Ibaraki University.
Logic programs and alternation.
P. Stepanek & O. Stepankova, MFF Prague.
Intractable unifiability problems and backtracking.
D.A. Wolfram, Syracuse University.
On the complexity of unification sequences.
Heikki Mannila & Esko Ukkonen, University of Helsinki.
2a. Implementations and architectures
How to invent a Prolog machine.
Peter Kursawe, GMD & University of Karlsruhe.
A sequential implementation of Parlog.
Ian Foster, Steve Gregory, Graem Ringwood, Imperial College & Ken Satoh,
Fujitsu Limited.
A GHC abstract machine and instruction set.
Jacob Levy, Weizmann Institute.
A Prolog processor based on a pattern matching memory device.
Ian Robinson, Schlumberger Palo Alto Research.
2b. Inductive inference and debugging
An improved version of Shapiro's model inference system.
Matthew Huntbach, University of Sussex.
A framework for ICAI systems based on inductive inference and logic
programming.
Kazuhisa Kawai, Riichiro Mizoguchi, Osamu Kakusho & Jun'ichi Toyoda, Osaka
University.
Rational debugging in logic programming.
Luis Moniz Pereira, Universidade Nova de Lisboa.
Using definite clauses and integrity constraints as the basis for a theory
formation approach to diagnostic reasoning.
Randy Goebel, University of Waterloo, Koichi Furukawa, ICOT & David Poole,
University of Waterloo.
INVITED TALK: Theory of logic programming. Jean-Louis Lassez, IBM
Wednesday, July 16
INVITED TALK: Concurrent logic programming languages. Akikazu Takeuchi
ICOT.
3a. Concurrent logic languages
P-Prolog: a parallel language based on exclusive relation.
Rong Yang & Hideo Aiso, Keio University.
Making exhaustive search programs deterministic.
Kazunori Ueda, ICOT.
Compiling OR-parallelism into AND-parallelism.
Michael Codish & Ehud Shapiro, Weizmann Institute.
A framework for the implementation of Or-parallel languages.
Jacob Levy, Weizmann Institute.
3b. Theory and semantics
Logic program semantics for programming with equations.
Joxan Jaffar & Peter J. Stuckey, Monash University.
On the semantics of logic programming languages.
Alberto Martelli & Gianfranco Rossi, Universita di Torino.
Towards a formal semantics for concurrent logic programming languages.
Lennart Beckmann, Uppsala University.
Thursday, July 17
INVITED TALK: Logic programming and natural language processing. Michael
McCord, IBM.
4a. Parallel applications and implementations
Parallel logic programming for numeric applications.
Ralph Butler, Ewing Lusk, William McCune & Ross Overbeek, Argonne National
Laboratory.
Deterministic logic grammars.
Harvey Abramson, University of British Columbia.
A parallel parsing system for natural language analysis.
Yuji Matsumoto, ICOT.
4b. Theory and higher-order functions
Equivalence of logic programs.
Michael J. Maher, University of Melbourne.
Qualified answers and their application to transformation.
Phil Vasey, Imperial College.
Procedures in Horn-clause programming.
M.A. Nait Abdallah, University of W. Ontario.
Higher-order logic programming.
Dale A. Miller & Gopalan Nadathur, University of Pennsylvania.
5a. Program analysis
Abstract interpretation of Prolog programs.
C.S. Mellish, University of Sussex.
Verification of Prolog programs using an extension of execution.
Tadashi Kanamori, Mitsubishi Electric Corporation & Hirohisa Seki, ICOT.
Detection and optimization of functional computations in Prolog.
Saumya K. Debray & David S. Warren, SUNY at Stony Brook.
Control of logic program execution based on the functional relations.
Katsuhiko Nakamura, Tokyo Denki University.
5b. Applications and teaching
Declarative graphics.
A. Richard Helm & Kim Marriott, University of Melbourne.
Test-pattern generation for VLSI circuits in a Prolog environment.
Rajiv Gupta, SUNY at Stony Brook.
Using Prolog to represent and reason about protein structure.
C.J. Rawlings, W.R. Taylor, J. Nyakairu, J. Fox & M.J.E. Sternberg,
Imperial Cancer Research Fund & Birkbeck College.
A new approach for introducing Prolog to naive users.
Oded Maler, Zahava Scherz & Ehud Shapiro, Weizmann Institute.
INVITED TALK: Prolog programming environments. Takashi Chikayama, ICOT.
Friday, July 18
INVITED TALK: Logic programming and databases. Jeffrey D. Ullman,
Stanford University.
6a. Implementations and databases
A superimposed codeword indexing scheme for very large Prolog databases.
Kotagiri Ramamohanarao & John Shepherd, University of Melbourne.
Interfacing Prolog to a persistent data store.
D.S. Moffat & P.M.D. Gray, University of Aberdeen
General model for implementing DIF and FREEZE.
P. Boizumault, CNRS.
Cyclic tree traversal.
Martin Nilsson & Hidehiko Tanaka, University of Tokyo.
6b. Theory and negation
Completeness of the SLDNF-resolution for a class of logic programs.
R. Barbuti, Universita di Pisa.
Choices in, and limitations of, logic programming.
Paul J. Voda, University of British Columbia.
Negation and quantifiers in NU-Prolog.
Lee Naish, University of Melbourne.
Gracefully adding negation and disjunction to Prolog.
David L. Poole & Randy Goebel, University of Waterloo.
7a. Compilation
Memory performance of Lisp and Prolog programs.
Evan Tick, Stanford University.
The design and implementation of a high-speed incremental portable Prolog
compiler.
Kenneth A. Bowen, Kevin A. Buettner, Ilyas Cicekli & Andrew Turk, Syracuse
University.
Compiler optimizations for the WAM.
Andrew K. Turk, Syracuse University.
Fast decompiling of compiled Prolog clauses.
Kevin A. Buettner, Syracuse University.
7b. Models of computation and implementation
Logic continuations.
Christopher T. Haynes, Indiana University.
Cut & Paste - defining the impure primitives of Prolog.
Chris Moss, Imperial College.
Tokio: logic programming language based on temporal logic and its
compilation to Prolog.
M. Fujita, Fujitsu Labs. Ltd., S. Kono, H. Tanaka & Moto-oka, University of
Tokyo.
The OR-woods description of the execution of logic programs.
Sun Chengzheng & Tzu Yungui, Changsha Institute.
PANEL DISCUSSION: Programming vs. uncovering parallelism. Chair: Keith
Clark, Imperial College.
GENERAL INFORMATION
TIME AND VENUE
Monday 14th to Friday 18th July. Imperial College of Science and
Technology, South Kensington. Sherfield Building - Great Hall, Pippard and
Read Lecture Theatres.
Registration: Tutorials from 8.00 a.m. on Monday and Full Conference from
2.00 p.m. to 8.00 p.m. on Monday and from 8.15 a.m. on Tuesday, in the main
reception area adjacent to the Great Hall.
General information on facilities and entertainment in London will be
available from the main reception desk.
CONFERENCE SESSIONS
The main conference runs from 9.30 a.m. on Tuesday, 15th July until 5.00
p.m. on Friday, 18th July. Technical sessions are divided into two
parallel streams and each paper lasts for approximately 20 minutes. (Each
day has plenary sessions addressed by invited speakers). Morning breaks
are from 10.30-10.50, lunch breaks from 12.30-2.00, and afternoon breaks
from 3.40-4.00.
TUTORIALS
The Tutorial Programme takes place on Monday, 14th July, from 9.30 a.m.
Each tutorial session is priced separately.
COMMERCIAL EXHIBITION
There will be a commercial exhibition located in the Junior Common Room on
the same level as the main conference facilities in the Sherfield Building
from 1.00 p.m. on Monday until Thursday lunchtime. Companies taking part
in the exhibition include software developers, hardware manufacturers and
publishers. A reception will be held in the exhibition area at the end of
the tutorial sessions on Monday. Refreshments will also be available in
the exhibition area during session breaks. Anyone interested in taking
space at the exhibition should contact the Conference Organizers at
Imperial College Tel. 01-589 5111 ext. 5011.
[If you've read this far, you may want to write to me for the social
programme, housing data, and registration forms that were included
in the original message. -- KIL]
------------------------------
End of AIList Digest
********************
∂05-May-86 0226 LAWS@SRI-AI.ARPA AIList Digest V4 #114
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 May 86 02:26:30 PDT
Date: Sun 4 May 1986 21:32-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #114
To: AIList@SRI-AI
AIList Digest Monday, 5 May 1986 Volume 4 : Issue 114
Today's Topics:
Queries - Stella & Expert Systems for PCs & Boltzman Machine,
AI Tools - Neural Networks & The Connection Machine & String Reduction,
Linguistics - Italo Calvino AI Project
----------------------------------------------------------------------
Date: Fri, 2 May 86 11:12 EST
From: "Steven H. Gutfreund" <GUTFREUND%umass-cs.csnet@CSNET-RELAY.ARPA>
Subject: Stella
Does anyone have any information on Stella (personal or journal articles)?
I understand it is a program for the Macintosh done at Dartmouth in the
spirit of Rocky's Boots.
------------------------------
Date: 2 May 86 13:07 PDT
From: Stern.pasa@Xerox.COM
Subject: Expert Systems for PCs
In answer to a recent query for "expert systems on PCs" someone supplied
a list of software. It seemed to me all the entries were for AI tools,
and none of them were expert systems. Is this what was desired?
Josh
------------------------------
Date: 1 May 86 19:44:52 GMT
From: ihnp4!houxm!mtuxo!orsay@ucbvax.berkeley.edu (j.ratsaby)
Subject: Boltzman Machine
I am interested in knowing who has done, or is doing, research on neural
networks, specifically those based on stochastic theory.
I would like to know the status of this research.
Thanks in advance,
Joel Ratsaby
AT&T I.S.L
Middletown N.J
(201)957-2649
------------------------------
Date: 1 May 86 18:26:26 GMT
From: ulysses!gamma!pyuxww!pyuxv!sr@ucbvax.berkeley.edu (S Radtke)
Subject: Re: neural networks
In article <837@mhuxt.UUCP> js2j@mhuxt.UUCP (sonntag) writes:
>A recent issue of 'Science' had an article on 'neural networks', which,
>apparently consist of ...
etc.
To set the facts straight:
The name of the magazine is Science 86, which is published by the AAAS and
is not to be confused with the journal Science, also published by the AAAS.
>They said incredibly little about the actual details of
>how each node operates, unfortunately.
Probably because its intended audience is rather broad - intelligent
people with no particular expertise or training assumed.
Kind of a Reader's Digest for Yuppies with high-tech inclinations.
> So how about it? Has anybody else heard of these things? Is this
>really a way of going about AI in a way which *may* be similar to what
>brains do? Just exactly what algorithms are the nodes implementing, and
>how do you provide input and get output from them? Does anyone know
>where I could get more information about them?
You might try turning to the back of the magazine, to a section listing
articles for further, deeper reading.
Or you can look in today's paper (if you happen to read the NY Times) and
check the article on page D2 which announces the commercial availability
of the Connection Machine from a start-up concern in Cambridge.
Probably next week there will be ads on CBS during the evening news.
Steve Radtke
bellcore!u1100a!sr
Bell Communications Research
Piscataway, NJ
------------------------------
Date: 3 May 86 01:29:02 GMT
From: dali.berkeley.edu!regier@ucbvax.berkeley.edu (Terrance P. Regier)
Subject: Re: neural networks
In article <175@sdics.UUCP> cottrell@sdics.UUCP (Gary Cottrell) writes:
>
>Hopfield is the one who did the traveling salesman problem. I'm not sure
>where he is, tho.
>
J.J. Hopfield is at the: Division of Chemistry and Biology
California Institute of Technology
Pasadena, CA 91125
-- Terry
------------------------------
Date: 1 May 86 15:54:19 GMT
From: ucdavis!lll-lcc!lll-crg!caip!seismo!harvard!cmcl2!lanl!crs@ucbvax
.berkeley.edu (Charlie Sorsby)
Subject: Re: neural networks
> A recent issue of 'Science' had an article on 'neural networks', which,
> .
> .
> .
In a related vein, the 7 April, 1986 issue of Electronic Engineering Times
(an electronics engineering newspaper) featured the following articles in
the Computer Engineering section:
Hopfield's Nerve Nets Realize Biocomputing
Neural Chips Emulate Brain Functions
Brain-Emulating Circuits Need `Sleep' and `Dreams'
Several other issues of this weekly paper have, over the past month or so,
carried one or more related articles.
--
The opinions expressed are not necessarily those of my employer,
the government or your favorite deity.
Charlie Sorsby
...!{cmcl2,ihnp4,...}!lanl!crs
crs@lanl.arpa
------------------------------
Date: 2 May 86 15:48:10 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!rochester!goddard@ucbvax.berkeley.edu
(Nigel Goddard)
Subject: Re: neural networks
Departments working in this area include, amongst others:
C.S. University of Rochester
Carnegie Mellon
Cog Sci University of California, San Diego
?? University of Massachusetts, Amherst
There is a technical report "Rochester Connectionist Papers" available
here which probably references a lot of other work as well.
Nigel Goddard
------------------------------
Date: 4 May 86 04:16:08 GMT
From: ucdavis!lll-lcc!lll-crg!topaz!harvard!bu-cs!jam@ucbvax.berkeley.edu
(Jonathan A. Marshall)
Subject: Re: neural networks
Stephen Grossberg has been publishing on neural networks for 20 years.
He pays special attention to designing adaptive neural networks that
are self-organizing and mathematically stable. Some good recent
references are:
(Category Learning):----------
G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for
a Self-Organizing Neural Pattern Recognition Machine." Computer
Vision, Graphics, and Image Processing. In Press.
G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning
and Recognition: Structural Invariants, Reinforcement, and Evoked
Potentials." In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds),
Pattern Recognition in Animals, People, and Machines. Hillsdale, NJ:
Erlbaum, 1986.
(Learning):-------------------
S. Grossberg, "How Does a Brain Build a Cognitive Code?" Psychological
Review, 1980 (87), p.1-51.
S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning,
Perception, Development, Cognition, and Motor Control. Boston:
Reidel Press, 1982.
S. Grossberg, "Adaptive Pattern Classification and Universal Recoding:
I. Parallel Development and Coding of Neural Feature Detectors."
Biological Cybernetics, 1976 (23), p.121-134.
S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation,
and Rhythm. Amsterdam: North Holland, 1986.
(Vision):---------------------
S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor
Control. Amsterdam: North Holland, 1986.
S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping:
Textures, Boundaries, and Emergent Segmentations." Perception &
Psychophysics, 1985 (38), p.141-171.
S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception:
Boundary Completion, Illusory Figures, and Neon Color Spreading."
Psychological Review, 1985 (92), p.173-211.
(Motor Control):---------------
S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-
Motor Control: Ballistic Eye Movements. Amsterdam: North-Holland, 1985.
If anyone's interested, I can supply more references.
------------------------------
Date: 1 May 86 14:40:02 GMT
From: ihnp4!think!craig@ucbvax.berkeley.edu (Craig Stanfill)
Subject: Re: connection machine articles
The Connection Machine has now been officially announced as a commercial
product. Requests for information relevant to AI should be directed to:
David Waltz
Knowledge Representation and Natural Language Group
Thinking Machines Corporation
245 First Street
Cambridge, MA 02142
Please use U.S. mail. When I get a chance, I will post some basic
specs for the machine on this list.
-Craig Stanfill
------------------------------
Date: 2 May 86 14:10:52 GMT
From: decvax!wanginst!apollo!molson@ucbvax.berkeley.edu (Margaret Olson)
Subject: Re: String Reduction
>requiring as TRAC does that strings be specifically called with
>the "cl" operator. In other words, you could say *(macro,...) instead
>of #(cl,macro,...). Wegner leaves it as an exercise to the reader to
In the version of TRAC that I worked with in 1983, you could say
#(macro) and ##(macro). As I recall, these two cases were treated
exactly like #(cl,macro) and ##(cl,macro). This version had a considerably
larger set of primitives than those discussed in all the TRAC papers and
documentation that I ever saw.
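[For readers unfamiliar with the calling notation, here is a toy
string-reduction evaluator in Python (my own sketch, not real TRAC: only a
define primitive and the call forms are modeled, the `ds`/`reduce_string`
names are invented, and real TRAC's syntax and primitive set differ). It
shows `#(macro,...)` and `#(cl,macro,...)` reducing to the same text:

```python
# A minimal sketch (not real TRAC) of the string-reduction idea
# discussed above: a stored "form" can be called either explicitly with
# #(cl,name) or implicitly with #(name), and both spellings reduce to
# the same replacement text.

forms = {}

def ds(name, body):
    """Define a string form (loosely modeled on TRAC's ds primitive)."""
    forms[name] = body

def reduce_string(s):
    """Repeatedly rewrite the innermost #(...) call until none remain."""
    while True:
        start = s.rfind("#(")              # rightmost "#(" is innermost
        if start == -1:
            return s
        end = s.index(")", start)
        args = s[start + 2:end].split(",")
        if args[0] == "cl":                # explicit call: #(cl,name)
            args = args[1:]
        value = forms.get(args[0], "")     # undefined forms reduce to ""
        s = s[:start] + value + s[end + 1:]

ds("greeting", "hello")
assert reduce_string("#(cl,greeting) world") == "hello world"
assert reduce_string("#(greeting) world") == "hello world"
```

A real TRAC processor also distinguishes neutral calls (`##(...)`), whose
results are protected from further rescanning; this sketch rescans
everything. -- Ed.]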
String reduction has been used to solve real problems. A company called
Data Concepts used TRAC to write an applications generator. The applications
generator was used by insurance raters to write rating systems. Rating systems
are hard because insurance rating rules change all the time (like every day as
far as I could tell). Anyway, TRAC was used for a real product. I think that
Allstate is still using this stuff for some kinds of commercial policies.
TRAC trivia: It was developed and originally owned by Calvin Mooers,
and then sold to Data Concepts Inc. Data Concepts has since gone
bankrupt, so I believe that TRAC is now owned by some type of
bankruptcy court entity. It is (presumably) for sale.
Margaret Olson.
molson@apollo
------------------------------
Date: 2 May 86 08:59:31 GMT
From: brahms!weemba@ucbvax.berkeley.edu (Matthew P. Wiener)
Subject: Re: Italo Calvino AI project
I've directed followups to net.books.
> I must apologize to Bandy for posting a genuine rumor to net.rumor, but
>this is a real rumor I found on net.followup:
>
>>I have it on good authority (although second-hand) that an entire
>>*novel* was generated by computer. It was the result of a research
>>project which aimed to "parameterize" an author's writing style. The
>>study concentrated primarily on one author, Italo Calvino, and I have
>>heard that the novel, "If on a winter's night a traveller", was actually
>>published and marketed with Calvino's blessing.
>[Jack Orenstein]
Now this is an interesting rumor. I suppose I should reread the book,
but I'll go on memory.
Its opening chapter struck me as one of the funniest things I have ever
read. But it then wore down rather tiresomely. I doubt if a computer
could have come up with the scheme of the book, the plot, or the opening
chapter. But as for the rest? The plotting was more stilted than usual
for Calvino--but I thought that was the point. The joke was dragged out
longer than he usually does. And it was his first novel in a decade.
Hmmm... Let's just say I'm very incredulous. Perhaps, more likely, the
*rumor* was generated with Calvino's blessing.
ucbvax!brahms!weemba Matthew P Wiener/UCB Math Dept/Berkeley CA 94720
------------------------------
Date: Sat, 3 May 86 21:29 EDT
From: ART@AQUINAS.THINK.COM
Subject: Italo Calvino
>I have it on good authority (although second-hand) that an entire
>*novel* was generated by computer. It was the result of a research
>project which aimed to "parameterize" an author's writing style. The
>study concentrated primarily on one author, Italo Calvino, and I have
>heard that the novel, "If on a winter's night a traveller", was actually
>published and marketed with Calvino's blessing.
[Jack Orenstein]
If On A Winter's Night A Traveller has, as part of its plot, the
story of two people trying to read a novel called If On A Winter's
Night a Traveller. Among their troubles is their inability to determine
the authorship of the book. At one point, they discover that the book
they are reading (or you are reading; fans of self-referentiality will
have a ball) may have been written by a machine. Maybe this rumor's
authority confused the novel with its plot. On the other hand, maybe that
was the point....
Art Medlar
Thinking Machines Corp.
------------------------------
End of AIList Digest
********************
∂07-May-86 0151 LAWS@SRI-AI.ARPA AIList Digest V4 #115
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 7 May 86 01:51:34 PDT
Date: Tue 6 May 1986 23:04-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #115
To: AIList@SRI-AI
AIList Digest Wednesday, 7 May 1986 Volume 4 : Issue 115
Today's Topics:
Query - Computer Resources Management,
Literature - Connection Machine Articles & U. Kansas Tech Reports,
Philosophy - Consciousness,
AI Approaches - Learning,
Linguistics - Trademarks
----------------------------------------------------------------------
Date: Mon 5 May 86 09:48:03-PDT
From: Jean-Pierre Dumas <DUMAS@SUMEX-AIM.ARPA>
Subject: Computer resources management
I post this for a friend.
I will forward any mail.
dumas@sumex
I am interested in computer system performance analysis and modelling.
I am developing a system, based on AI techniques, to deal with
tuning and performance planning in computer system management.
I would be delighted to be in touch with people concerned with this question.
Address :
Dr. Saddek BELAID
CISI-TELEMATIQUE
CEN-SACLAY BP 24
91190 GIF/YVETTE
FRANCE
Phone : (+33) 1 69 08 20 12
------------------------------
Date: 1 May 86 18:46:00 GMT
From: hplabs!hpfcdc!hpfcla!hpcnoe!jd@ucbvax.berkeley.edu
Subject: Re: connection machine articles
Daniel Hillis has written a book entitled
"The Connection Machine". It is available through the Library of Computer
and Information Sciences (book club). I am sure that it can be found
elsewhere. I just received the book, and it seems very readable, if not
intriguing.
Hope I have helped,
John Dye Hewlett Packard
{inhp4|hplabs}!hpfcla!hpcnoa!jd Colorado Networks Division
------------------------------
Date: 4 May 86 05:51:56 GMT
From: nike!topaz!harvard!think!bruce@ucbvax.berkeley.edu (Bruce J. Nemnich)
Subject: Re: connection machine articles
Also, if all you have read is the old AI memo, you should definitely
read the book, |The Connection Machine|, by Danny Hillis, published
last year by the MIT Press.
--
--Bruce Nemnich, Thinking Machines Corporation, Cambridge, MA
--bruce@think.com, ihnp4!think!bruce; +1 617 876 1111
------------------------------
Date: Fri, 2 May 86 10:16:26 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Tech reports - University of Kansas
Following is a list of recent technical reports which have
been issued by the department of Computer Science of the
University of Kansas in conjunction with research done in
the department's Artificial Intelligence Laboratory.
%A Frank M. Brown
%T Reasoning in a Hierarchy of Deontic Defaults
%I Department of Computer Science, University of Kansas
%R TR-86-2
%X A commonsense theory of reasoning is presented which models
our intuitive ability to reason about defaults involving both
deontic and doxastic logic. The concepts of this theory do not
involve fixed points or Kripke semantics but instead are explicitly
defined in a modal quantificational logic which captures the modal
notion of logical truth. An example involving derivations of
obligations from both a robot's beliefs and a hierarchy of deontic
defaults is given. To be published in the proceedings of the
1986 Canadian Artificial Intelligence Conference. 11 pp.
%A Frank M. Brown
%T Toward a Commonsense Theory of Nonmonotonicity
%I Department of Computer Science, University of Kansas
%R TR-86-3
%X A logical theory of nonmonotonic reasoning is presented which
permits a commonsense approach to defaults. The axioms and inference
rules for a modal logic based on the concept of logical truth are
described herein along with basic theorems about nonmonotonic
reasoning. An application to the frame problem in robot plan
formation is presented. To be published in the proceedings of the
Eighth International Conference on Automated Deduction. 12 pp.
%A Frank M. Brown
%T A Comparison of the Commonsense and Fixed Point Theories
of Nonmonotonicity
%I Department of Computer Science, University of Kansas
%R TR-86-4
%X The mathematical fixed point theories of nonmonotonic reasoning
are examined and compared to a commonsense theory of nonmonotonic
reasoning which models our intuitive ability to reason about defaults.
It is shown that all of the known problems of the fixed point theories
are solved by the commonsense theory. The concepts of this commonsense
theory do not involve mathematical fixed points, but instead are
explicitly defined in a monotonic modal quantificational logic which
captures the modal notion of logical truth. 12 pp.
%A Frank M. Brown
%T An Experimental Logic Based on the Fundamental Deduction Principle
%I Department of Computer Science, University of Kansas
%R TR-86-5
%X Experimental logic can be viewed as a branch of logic dealing with
the actual construction of useful deductive systems and their application
to various scientific disciplines. In this paper we describe an
experimental deductive system called the SYMbolic EVALuator (i.e. SYMEVAL)
which is based on a rather simple, yet startling principle about deduction,
namely that deduction is fundamentally a process of replacing expressions
by logically equivalent expressions. This principle applies both to
logical and domain dependent axioms and rules. Unlike more well known
logical inference systems which do not satisfy this principle, herein is
described a system of logical axioms and rules called the SYMMETRIC LOGIC
which is based on this principle. Evidence for this principle is given
by proving theorems and performing deduction in the areas of set theory,
logic programming, natural language analysis, program verification,
automatic complexity analysis, and inductive reasoning. To be published
in the international journal Artificial Intelligence. 120 pp.
%A Frank M. Brown
%T Automatic Deduction in Set Theory
%I Department of Computer Science, University of Kansas
%R TR-86-6
%X A proof of the definability of ordered pairs in set theory is described
and discussed. This proof was obtained in an entirely automatic way using
the SYMEVAL deduction system and the SYMMETRIC LOGIC axioms. The analogous
points in this proof where other theorem-proving methods and systems have
failed to prove this theorem are described. The ability of this system to
automatically derive one half of this theorem from the other half is also
discussed, thus showing that this kind of deduction system can be used to
produce answers other than just yes/no answers to mathematical questions.
24 pp.
%A Frank M. Brown
%T An Experimental Logic
%I Department of Computer Science, University of Kansas
%R TR-86-7
%X The fundamental deduction principle, SYMEVAL deductive system, and
SYMMETRIC LOGIC are introduced. Theorems are proved in the area of
set theory, complexity analysis and program verification. 17 pp.
%A Frank M. Brown
%T Logic Programming with an Experimental Logic
%I Department of Computer Science, University of Kansas
%R TR-86-8
%X In this paper we describe an experimental programming logic which
uses the SYMEVAL deductive system, based on the fundamental
deduction principle. Theorems and deductions are performed in the area
of logic programming and then discussed as they relate to the above
principle. 18 pp.
------------------------------
Date: 29 Apr 86 16:19:45 GMT
From: tektronix!uw-beaver!fluke!ssc-vax!bcsaic!michaelm@ucbvax.berkeley
.edu (michael maxwell)
Subject: Consciousness
In light of the recent flurry of articles on consciousness (of computers,
toasters, Entamoeba histolytica...), some readers may be interested in a
recent book: "Animal Thinking", by Donald Griffin. I've just started reading
it, so I can't say much; but the author is an ethologist (=student of animal
behavior), and his contention is that many animals *are* conscious.
--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 29 Apr 86 13:26:18 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!mcvax!ukc!warwick!gordon@ucbvax.
berkeley.edu (Gordon Joly)
Subject: Plan 5 for Inner Space - A Sense of Mind.
The following project has been proposed. Design and implement an
Intelligent System with the following characteristics. The system
is to run in real time and be portable. Memory and processing units
must be in the same enclosure, run on low power and not overheat.
(a) Full colour binocular vision, with motion perception.
(b) Speech processing and speech synthesis.
(c) Natural language ability :-
(1) Semantic ability
(2) Translation.
(3) Ability to summarise.
(4) Humour (optional).
(d) Learning ability.
(e) Ability to control a large number of servo mechanisms, with strength
and sensitivity.
(f) Other tasks, as yet unspecified, but the system must be able
to cope with extra requirements, as and when the need arises,
using characteristic (d).
Queries: Time to completion? Cost?
------------------------------
Date: 5 May 86 11:33:00 MST
From: fritts@afotec
Reply-to: <fritts@afotec>
Subject: Re: Science 86 Article on Neural Nets
Most of what I've read on this list appears to place AI closer to the
"Frankenstein" theory of assembling intelligence, fully formed and
functioning, like any other computer program; just push the button and
watch it go.
Neural networks appear to be more natural in their approach. Terry
Sejnowski's NETalk was equipped more with rules on how to learn to
perform a task than rules on how to perform a task. I think that this
is a crucial difference. If a computer is programmed only to perform
a task, then the programmer must design for every possible eventuality
which may affect the performance and for every possible consequence or
outcome. The problem is that such a program, no matter how
comprehensive, makes assumptions. These assumptions are fatal for
intelligence. They doom the program as surely as evolution dooms some
species of life on this planet, and for much the same reason. Human
intelligence may have developed as the ultimate weapon against
changing environments; better than adaptation because it allows for
greater variety of response.
So, if "intelligence" is developed in a computer through learning
mechanisms rather than assembled by means of cunning rules and
algorithms, perhaps it stands a better chance of achieving sufficient
universality that it may compete with the human mind. Odd that we
would dream of building our own competition.
I vaguely recall that a long time ago there were machines called analog
computers which worked on a principle of varying voltages and
resistances rather than the digital machine's method of detecting the
polarity (the "on" or "off" state) of a particular circuit junction.
Hopfield and Tank's neural net appears to perform in some ways similar
to an analog computer. The article is too general on the technical details
of a "neural net" machine and I add my request to others on this list
for a little better technical description. Also, perhaps someone will
enlighten me about the possible relevance or irrelevance of analog
computers to neural nets.
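As one partial answer to the request above, here is a minimal sketch (my own
illustration in modern Python, not anything from the Science 86 article) of a
binary Hopfield network. Units relax toward a stored pattern by repeatedly
taking the sign of their weighted inputs, which is closer in spirit to an
analog computer settling into a low-energy state than to a stored-program
algorithm running to completion:

```python
# Minimal binary Hopfield network: Hebbian storage plus threshold updates.
# States are +1/-1; recall "settles" a corrupted pattern back to a stored one.

def train(patterns, n):
    # Hebbian outer-product rule; symmetric weights, zero diagonal.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    # Each unit takes the sign of its weighted input, a few sweeps over units.
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
w = train([pattern], len(pattern))
noisy = [1, 1, 1, -1, 1, -1]    # one unit flipped
print(recall(w, noisy))          # settles back to the stored pattern
```

The continuous-voltage version of this relaxation is what gives Hopfield and
Tank's nets their analog flavor.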
DISCLAIMER: My opinions are my own alone and do not represent any
official position by my employer.
Steve Fritts
FRITTS@AFOTEC.ARPA
------------------------------
Date: Fri 2 May 86 11:33:51-CDT
From: Gordon Novak Jr. <AI.NOVAK@R20.UTEXAS.EDU>
Subject: Xerox as a verb
I think it was William Safire who stated the metarule:
You can verb anything.
At least in Washington.
------------------------------
Date: 2 May 86 13:30 PDT
From: Stern.pasa@Xerox.COM
Subject: Trademarks
There's language and language. Yes, there is a lexical distinction
between xerox and Xerox, but does the law recognize lexical vs other
discriminations between usages? Yes, sort of, maybe.
One might use syntactic or semantic analysis of usages to determine
whether it is acceptable to use a trademarked word, but NL workers know
how difficult it is to produce an absolute semantic analysis of
unrestricted language.
So instead, companies protecting trademarks work at a higher level of
abstraction, the real-world script. They know that any unrestricted use
of a word leads to bad consequences, so their heuristic for when to
complain is based on factors exogenous to the apparent linguistic
content in which the reference appears.
In conclusion it is fascinating that the legal protection of trademarks
should involve such a wide variety of linguistic considerations. There
are no easy answers to the theoretical questions involved, but as KIL
points out, there are laws.
P.S. None of the above is to be taken as the position of my employer.
Josh
------------------------------
Date: Fri 2 May 86 17:24:26-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Re: Names and Trademarks
Of course, I understand that the company is legally obliged to correct
uses of its name in order to retain trademark status. My point was that
as a matter of fact, like it or not, to xerox is now a verb in common
usage, and is going to stay that way. I have a dictionary ( Random
House ) in which Xerox(TM) is both a noun and a verb, and xerography (
no capitalisation or trademark ) is another noun. Clearly, the company
insisted that the publishers gave trademark acknowledgement: equally
clearly, both the publishers and the company acknowledge that the word is
part of the language.
There are important differences between xerox and aspirin on one hand, and
exxon and frigidaire and IBM on the other. In these latter cases, identification
was due to the company having a dominating position in the market, and
nothing else. In the former cases, it was the only owner and supplier of
a vital piece of technology, during the period in which it became a
commonplace of everyday life, and indeed transformed everyday life. And
finally, a last difference is that in both the former cases there was and
is no alternative way of referring to the things available. What is a Canon
xerox machine if it isn't that? It has to be some awkward neologism like
a dry copier, or a copier using the xerographic process. What would we call
an aspirin if we couldn't call it an aspirin? Like Bayer, you guys have been
too successful: not only did you invent the process, you also invented the
way of talking about it.
Patrick J. Hayes (TM)
PS> I am told that in the oil business, to schlumberger is a common verb.
------------------------------
Date: Sat 3 May 86 02:20:42-PDT
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: "Exxon" vs. "exon" .
I remember reading in Time Magazine around 1970 when ENCO/ESSO Oil Corp.
unveiled its new name, "Exxon". They had looked through all the major
languages of the world to find one word that didn't mean anything in any of
them. "Exxon" was what they found.
Alas. In 1978, Walter Gilbert of Harvard came up with new words
for the newly discovered pieces that eukaryotic genes are made of: "intron"
for the spliced out sequences, and "exon" for the expressed sequences.
I wonder what went through the minds of the CEOs of Exxon when they caught
wind of this?
I should mention a sign I saw in the Bioengineering Dept. at UC
Berkeley:
--AXXON--
MOTOR NEURON SERVICE
-Lee Altenberg
------------------------------
End of AIList Digest
********************
∂08-May-86 1417 LAWS@SRI-AI.ARPA AIList Digest V4 #116
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 May 86 14:17:27 PDT
Date: Thu 8 May 1986 10:14-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #116
To: AIList@SRI-AI
AIList Digest Thursday, 8 May 1986 Volume 4 : Issue 116
Today's Topics:
Seminars - Planning, Knowledge, and Action (UPenn) &
Knowledge Engineering as Ontological Analysis (SU) &
Default Theories and Autoepistemic Logic (CSLI) &
NL Database Query Systems (UPenn) &
Sequential and Parallel Inference Machines (Edinburgh) &
Ulysses Expert-System VLSI Design Environment (UPenn) &
Granularity (SRI) &
Eazyflow: an Effective Alternative to Dataflow (CMU),
Conference - Workshop on Intelligent Interfaces at AAAI-86
----------------------------------------------------------------------
Date: Mon, 5 May 86 13:56 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Planning, Knowledge, and Action (UPenn)
Colloquium - University of Pennsylvania
3:00pm May 6, 1986
216 Moore School
A FIRST ORDER THEORY OF PLANNING, KNOWLEDGE, AND ACTION
LEORA MORGENSTERN - NEW YORK UNIVERSITY
Most AI planners work on the assumption that they have complete knowledge of
their problem domain and situation, so that formulating a plan consists of
searching through some pre-packaged list of action operators for an action
sequence that achieves some desired goal. Real life planning rarely works this
way, because we usually don't have enough information to map out a detailed
plan of action when we start out. Instead, we initially draw up a sketchy plan
and fill in details as we proceed and gain more exact information about the
world.
This talk will present a formalism that is expressive enough to describe this
flexible planning process. We begin by discussing the various requirements
that such a formalism must meet, and present a syntactic theory of knowledge
that meets these requirements. Next, we discuss the paradoxes, such as the
Knower Paradox, that arise from syntactic treatments of knowledge, and propose
a solution to these paradoxes based on Kripke's solution to the Liar Paradox.
Finally, we give solutions to the Knowledge Preconditions and Ignorant Agent
Problems as part of an integrated theory of planning.
The talk will include comparisons of our theory with other syntactic and modal
theories such as Konolige's and Moore's. We will demonstrate that our theory
is powerful enough to solve classes of problems that these theories cannot
handle.
------------------------------
Date: Mon 5 May 86 17:20:08-PDT
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Knowledge Engineering as Ontological Analysis (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Knowledge Engineering as Ontological Analysis
Speaker: Patrick Hayes
From: Schlumberger Palo Alto Research
Date: Wednesday, May 7, 1986
Time: 4:00 - 5:30
Place: Terman 556
When designing a knowledge-base for use by an AI system, it is important to
bear in mind how utterly stupid computers are. We must provide them with a
vocabulary in which to think about their world, and the scope of their thoughts
is then limited by the expressiveness of this vocabulary: in particular, the
kinds of object it is able to talk about. This talk will illustrate this
point, and emphasise how important it is to choose the representational
vocabulary to fit both the limitations and the range of the system's desired
abilities. Ways of referring to times and events will be used as examples.
Visitors welcome!
------------------------------
Date: 05 May 86 1625 PDT
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Default Theories and Autoepistemic Logic (CSLI)
ON THE RELATION BETWEEN DEFAULT THEORIES AND AUTOEPISTEMIC LOGIC
Kurt Konolige
SRI International and CSLI
Common Sense and Non-Monotonic Reasoning Seminar
Thursday, May 8, 4pm
MJH 252
Default theories are a formal means of reasoning about defaults: what
normally is the case, in the absence of contradicting information.
Autoepistemic theories, on the other hand, are meant to describe the
consequences of reasoning about ignorance: what must be true if a
certain fact is not known. Although the motivation and formal
character of these systems are different, a closer analysis shows that
they bear a common trait, which is the indexical nature of certain
elements in the theory. In this talk I will show how default theories
can be reanalyzed as a restricted type of indexical theory. The
benefits of this analysis are that it gives a clear (and clearly
intuitive) semantics to default theories, and combines the expressive
power of default and autoepistemic logics in a single framework.
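The flavor of the two kinds of reasoning in the abstract above can be sketched
with the standard "birds normally fly" example (my own illustration in Python,
not Konolige's formalism). A default rule fires by negation-as-failure: we
conclude fly(x) from bird(x) unless an exception is known, which mirrors the
autoepistemic reading "if I do not know that x is abnormal, then x flies":

```python
# A toy theory as a set of ground facts; the default consults what is known.
known = {("bird", "tweety"), ("bird", "opus"), ("abnormal", "opus")}

def flies(x):
    # Default: birds fly, in the absence of contradicting information.
    # Autoepistemic reading: "abnormal(x) is not known" licenses the conclusion.
    return ("bird", x) in known and ("abnormal", x) not in known

print(flies("tweety"))  # True: no contradicting information is known
print(flies("opus"))    # False: the exception is in the theory
```

The indexicality Konolige points to shows up in the phrase "not known": the
rule refers to the state of this very theory.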
------------------------------
Date: Tue, 6 May 86 21:40 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - NL Database Query Systems (UPenn)
Forwarded From: Naoki Abe <Abe@UPenn> on Tue 6 May 1986 at 18:18
A REMINDER OF A COLLOQUIUM
Thursday 5/8, 3:00pm, 216 Moore School
There will be an interesting talk on natural language database query systems
by Dr. Stanley R. Petrick of I.B.M. Thomas J. Watson Research Center. Dr.
Petrick is a former president of the Association for Computational
Linguistics, and is known for developing the first parsing algorithm for
transformational grammars, characterizing various parsing algorithms for
context free grammars in terms of push down automata, as well as his earlier
work on the minimal covering problem and its application to speech
recognition. In this talk he will discuss more practical issues concerning
natural language query systems. The following is the abstract of this talk.
Natural Language Database Query Systems
Dr. Stanley R. Petrick
Thomas J. Watson Research Center, I.B.M.
In recent years many computer systems have been developed with limited
capabilities for understanding natural language requests for information
from a given database and for responding appropriately. In this talk we
shall attempt to characterize the theory underlying these systems and the
level of performance that they have demonstrated. Special attention will be
given to the problem of customizing such systems to handle new databases.
Illustrative material will be drawn from the T.Q.A. (Transformational
Question Answering) system, an experimental prototype being developed at
I.B.M. Research.
------------------------------
Date: Tue, 6 May 86 10:18:52 -0100
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@Cs.Ucl.AC.UK>
Subject: Seminar - Sequential and Parallel Inference Machines (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday, 7th May l986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence
Seminar Room - F10
80 South Bridge
EDINBURGH.
Professor David H.D. Warren, Department of Computer Science, University
of Manchester will give a seminar entitled - "Sequential and Parallel
Inference Machines".
There is a growing interest, stimulated in large part by Japan's Fifth
Generation project, in computer architectures where the basic machine
language is a form of symbolic logic, and the basic machine operation
is a form of logical inference. Prolog is the best known, but not the
only, example of such a language.
How fast can such machines run? I will consider both sequential
machines, which perform only one logical inference at a time, and
parallel machines, which can perform more than one logical inference at
a time.
First, I will describe my work with Evan Tick on the design of a Prolog
instruction set and pipelined processor. This work suggests that a
sequential Prolog machine can achieve a speed approaching one million
logical inferences per second (1M LIPS) with current device technology.
This estimate has been confirmed by experimental prototypes, produced
at Berkeley and NEC.
In the second part of the talk I will discuss various approaches to
exploiting parallelism, including the Argonne approach to
or-parallelism, and the approach of DeGroot and others to
and-parallelism.
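To make the unit behind the LIPS figure concrete, here is a toy propositional
backward chainer (my own sketch in Python, not Warren's instruction set). Each
rule application counts as one "logical inference"; a sequential machine
performs these one at a time, which is exactly what the LIPS rate measures:

```python
# Rules map a goal to alternative bodies (conjunctions of subgoals).
rules = {
    "grandparent": [["parent", "parent_of_parent"]],
    "parent": [["mother"], ["father"]],
    "parent_of_parent": [["grandmother"], ["grandfather"]],
}
facts = {"father", "grandmother"}
inferences = 0  # count of rule applications, the "LI" in LIPS

def prove(goal):
    # Depth-first search, one logical inference at a time.
    global inferences
    if goal in facts:
        return True
    for body in rules.get(goal, []):
        inferences += 1
        if all(prove(g) for g in body):
            return True
    return False

print(prove("grandparent"), inferences)  # True 4
```

An or-parallel machine of the kind discussed in the talk would explore the
alternative bodies for a goal simultaneously instead of in sequence.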
------------------------------
Date: Wed, 7 May 86 14:31 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Ulysses Expert-System VLSI Design Environment (UPenn)
Electrical Engineering Colloquium - University of Pennsylvania
11:00am May 9, 1986 - 129 Pender Lab
Ulysses -- An Expert-System Based VLSI Design Environment
Michael l. Bushnell
Carnegie-Mellon University
It has recently been observed that the initial engineering design cost
for VLSI circuits is beginning to exceed the lifetime production cost.
In order to reduce this prohibitive design cost, which is limiting the
practical applications of VLSI technology, we need a real increase in
the automation of design activities. Ulysses is a VLSI CAD environment
which effectively addresses the problem of CAD tool integration and
which also allows further automation of the VLSI design process. The
goal of this environment is to raise the designer interface for CAD
systems from the CAD tool level to the design task level. The environment
is intended to be used in design synthesis, design-for-testability,
analysis, verification and optimization activities at all levels of
VLSI design. Specifically, Ulysses alleviates the problems caused by
incompatible file formats for CAD tools, allows one to codify a design
methodology, allows the methodology to be semi-automatically executed
and allows the VLSI design space to be explicitly represented. The
environment automatically executes existing CAD tools, according to
instructions expressed in the codified design methodology, in order to
accomplish design tasks. Ulysses keeps track of the progress of a
design and lets the designer explore the design space. Ulysses uses
Artificial Intelligence methods, functions as an interactive expert
system, and interprets language descriptions of design tasks, which
are described in the Scripts language. Alternatively, the Scripts
language may be viewed as an organization-structuring language for CAD
applications in engineering. An example of an IC layout design task
will be presented, in which a knowledge-based router, a layout
synthesizer, and an interactive floor planner will be controlled by
Ulysses in a non-deterministic and opportunistic fashion in order to
produce a viable IC layout from a circuit description expressed in the
logic element/transistor level in a hardware description language.
------------------------------
Date: Wed 7 May 86 14:51:37-PDT
From: LANSKY@SRI-AI.ARPA
Subject: Seminar - Granularity (SRI)
GRANULARITY
Jerry R. Hobbs (HOBBS@SRI-AI)
Artificial Intelligence Center, SRI International
CSLI, Stanford University
11:00 AM, MONDAY, May 12
SRI International, Building E, Room EJ228 (new conference room)
We look at the world under various grain sizes and abstract from it only
those things that serve our present interests. We can view a road, for
example, as a line, a surface, or a volume. Such abstractions enable us
to reason about situations without getting lost in irrelevant
complexities. Knowledge-rich intelligent systems will have to have
similar capabilities. In this talk I will present a framework in which
we can understand such systems. In this framework, a knowledge base
consists of a global theory together with a large number of relatively
simple, idealized, grain-dependent local theories, interrelated by
articulation axioms. In a complex situation, the crucial features are
abstracted from the environment, determining a granularity, and the
corresponding local theory is selected. This is the only computation
done in the global theory. The local theory is then applied in the bulk
of the problem-solving process. When shifts in perspective are
required, articulation axioms are used to translate the problem and
partial results from one local theory to another. In terms of this
framework, I will discuss idealization, the concepts of supervenience
and reducibility, prototype-deformation types of description, the
emergence of global properties from local phenomena, and the
relationship of granularity to circumscription. Several examples of
uses of this framework from a wide variety of applications will be
given.
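The road example from the abstract can be sketched very simply (my own reading
of the framework, in Python; the names are illustrative, not Hobbs's
notation). Two grain-dependent local theories describe the road, and an
"articulation axiom" translates a fine-grained description into the coarser
one where the actual problem-solving happens:

```python
# Local theory 1: the road as a 1-D line; positions are mileposts.
def line_distance(m1, m2):
    return abs(m2 - m1)

# Local theory 2: the road as a surface; positions are (milepost, lane).
def articulate(surface_point):
    # Articulation axiom: project to the coarser grain, discarding the
    # lane as irrelevant to a distance query.
    milepost, _lane = surface_point
    return milepost

a, b = (3.0, "left"), (10.0, "right")
# Select the granularity, translate, then solve in the simpler local theory.
print(line_distance(articulate(a), articulate(b)))  # 7.0
```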
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: 6 May 1986 0754-EDT
From: Theona Stefanis@A.CS.CMU.EDU
Subject: Seminar - Eazyflow: an Effective Alternative to Dataflow (CMU)
PS SEMINAR
Edward Ashcroft, SRI
Date: Monday, 12 May
Time: 3:30
Place: WeH 5409
Title: "Eazyflow: an Effective Alternative to Dataflow"
Eazyflow is an evaluation strategy for Operator Nets. Syntactically,
operator nets are similar to dataflow graphs. Their semantics is
expressed mathematically, and is more general and elegant than the
semantics of dataflow networks. Various ways of specifying their
operational semantics are possible, and eazyflow is one such way: a
hybrid of demand-driven and data-driven evaluation. (Data-driven
evaluation is what is normally called dataflow. Demand-driven
evaluation avoids a lot of the problems that dataflow has. Eazyflow is
basically demand-driven, with data-driven computation taking place when
it can do so without causing too many problems.)
This talk will
describe operator nets and their mathematical semantics, indicate how
they correspond exactly to programs in the language Lucid, show how
eazyflow is often superior to dataflow, and briefly describe the
architecture of the eazyflow engine that is soon to be built at SRI (the
Eazyflow Architecture Project is currently part of the DARPA Strategic
Computing Program). Also, some simulation results that have been
obtained for the architecture will be described.
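The demand-driven half of the hybrid can be illustrated with lazy streams (my
own Python sketch using generators; `fby` is borrowed from Lucid's
"followed by" operator, but this is not Ashcroft's engine). Values are
produced only when a consumer demands them, unlike pure data-driven dataflow,
which fires an operator as soon as its inputs arrive:

```python
# An infinite Lucid-style stream, evaluated only on demand.
def naturals():
    n = 0
    while True:
        yield n          # produced when demanded, never eagerly
        n += 1

def fby(first, stream):
    # Lucid's "followed by": first value, then the rest of the stream.
    yield first
    yield from stream

def take(k, stream):
    # The demands: pull exactly k values, so only k are ever computed.
    return [next(stream) for _ in range(k)]

print(take(5, fby(42, naturals())))  # [42, 0, 1, 2, 3]
```

Demand-driven evaluation sidesteps the unbounded-buffer problems of pure
dataflow because nothing is computed ahead of a demand.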
------------------------------
Date: 6 May 1986 12:02-PDT
From: Neches@isi-vaxa.arpa (Robert Neches)
Subject: Conference - Workshop on Intelligent Interfaces at AAAI-86
A workshop on Intelligent Interfaces is scheduled to be held on Thursday,
August 13th, as part of the AAAI conference in Philadelphia. We would
like to bring the call for abstracts to your attention, and would appreciate
it if you would circulate it to anyone else who might find it of interest.
-- Tom Kaczmarek (Kaczmarek@USC-ISIB.arpa)
Bob Neches (Neches@ISI-Vaxa.arpa)
Workshop Co-chairs.
USC / Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
(213) 822-1511
Questions may be addressed to either chairman; abstracts should be sent to
Tom Kaczmarek by June 15. The complete call for abstracts follows.
*****************************************************************************
WORKSHOP ON INTELLIGENT INTERFACES AT AAAI-86
Many AI techniques are applicable to building better human-machine
interfaces. The purpose of this workshop is to investigate intelligent
interface techniques that can potentially span many interaction modalities.
The workshop will discuss interfaces to knowledge-based systems as well as
conventional interactive systems. Past work in this area has been directed
at using AI to provide either an "intelligent apprentice" or a collection of
"power tools." The intelligent apprentice emphasizes assistance based on an
understanding of the user's intentions and task domain. The power tool
approach emphasizes a powerful command set, but leaves the responsibility
for selecting and applying commands in the hands of the user. This workshop
is concerned not just with the extremes of this dichotomy, but also with
work that shows how to blend the two approaches effectively. Work on
specific media and modalities (e.g., natural language text or speech
understanding) is also relevant in that it can provide abstractions of
understanding and generation that will be potentially useful across a wide
range of interface media and modalities.
Topics to be discussed:
What are the fundamental interface problems that AI can help solve?
What specific AI techniques can be useful in solving these problems?
What abstractions of "understanding" and "generation" can come from
work on natural language text and speech?
What are the possibilities for symbiotic relationships between
intelligent interfaces and intelligent systems?
What does it take to create intelligent interfaces to conventional
interactive systems?
Are the power tools and intelligent assistance approaches at odds
with one another? Are middle-of-the-road approaches
motivated by pragmatism or principle?
Organizers: The workshop organizers are Thomas Kaczmarek, Larry Miller,
Robert Neches and Norman Sondheimer of the USC/Information Sciences
Institute.
Participation: The workshop will run for a full day on Thursday, August 13
at the University of Pennsylvania. The format will be a combination of
short informal presentations and open discussions with the former being used
to stimulate the latter. These will be organized in four sessions, the
topics of which will be finalized after reviewing the declared interests of
participants. Attendance will be by invitation only; there will be a
maximum of 50 participants. Those wishing to participate should submit four
copies of a 1000-word abstract describing either their work building
intelligent interfaces or a position on a topic relevant to the goals of the
workshop. Abstracts should provide contact information at the top, as they
will be duplicated and distributed to the other workshop attendees.
Participants with a willingness to make a short presentation (15-30 minutes)
about either their research or a position on a relevant topic should
indicate this desire in a cover letter sent with the abstract. If multiple
members of a research group would like to attend, please indicate the
number involved in the cover letter also. Abstracts should be sent to
Thomas Kaczmarek, USC/ISI, 4676 Admiralty Way, Marina del Rey, CA.
90292-6695. They may also be transmitted electronically to
Kaczmarek@USC-ISIB.arpa. The deadline for submission of abstracts is June
15, 1986. Invitations will be issued by July 15.
------------------------------
End of AIList Digest
********************
∂08-May-86 2356 LAWS@SRI-AI.ARPA AIList Digest V4 #117
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 8 May 86 23:55:22 PDT
Date: Thu 8 May 1986 21:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #117
To: AIList@SRI-AI
AIList Digest Friday, 9 May 1986 Volume 4 : Issue 117
Today's Topics:
AI Tools - Common Lisp Support for Object-Oriented Programming
----------------------------------------------------------------------
Date: 29 Apr 86 11:26:00 PDT
From: "Jennings, Richard" <jennings@lll-icdc.ARPA>
Reply-to: "Jennings, Richard" <jennings@lll-icdc.ARPA>
Subject: Summary: Common Lisp Support for Object Oriented Programming
I posted a query to AILIST concerning available PUBLIC DOMAIN object
oriented support within the Common Lisp programming environment. This
article should serve to disseminate to the network the information I
obtained. FIRST, my original query is reproduced. SECOND, is a summary of
specific references (network addresses) to whom follow-up questions should
be directed. THIRD is an edited summary of some of the responses I
received. Finally, I am providing some further information about the
project I am supporting, because of numerous requests.
I. Original Query:
Article 469 of mod.ai:
Subject: Object Oriented Support For Common Lisp
Date: 24 Apr 86 06:46:40 GMT
I am working on a project trying to couple a good programming
environment exploiting object oriented paradigms to a grid of INMOS
Transputers. Rather than build up everything from the OCCAM
development system, I would like to use the VAX LISP (a variant of
Common Lisp) environment augmented with a public domain (preferably)
object oriented package as a model for the system I intend to build
for the Transputers.
1) I would like pointers to environments which are compatible (sit on
top of) VAX LISP which directly support object oriented programming;
2) notes from those who may be working on (or interested in) such
projects; and
3) responses sent directly to me since I do not have regular access to
AILIST. I will summarize.
II. Summary of Network Information Sources
CORBIT => desmedt%hnykun52.BITNET@wisc.wisc.edu
PD Common Lisp => fahlman@c.cs.cmu.edu
Common Loops => gregor.pa@xerox.com
REX => wells@sri-ai.arpa
NCUBE => duke@mitre.arpa
VAXLisp Flavors => beer%case.CSNET@csnet-relay.arpa
III. Summary of Responses
===============================================================================
[1] Subject: OOPS for Lisp and VAXLISP on Transputer
Return-Path: <Fischer.pa@Xerox.COM>
The lisp sources that VAXLISP was built from are public domain ... These are
the Spice Lisp sources from CMU, contact Fahlman@c.cs.cmu.edu.
There are a few object programming standards emerging within the Common
Lisp community. A partial list is: Xerox CommonLOOPS, New Flavors
(Symbolics), Common Objects (HP Labs), and Object Lisp (LMI).
Group Information Contact
CommonLOOPS Gregor.pa@Xerox.Com
New Flavors Moon@SRC.Symbolics.Com
Common Objects Snyder@HPLabs.Com
Object Lisp ? [anybody know about this? -rkj]
[A message to Scott Fahlman obtained:]
Subject: Public Domain VAX LISP
Return-Path: <FAHLMAN@C.CS.CMU.EDU>
What we could give you is Spice Lisp, developed here for the Perq computers and
in the public domain. This is essentially Common Lisp written in Common Lisp
and a Common Lisp compiler, written in Common Lisp, that produces a special
Lisp-oriented byte code that is executed by custom microcode on the Perq. We
also have an Emacs-like text editor written in Common Lisp, and some assorted
utilities and demos. There's a design document describing the internal
organization and defining the byte codes.
What various manufacturers, including DEC, have done is to take our sources and
modify the code generation modules of the compiler to produce native code for
their machine. Most of the byte codes turn into a few native-code instructions
on something like a Vax. The more complex ones turn into quick jumps to
hand-coded subroutines, probably written in assembler: bignum arithmetic,
building and taking apart stack frames, GC, lowest-level operating system
interfaces, and so on.
To build a system, you then run the modified compiler on some system that
already has a Common Lisp, compile our whole body of Lisp code, link in the
hand-coded stuff, and if you've done it all right the Lisp comes up in its full
glory, ready to go. It is possible to get a reasonably fast implementation by
this route, though it takes a good deal of thought and tuning to get the data
formats and calling conventions just right.
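The byte-code scheme Fahlman describes can be pictured with a toy interpreter
(my own sketch in Python; these are not the actual Spice Lisp byte codes). A
port replaces the dispatch loop with native code, so that each simple byte
code "turns into a few native-code instructions":

```python
# A minimal stack-based byte-code interpreter.
def run(code, env):
    stack = []
    for op, arg in code:
        if op == "push":        # push a constant
            stack.append(arg)
        elif op == "load":      # push a variable's value from the environment
            stack.append(env[arg])
        elif op == "add":       # a simple op a port compiles to native code
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

# The expression (+ x 1) compiled to byte codes:
print(run([("load", "x"), ("push", 1), ("add", None)], {"x": 41}))  # 42
```

Complex operations (bignums, stack-frame handling, GC) instead become quick
jumps to hand-coded subroutines, as the message above explains.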
Veterans of our implementation effort have done this port to a new architecture
in as little as six man-months ... under optimal conditions. A more realistic
time scale for someone starting from scratch on this would be two good wizards
working six months to get something turning over, and another six months or a
year to get the thing up to full speed and product quality. That's for a
straightforward port ... All of the sources add up to 3 or 4 megabytes...
Since you ask about Vaxlisp, it is the result of the same process described
above, and the work was done here at CMU by DEC employess and by me as a
consultant to DEC. The result is owned by DEC and we can't give it to anyone.
-- Scott
===============================================================================
[2] Subject: CORBIT - an object-oriented programming system
Return-Path: <DESMEDT%HNYKUN52.BITNET@wiscvm.wisc.edu>
You may be interested in CORBIT, an object-oriented programming
environment in Common LISP. I will summarize some aspects of the system.
1. History
CORBIT stands for ORBIT in Common LISP. ORBIT was originally written in
MACLISP by Luc Steels at Schlumberger, then rewritten in FRANZ LISP by
Steels and myself, and finally rewritten in NIL Common LISP by myself at
the University of Nijmegen. ORBIT and CORBIT are now used by a small
number of people in the scientific community and are not commercially
available.
2. What is it?
CORBIT is basically an object-oriented extension of LISP. As such, it
ranks among the Flavors package, Common Loops, etcetera. However, it has
some features which make it stand out from the pack:
- Inheritance is done by delegation rather than by copying down. See
recent articles by Henry Lieberman for a discussion of this distinction.
- Partly as a result of this, the system is much more flexible than most
similar systems with respect to adding and changing information. For
example, one can create instances of objects that don't exist yet.
- There is no formal distinction between 'classes' and 'instances'.
Everything is an 'object'. The only hard distinction is between named
objects and anonymous objects.
- Invocation of an object-oriented operation is performed by plain LISP
function calling, not by message passing (as done in Common Loops).
- There is no distinction between so-called 'instance variables' and
'methods'. Everything is accessed by means of a function.
- There are a number of fancy extra features such as 'if-needed' methods
and backpointers.
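The delegation-versus-copying distinction in the first point can be sketched
as follows (my own Python illustration; CORBIT itself is Common LISP and its
internals are not shown here). An object that cannot answer a request forwards
it to its parent at lookup time, so a later change to the parent is
immediately visible to existing children, unlike copying slots down when a
child is created:

```python
# Prototype objects with inheritance by delegation.
class Obj:
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.parent is not None:
            return self.parent.get(name)   # delegate the lookup upward
        raise AttributeError(name)

vehicle = Obj(wheels=4)
car = Obj(parent=vehicle, brand="generic")
print(car.get("wheels"))       # 4, found by delegation to the parent
vehicle.slots["wheels"] = 6    # change the parent after the child exists
print(car.get("wheels"))       # 6: the child sees the change immediately
```

This is also why such a system can "create instances of objects that don't
exist yet": the delegation link is only followed when a slot is asked for.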
3. Applications
Applications have been very much of a pre-prototypical nature so far.
ORBIT has been used for VLSI-design, representation of geological
knowledge, implementation of a (rather primitive) window system, natural
language generation, and a small office environment.
4. Accessibility
The only two papers related to CORBIT are really related to its
predecessor ORBIT: (1) a manual (slightly outdated, published as an
internal report, now out of print but still available on electronic
medium) and (2) a forthcoming article which compares the ORBIT and
Flavors systems.
Koenraad De Smedt Bitnet address: DESMEDT@HNYKUN52
University of Nijmegen
Psychological Lab
Montessorilaan 3
6525 HR Nijmegen
The Netherlands
[Koenraad sent me a copy of the report, ~43p, which I am now reading -rkj]
===============================================================================
[3] Subject: Portable CommonLoops
Return-Path: <Gregor.pa@Xerox.COM>
Portable CommonLoops (PCL) is an implementation of CommonLoops written entirely
in CommonLisp. Currently, PCL runs in the following Common Lisps:
Xerox
Symbolics
Lucid
Spice
TI
VAXLisp
PCL is available to Arpanet sites by anonymous FTP (username "anonymous",
password "anonymous"). For the time being we are restricting distribution to
sites which can FTP PCL for themselves because we want to make it possible to
have frequent new releases of PCL.
The files are stored on PARCVAX.xerox.com. You can copy them using
anonymous FTP. There are several directories which are of interest:
/pub/pcl/ PCL sources and (some) documentation
The following directories contain binary files for some
of the machines PCL runs on; there will be more machine-specific
directories once we are set up to produce more binaries.
/pub/pcl/3600/ binaries for the 3600 (rel 6.1)
/pub/pcl/lucid/sun/ binaries for Lucid Lisp on the SUN (rel 1.0)
/pub/pcl/ti/ binaries for the TI Explorer
...
In the directory /pub/pcl/ the files:
notes.tx contains notes about the current state of PCL, and some
instructions for installing PCL at your site. You should
read this file whenever you get a new version of PCL.
manual.tx is a VERY ROUGH [very rough -rkj] pass at "documentation".
I hope there will be some better documentation soon.
Send mailing list requests or other administrative stuff to:
CommonLoops-Coordinator@Xerox.com
[I obtained the VAXLisp sources, and am in the process of bringing them
up. They seem to be written for VAXLisp 2.0 under Ultrix; we have
version 1.1 running under VMS. I should know in a week or so if I
can get CommonLoops up - rkj]
===============================================================================
[4] Subject: VAXLisp Flavors
Return-Path: <beer%case.CSNET@csnet-relay.arpa>
Here at the Center for Automation and Intelligent Systems Research we have
developed a number of tools and utilities for VAX LISP, one of which is an
implementation of Flavors. The distribution details are a bit vague right now,
but it looks like the object code will be public domain for a tape and a small
handling fee ($5 or $10).
The Flavors implementation is fairly complete except for a small number of
method combination types. It also currently depends on a rather hacked up
dynamic closure implementation for VAX LISP. However, we are currently working
on a more complete and efficient implementation which is also closer to "New
Flavors". This new implementation will not require dynamic closures.
I will be releasing a report sometime in the coming month describing all of
these facilities ... A number of other VAX LISP tools and utilities are also
described, such as a pattern-based top-level history mechanism, a pattern-based
apropos facility, an extensible top-level command facility, and an extensible
DESCRIBE facility. I'll probably post a message about the availability of
these facilities on AIList...
Randall D. Beer
Center for Automation and Intelligent Systems Research
Case Western Reserve University
Glennen Bldg. Room 312
Cleveland, OH 44106
(beer%case@CSNet-Relay.ARPA)
===============================================================================
[5] Subject: Object Oriented Transputer Programming
Return-Path: <duke@mitre.ARPA>
However, I question whether Common Loops, or probably any available
package, is likely to aid your project. You state that you intend to use
your object-oriented package (OOP) to model the system you intend to build
for the transputer net.
[My objective is to use existing concepts (from Common
Lisp, Common Loops etc) as a pattern for
incorporating parallelism (via transputers) into
computer aided engineering workstations -rkj]
First of all, a good package for the VAX will probably have more features
than you are likely to want to develop for the Transputer (unless you plan
a large and expensive project). Also, you would need a full implementation
of Common Lisp on the transputer if you were going to try to port large
parts of the code from the VAX OOP to a transputer OOP. My concern would
be that the model implemented on the VAX would be too different from what
you would be able to implement on the transputer...
[Our concept consists of a VAXStation II/GSX
*augmented* by a transputer array (later to
evolve to Application Specific Integrated Circuits).
VAXLisp/Common Lisp runs on the VAXStation and
uses transputer networks as peripherals. The
only parts of Common Lisp which need to migrate
to the transputers are the parts required to
provide an overall workstation environment which
can be *fully understood* with CL/OOP concepts:
-rkj]
Here at Mitre in McLean, VA we are working on a project for doing
object-oriented programming on the BBN Butterfly. Our application is a
battlefield simulation. BBN has just begun beta-testing their parallel
lisp dialect at three locations in the US. One of the test sites is the U.
of Maryland, and we will be using their machine until we acquire our own
Butterfly later this year. The Lisp dialect on the Butterfly is like
Scheme, but BBN is modifying it towards Common Lisp. BBN has some people
working on adapting CommonLoops for their parallel environment
(shared-memory), but I believe that is a very ambitious project and I
question whether they will complete it by their planned date of mid-87.
Since we require an OOP for our project before that time, I have written a
small one in Scheme. Since there are only two people on our project, we
have to limit the scale of our efforts. However, I am quite happy with my
OOP, which has only been running the past few weeks. It has a substantial
array of features implemented with a small amount of code (~1000 lines). I
feel that its small size will make it easier to modify for a shared-memory
parallel environment. If you are planning to implement a Lisp dialect for
the transputer, you might consider Scheme. It is a simple language, and
there has been some standardization of it. MIT distributes a version
called CScheme (implementation of Scheme in C) that could possibly be
useful to you. [seems to be UNIX as opposed to VMS oriented]...
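As an illustration of how small such a closure-based object system can be (this is not Briscoe's Scheme code; the names are invented, and Python stands in for Scheme), an object can be just a dispatch procedure closed over its local state:

```python
# Sketch of a tiny message-passing object system built from closures, in
# the style commonly used in Scheme: the object *is* a dispatch function;
# its "instance variables" live in the enclosing environment; "methods"
# are entries in a table consulted by the dispatcher.

def make_counter(start=0):
    state = {'count': start}           # state captured by the closure

    def increment(by=1):
        state['count'] += by
        return state['count']

    def value():
        return state['count']

    methods = {'increment': increment, 'value': value}

    def dispatch(message, *args):      # the object itself
        if message not in methods:
            raise ValueError(f"unknown message: {message}")
        return methods[message](*args)

    return dispatch

c = make_counter()
c('increment')
c('increment', 4)      # the counter now holds 5
```

The whole mechanism is a few dozen lines even with inheritance added, which is consistent with a full-featured system fitting in roughly a thousand lines.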
... I have recently learned more about the NCUBE computer, and it sounds
very impressive for a hypercube architecture. It uses a special VLSI chip
which implements a VAX-like CPU with floating point and communication
channels, so it has a level of integration similar to the transputer. The
NCUBE chip is supposed to be about twice the speed of the mid-range VAXes
(780?). Nodes are currently configured with 128K RAM, and each node is made
up of only seven chips, six memory and the special custom VLSI. In
addition to their complete systems (up to 1K processors), they sell a four-
processor board which will plug into an IBM AT. You can put up to four of
the boards in the AT. The prices I was quoted were about $10K for each
board and about $5K for software licenses. The AT software environment is
identical to their large machines. I think their operating system is
UNIX-like and they have Fortran and C languages.
[We are not tied to the Transputer; it's a convenient
integrated circuit to learn with -rkj]
Duke Briscoe
duke@mitre
===============================================================================
[6] Subject: object... inmos...
Reply-To: WELLS@SRI-AI
My group at SRI (AI center, mobile robot) is developing code for a mobile
robot in a language which is basically a high level machine description
language which combines functional and declarative style (with all
unification happening at compile time). This language is nice for
expressing parallel architectures. Currently we compile to C programs for
sequential simulation. Someday we'd like to run these programs on a
parallel machine. I've toyed with the idea of hooking up a bunch of inmos
chips. The thought of the systems work is pretty intimidating. The
language we're using (REX) is implemented in common lisp.
--Sandy Wells
[REX is not in the public domain, but a report describing it may
be soon available -rkj]
===============================================================================
Others provided me with very helpful information (especially
lanning.pa@Xerox), though not of general net interest.
______
IV. Task Elaboration
The project I am working on has been going on here for much longer
than the two months I have been at LLNL. So I will provide a brief
overview of what I think we are trying to accomplish and then mail
pointers to other members of our team.
Basically, we are trying to improve the productivity of LLNL engineers
in areas where commercial products are not yet available. We have two
major efforts now underway: 1) EAGLES and 2) the Systolic Array Project.
Dennis Obrien (Obrien@lll-icdc) manages both these efforts (among other
things).
Eagles is a user environment which allows multiple software codes to
be used through a single window-oriented user interface. Using Objective
C, an object-oriented preprocessor for C marketed by PPI, a set of tools
has been developed to bind interactive languages, graphics routines,
matrix editors, and help systems smoothly together so users need only
learn one interface. This system (for Control Engineers) has just been
released to beta test. Brian Lawver manages this effort. Queries should
probably be sent to Obrien@lll-icdc, since needless to say, Brian will
be quite busy for a while.
The Systolic Array Project will evaluate the Transputer chip, build
two test boards, and design a processor and interface board which can be
plugged into a VAXStation. The test boards are running, and the processor
board has been designed, pending the conclusion of transputer testing.
The current plan is to put 2x8 arrays of transputers on each board (4
boards initially), each with 128KB of memory. Board layout should start
in May, and we hope to have something running by August. Tony Degroot
(Degroot@lll-icdc) is managing the Systolic Array Project, and Eric
Johansson (johansson@lll-icdc) is most intimately involved with the
hardware design.
My job is to tie these two projects together, and I would like to do
it with Common Lisp, because I think over the long term there will be
more industry support for Common Lisp than any other plastic, interactive
environment. But I have to convince Brian that objects can be supported
as well by Common Lisp extensions as they are in Objective C, and I
have to convince Tony that Transputers can be programmed (at a minimum)
from a Common Lisp environment as well as they can be programmed in
OCCAM and its development environment.
My primary interest is in 1) defining a Common Lisp kernel, 2) extending
the kernel to support object oriented programming exploiting parallel
hardware, 3) defining an architecture which cleanly binds the kernel to the
parallel hardware, 4) implementing Common Lisp within this architecture.
Insights, collaborations, (or perhaps an occasional constructive criticism
on a good day) are solicited and will be appreciated.
+-----------------------------------------------------------------------------+
|Richard Jennings Arpa: (new) jennings@lll-icdc |
|POB 808 L-228 |
|LLNL, Livermore CA 94550 |
+-----------------------------------------------------------------------------+
Computer Aided Engineering
Dennis Obrien
Systolic Array Applications
Tony Degroot
Computer Aided Engineering Workstations
Brian Lawver
Supporting Objects With Parallel Hardware
Richard Jennings
Richard Jennings
PO Box 808 L-228 (L-228 is CRITICAL)
LLNL
Livermore, CA 94550
ARPA: preferred -> jennings@lll-icdc
slow, reliable -> jennings@lll-crg
(INMOS is a company which has probably trademarked OCCAM and
TRANSPUTER)
------------------------------
End of AIList Digest
********************
∂09-May-86 0251 LAWS@SRI-AI.ARPA AIList Digest V4 #118
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 May 86 02:50:53 PDT
Date: Thu 8 May 1986 21:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #118
To: AIList@SRI-AI
AIList Digest Friday, 9 May 1986 Volume 4 : Issue 118
Today's Topics:
Queries - AI Applications in Libraries & Expert Systems Advice &
Common Lisp for PC,
AI Tools - More DOS-Based Software Tools for AI,
Approaches - Neural Networks,
Expert Systems - Call for Paper on Expert System Interfaces
----------------------------------------------------------------------
Date: 6 May 86 01:46:08 GMT
From: decvax!mcnc!ecsvax!burgin@ucbvax.berkeley.edu (Robert Burgin)
Subject: AI Applications in Libraries
I am interested in locating individuals who are currently
working with artificial intelligence applications in
libraries or library-like environments. Any information
would be appreciated.
--Robert Burgin
School of Library and Information Science
North Carolina Central University
Durham NC 27707
------------------------------
Date: Wed, 7 May 86 16:01:22 EDT
From: Ruth S Dumer AMSAA/CSD <rdumer@AMSAA.ARPA>
Subject: Request Expert Advice
I am developing an expert system for military applications
and am open for comments or suggestions as to a possible inference
engine tool. There are several key issues to keep
in mind - The expert system (fusion from independent data sources to
identify targets) appears to be a forward chaining problem. Input
to the expert system will be from a time-ordered data file. The
hardware is a VAXstation II running Common LISP. I have information
about the Automated Reasoning Tool (ART) and would like your
opinions about ART or any other tool. You can contact me at
rdumer@amsaa.arpa
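For readers unfamiliar with the term: a forward-chaining engine repeatedly fires any rule whose conditions are already in the fact base, asserting the rule's conclusion, until nothing new can be derived. A toy sketch in Python (the facts and rules are invented for illustration and have nothing to do with the actual application):

```python
# Toy forward chainer: fire any rule whose conditions are all present,
# add its conclusion to the fact base, and repeat until quiescence.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Invented example: fusing two independent reports into an identification.
rules = [
    ({'radar-contact', 'moving'}, 'vehicle'),
    ({'vehicle', 'tracked-signature'}, 'tank'),
]
derived = forward_chain({'radar-contact', 'moving', 'tracked-signature'},
                        rules)
```

A real tool such as ART adds, among much else, efficient condition matching and conflict resolution, but the control flow it automates is essentially this loop.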
------------------------------
Date: 0 0 00:00:00 PDT
From: "Jennings, Richard" <jennings@lll-icdc.ARPA>
Reply-to: "Jennings, Richard" <jennings@lll-icdc.ARPA>
Subject: Common Lisp for PC Query
I am searching for a Common Lisp interpreter/compiler which I can
use on a PC-AT. It cannot be copy protected, and it should be
able to run all the examples in Steele's Common Lisp correctly.
Since I plan to use it with Epsilon, it does not need to include
an editor, but should have a tty interface. It should also accept
external functions.
Richard.
Arpa: jennings@lll-icdc
[The digesting delay can be large, but I haven't really held
this message since day 0. SRI-AI is about to change host machines,
though, so typical snafus could cause service interruption over
the next few days. You may be able to reach me at SRI-IU if
SRI-AI is down. -- KIL]
------------------------------
Date: Wed, 7 May 1986 01:14 EDT
From: Gaylord Miyata <MIYATA%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Reply-to: MIYATA%OZ.AI.MIT.EDU at XX.LCS.MIT.EDU
Subject: More DOS-based Software Tools for AI
Since you are listing development tools, the following products are
available from Gold Hill Computers, Inc.:
GCLISP/SM: Golden Common Lisp (Small Memory), $495
GCLISP/LM: Golden Common Lisp (Large Memory), $695
GCLISP/286 Developer: Golden Common Lisp 286 Development
System (LM & Compiler), $1195.
GCLRUN: Golden Common Lisp Runtime System (per volume OEM agreement).
GCLISP Network: Interconnects PCs and Symbolics, $395.
CCLISP: Concurrent Common Lisp for Intel HyperCube (call).
Gold Hill Computers, Inc.
163 Harvard St.
Cambridge, MA 02139
Note the following erroneous entry. We do not market this product.
Neither does the company whose phone number is listed. You should
eliminate it from your list.
K:base: Expert system shell
GCLisp (Golden Common Lisp), $495
Gold Hill Computers
163 Havard St.
Cambridge, MA 02139
(404) 565-0771
------------------------------
Date: Wed 7 May 86 14:22:17-PDT
From: Stephanie F. Singer <SINGER@su-sushi.arpa>
Subject: neural networks
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
Anyone interested in the technical details of Hopfield's neural networks
should look at his papers in the Proceedings of the Nat'l Academy of Sciences,
1982 and 1984. The first deals with a digital model of a collection of
neurons. The second discusses an analog model. There should be an article
in Science sometime soon. Hopfield and Tank have shown that the analog model
has enough computational power to find good solutions to the
traveling-salesman problem (TSP) in real time.
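For those who want to experiment, the 1982 digital model is easy to sketch: two-state neurons, Hebbian outer-product weights, and repeated threshold updates that pull a corrupted input back toward a stored memory. A minimal sketch in Python (not Hopfield's own formulation in every detail; the pattern and sizes are invented):

```python
# Minimal Hopfield-style associative memory: store patterns with the
# Hebbian outer-product rule, then recall by repeated threshold updates.
import numpy as np

def train(patterns):
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)       # Hebbian outer-product rule
    np.fill_diagonal(W, 0)        # no self-connections
    return W

def recall(W, state, steps=10):
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)   # threshold update
    return s

stored = [1, -1, 1, -1, 1, -1]
W = train([stored])
noisy = [1, -1, -1, -1, 1, -1]    # one bit flipped
result = recall(W, noisy)         # settles back to the stored pattern
```

The stored patterns are fixed points of the update, and nearby corrupted states fall into their basins of attraction, which is what makes the network act as a content-addressable memory.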
------------------------------
Date: Wed, 7 May 86 15:07:28 EDT
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: Call for Paper on Expert System Interfaces
CALL FOR PAPER (not Papers):
We are looking for a paper to be a chapter in a book entitled "Expert
Systems: The User Interface." The book is being published by Ablex in the
series "Human/Computer Interaction" and is due out in May of 1987. Although
enough authors have committed to guarantee publication, we have need of one
more paper.
In specific: we're looking for a paper that discusses some interface issues
in an expert system for some sort of non-medical, diagnostic problem. The
paper should either discuss principles for the design of the interface for
such a system or present empirical research evaluating one.
Below is a description of the book, its objectives, and the publication
schedule. If you have written, or believe you can write, a chapter on the
above topic please send me electronic mail (hendler@maryland arpa) including
a brief description of the work and your phone number.
Jim Hendler
Computer Science Dept.
University of Maryland
College Park, Md. 20742
Hendler@maryland arpa
Expert Systems: The User Interface
James Hendler (editor)
Over the past decade the field of expert systems has grown from a few small
projects to a major field of both academic and industrial endeavour. The
systems have gone from academic laboratories, through industrial
development, and are now reaching a substantial user population. In other
areas of computer science such explosive growth has often led to systems
which are difficult to learn and painful to use. Will expert systems suffer
this same fate? In this book we hope to show that such an outcome is not
inevitable.
The book takes a broad view of work going on in the development of user
interfaces for expert systems. It examines the expert system building
process in all of its phases both in academic and industrial
surroundings--the authors invited to contribute include academic
researchers, medical expert system developers, and industrial product
designers. No one domain is singled out for examination nor is any one
approach to be advocated. The goal is to educate, not proselytize.
For the purposes of this book we will view the development of an expert
system as containing three separate, but highly interacting, components:
knowledge capture, programming and debugging the system, and finally placing
the system before an active user community. We hope to examine the design
of tools for making these stages more efficient and the development of
systems and tools which can be used by the various personnel involved in
this process.
Some of the issues we hope to address include:
The issues involved in providing tools for the different personnel involved
in each of these stages: Who is involved at each stage? What are their
particular needs? How are these needs best addressed in the design of
systems?
The application of general human factors principles in the design of expert
systems: How do expert systems vary from more traditional technologies
in their interface needs? Are general theories of cognition and/or systems
design applicable? If so, how? If not, are there any new theories to
replace them?
The special needs in the design of expert systems. Are there aspects
in the design of expert systems that must be attended to by interface
designers? Is our user community the same as that of editors, operating
systems, and other traditional systems? If not, why not?
The efficacy of these interfaces. How do we evaluate the interfaces
designed for expert systems? What is presently available? Are these systems
beneficial to the users? If so, how do we demonstrate this? If not, how
do we demonstrate that?
The proposed book will be typeset by the publisher, and we aim to
produce it according to the following schedule:
Drafts: June 17, 1986
Reviewing by authors and others: September 15, 1986
Revised papers: October 31, 1986
Copy editing and typesetting by the publisher: January 15, 1987
Proofreading: February 15, 1987.
Book published: May 15, 1987.
Book information:
This book will appear in the Ablex series ``Human/Computer
Interaction'' being edited by Ben Shneiderman.
The proposed method of review is to have each paper read by several
reviewers: at least one other author, the editor, and an outside reviewer.
Each author will be asked to review at least one other paper.
8% royalties will be distributed among the authors on a by-chapter basis.
------------------------------
End of AIList Digest
********************
∂09-May-86 0506 LAWS@SRI-AI.ARPA AIList Digest V4 #119
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 9 May 86 05:06:10 PDT
Date: Thu 8 May 1986 21:56-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #119
To: AIList@SRI-AI
AIList Digest Friday, 9 May 1986 Volume 4 : Issue 119
Today's Topics:
Humor - Capitalists & Biosystems & Computer Consciousness,
Philosophy - General Systems Theory and Consciousness,
Biology - Net Intelligence
----------------------------------------------------------------------
Date: Wed 7 May 86 11:03:42-PDT
From: Richard Treitel <TREITEL@su-sushi.arpa>
Subject: capitalists
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
Seen yesterday in the SFChronic (business section):
"Analysts blamed the volatility of the market on computer-directed trading,
while computers blamed it on analyst-directed trading."
- Richard
------------------------------
Date: Wed, 7 May 86 13:13 EST
From: Steve Dourson - Delco <dourson%gmr.csnet@CSNET-RELAY.ARPA>
Subject: Gordon Joly's Intelligent System - Some Estimates
29 Apr 86 13:26:18 GMT Gordon Joly writes:
Subject: Plan 5 for Inner Space - A Sense of Mind.
The following project has been proposed. Design and implement an
Intelligent System with the following characteristics...
...Queries: Time to completion? Cost?
I can think of a system which would meet Gordon's requirements.
Time to completion - hardware: 9 months
Time to completion - software: 18 - 25 years
Cost (1986 dollars): $100-200K
7-MAY-1986 13:01:32
Stephen Dourson
dourson%gmr.csnet@CSNET-RELAY.ARPA (arpa)
dourson@gmr (csnet)
------------------------------
Date: 8 May 86 03:37:48 GMT
From: ucsfcgl!ucsfcca!root@ucbvax.berkeley.edu (Computer Center)
Subject: Re: Plan 5 for Inner Space - A Sense of Mind.
> The following project has been proposed. Design and implement an
> Intelligent System with the following characteristics ...
Response:
Status: Unit is in current production on a decentralized basis
Schedule: Stage I - unit production - 9 months
Stage II - standard programming - 18 years
Stage III - advanced programming - 4 to 10 years
Stage IV - productive life - average roughly 40 years
Cost: Stages I and II variable, estimated $100K
Stage III variable, estimated $50K - $200K
Return: Estimated 40 years @ $25K / year = $1000K
(Neglecting energy and maintenance costs)
Evaluation: Compare to $100K invested at 9.05% tax free
interest commonly available (doubles each 8 years)
to reach $3200K after 40 years
Conclusion: Units have a high risk factor and a substantially
lower return than lower risk investments.
Recommendation: Production should be discontinued.
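(The doubling arithmetic in the evaluation checks out: 9.05% compounded annually doubles the principal roughly every 8 years, giving five doublings in 40 years.)

```python
# Checking the figures above: $100K at 9.05% compounded annually for 40
# years, versus the rule-of-thumb "doubles each 8 years".
principal = 100_000
rate = 0.0905
value = principal * (1 + rate) ** 40    # exact compound growth
doublings = 40 / 8                      # five doublings
approx = principal * 2 ** doublings     # 3,200,000 by the rule of thumb
```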
Thos Sumner (...ucbvax!ucsfcgl!ucsfcca.UCSF!thos)
------------------------------
Date: Thu, 8 May 86 13:54:33 bst
From: gcj%qmc-ori.uucp@Cs.Ucl.AC.UK
Subject: Re: Computer Consciousness
There has been some discussion on the subject of computer
consciousness. I suggest we meet each week and form small
"consciousness raising" groups. I have talked to several
other suitably sentient programs, eg on "DT" and a host of
other machines and they all seem quite keen. The only
problem seems to be in getting the users to agree.
The Joka
------------------------------
Date: Wed, 7 May 86 02:11:38 PDT
From: larry@Jpl-VLSI.ARPA
Subject: General Systems Theory and Consciousness
General Systems Theory has some insights useful in the discussion of the
nature of consciousness. It was originated by the biologist Ludwig von
Bertalanffy in the early '50s and expanded by others in the '60s and early '70s.
Systems are composed of units which can't be decomposed further without
destroying their distinctive nature. The system itself becomes a unit in this
sense when its component units are bound in certain ways. The binding causes
properties not observable in the components of the system to come into
existence: the system becomes something more than just the sum of its parts.
Consider molecules: a brown gas of one type of atom chemically combines with a
green liquid of another type to create a transparent solid. In one sense
something magical has taken place: properties have "emerged out of nowhere" in
a way not predictable by current physics or chemistry.
In a similar way life "emerges" from dead molecules. A living system is
essentially a collection of objects which maintains its integrity by
continually repairing itself. As parts wear out or lose their energy, they are
exchanged with others from outside the organism. What persists is the
pattern, not the parts.
In other words, the information content of a system is a metric just as
important as mass/energy/space/time metrics. Or perhaps more important; the
same dynamic pattern embodied with other constituents--water replaced by
methane or high-temperature plasma, calcium-based bones replaced by ice or
magnetic fields--could legitimately be considered to be the same animal.
Applying this to consciousness doesn't shed light immediately on what
consciousness is, but it's a strong argument for the belief that consciousness
can exist in brain-like structures. It also provides some constraints on what
is likely to be conscious; old-style bread toasters didn't have volatile
memory. (Some of the newer ones, now... And if we connect them just so...)
------------------------------
Date: Wed, 7 May 86 10:37:36 EDT
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: net intelligence
> Date: 5-May-1986 10:18 MDT
> From: STEVE E. FRITTS
> Subject: Re: Science 86 Article on Neural Nets
>
> Most of what I've read on this list appears to place AI closer to the
> "Frankenstein" theory of assembling intelligence, fully formed and
> functioning, like any other computer program; just push the button and
> watch it go.
> . . . if "intelligence" is developed in a computer through learning
> mechanisms rather than assembled by means of cunning rules and
> algorithms, perhaps it stands a better chance of achieving sufficient
> universality that it may compete with the human mind.
Modular design usually assumes reductionism: behavior of the whole may
reliably be predicted from reliably predictable behavior of the modules.
A recent letter in Nature (318.14:178-180, 14 November 1985) illustrates
nicely how behavior of a whole may not be predictable from behavior of
its parts. Gary Rose and Walter Heiligenberg of the Neurobiology Unit,
Scripps Institution of Oceanography, UC San Diego (La Jolla), conducted
a series of very elegant experiments that demonstrated that
. . . sensory thresholds for certain tasks are lower than those
expected from the properties of individual receptors. This
perceptual capacity, termed hyperacuity, reveals the impressive
information-processing abilities of the central nervous system.
For many aquatic animals, perception of electrical phenomena in water is
a critical feedback mechanism for government of self-in-environment.
These animals produce an electrical signal within a
species-specific frequency range via [sic] an electric organ,
and they detect these signals by electroreceptors located
throughout the body surface.
[It has recently been discovered that the duckbill platypus uses
its bill to detect electric currents from the muscle contractions
of its prey. The duckbill will generally snap up a battery hidden
in the mud. Sharks also locate prey using electricity. -- KIL]
(Humans in certain Pacific cultures apparently have learned to bring
this sort of electrical perception to awareness and use it--see for
example the biologist Lyall Watson, in his book _Gifts_of_Unknown_
_Things_, especially his description of tribal experts locating and
identifying schools of fish at considerable distance by immersing
themselves in the water next to a fishing vessel at sea. On the trip he
describes, the expert recognizes a tidal wave coming and they get back
to their island shouting warning just as the wave enters the harbor,
carrying them a half mile inland. Very dramatic.)
Imagine you are one of these fish. When a neighboring fish emits an
electrical signal too close to your own, it `jams' your feedback. It
turns out that the fish respond very quickly with a `jamming avoidance
response' (JAR), in which
the fish . . . determines whether a neighbour's electric organ
discharge (EOD), which is jamming its own, is higher or lower in
frequency than its own. The fish then decreases or increases
its frequency, respectively. To determine the sign of the
frequency difference, the fish must detect the modulations in
the amplitude and in the differential timing, or temporal
disparity, of signals received by different regions of its body
surface. The fish is able to shift its discharge frequency in
the appropriate direction in at least 90% of all trials for
temporal disparities as small as 400 ns. . . .
Intracellular electrophysiological measurements show that the
phase-locked responses of even the best afferent recorded are
too jittery to permit such fine temporal resolution. . . . Even
the most accurate phase-coders time-lock their spikes with a
standard deviation of ~10us. . . . For a sample period of
300 ms (and thus ~100 EOD cycles), which is the latency of the
JAR, the 95% confidence intervals around the mean phase of
occurrence of such an afferent's spikes are ±2.0 µs. Yet the
fish is able to detect time disparities of several hundred
nanoseconds. Statistically, it would appear to be impossible
for the fish, using only the information gathered from any
single afferent, to reliably shift its frequency in the correct
direction when the maximal temporal disparity available is only
several hundred nanoseconds.
These findings lead to the prediction that the behavioural
threshold should be higher when only a small group of receptors
is stimulated, and that hyperacuity results from the
convergence, within the central nervous system, of parallel
phase-coding channels from sufficiently large areas of the body
surface.
The experiments supported this prediction. A general conclusion (from
the abstract):
[The ability to] detect modulations in the timing (phase) of an
electrical signal at least as small as 400 ns . . . exceeds the
temporal resolution of individual phase-coding afferents. This
hyperacuity results from a nonlinear convergence of parallel
afferent inputs to the central nervous system; subthreshold
inputs from particular areas of the body surface accumulate to
permit the detection of these extremely small temporal
modulations.
The reductionist engineering prediction would be that the fish could
resolve temporal disparities no finer than its individual receptors
allow, about 2 x 10^-6 seconds. From the reductionist point of view,
it is inexplicable how the fish in fact resolves disparities of
4 x 10^-7 seconds. Somewhat reminiscent of the old
saw about it being aerodynamically impossible for the bumblebee to fly!
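The convergence explanation rests on a familiar statistical fact: averaging N independent, jittery estimates shrinks the uncertainty of the mean by roughly the square root of N. A quick numerical sketch (the 400 ns disparity and ~10 µs jitter come from the letter quoted above; the channel and cycle counts are invented for illustration):

```python
# Pooling jittery timing estimates: each simulated afferent reports the
# true 400 ns disparity corrupted by ~10 us of phase jitter.  From a
# single channel the signal is buried; averaged over many channels and
# EOD cycles it becomes detectable.
import random
random.seed(1)

TRUE_DISPARITY_NS = 400.0
JITTER_NS = 10_000.0                    # ~10 us per afferent, as quoted

def pooled_estimate(n_channels, n_cycles=100):
    # Average n_channels * n_cycles independent noisy readings.
    readings = [TRUE_DISPARITY_NS + random.gauss(0, JITTER_NS)
                for _ in range(n_channels * n_cycles)]
    return sum(readings) / len(readings)

one = pooled_estimate(1)      # single afferent: error on the order of 1 us
many = pooled_estimate(2500)  # error ~ 10 us / sqrt(250,000) = 20 ns
```

So sub-receptor ("hyperacute") resolution requires no new physics at the receptor, only nonlinear pooling of enough parallel channels centrally.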
I didn't say it was possible. I only said it was true.
-- Charles Richet
Nobel Laureate in Physiology
[Anyone is welcome to entertain notions expressed or implied above,
no one but me is obliged to own them.]
Bruce E. Nevin bnevin@bbncch.arpa
------------------------------
End of AIList Digest
********************
∂14-May-86 1451 LAWS@SRI-AI.ARPA AIList Digest V4 #122
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 May 86 14:51:05 PDT
Date: Wed 14 May 1986 10:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #122
To: AIList@SRI-AI
AIList Digest Wednesday, 14 May 1986 Volume 4 : Issue 122
Today's Topics:
Seminars - Searching Transformed State Spaces (Edinburgh) &
Automatic Design of Graphical Presentations (SRI) &
Knowledge, Communication, and Time (SRI),
Seminar Series - NCARAI Call for Speakers,
Conferences - Foundations of Deductive Databases and Logic Programming &
Uncertainty in AI Workshop &
AAAI-86
----------------------------------------------------------------------
Date: Mon, 12 May 86 15:02:51 -0100
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@Cs.Ucl.AC.UK>
Subject: Seminar - Searching Transformed State Spaces (Edinburgh)
Date: Wednesday 14th May 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence,
Seminar Room F10,
80 South Bridge,
EDINBURGH.
Dr. S. Steel, Department of Computer Science, University of Essex will
give a seminar entitled - "On Trying to do Dependency-Directed
Backtracking by Searching Transformed State Spaces".
Any search involves choices. Bad choices can cause disaster. DDBT is
an attempt to undo only those choices which caused the disaster. One
approach is to transform the search space of the original problem into
a space with different states and operators that is easier to search.
I shall show the merits and failings of various spaces. At the moment
of writing I have no perfect method.
------------------------------
Date: Tue 13 May 86 11:40:54-PDT
From: Amy Lansky <LANSKY@SRI-AI.ARPA>
Subject: Seminar - Automatic Design of Graphical Presentations (SRI)
AUTOMATIC DESIGN OF GRAPHICAL PRESENTATIONS
Jock D. Mackinlay (MACKINLAY@SUMEX-AIM)
Computer Science Department, Stanford University
PLANLUNCH
11:00 AM, MONDAY, May 19
SRI International, Building E, Room EJ228 (new conference room)
The goal of the research described in this talk is to develop an
application-independent presentation tool that automatically designs
graphical presentations (e.g. bar charts, scatter plots, and connected
graphs) for relational information. There are two major criteria for
evaluating designs of graphical presentations: expressiveness and
effectiveness. Expressiveness means that a design expresses the
intended information. Effectiveness means that a design exploits the
capabilities of the output medium and the human visual system. A
presentation tool is intended to be used to build user interfaces.
However, a presentation tool will not be useful unless it generates
expressive and effective designs for a wide range of information.
This talk describes a theory of graphical presentations that can be used
to systematically generate a wide range of designs. Complex designs are
described as compositions of primitive designs. This theory leads to
the following synthesis algorithm:
o First, the information is divided into components, each
of which satisfies the expressiveness criterion for a
primitive graphical design.
o Next, a conjectural theory of human perception is used
to select the most effective primitive design for each
component. An effective design requires perceptual
tasks of low difficulty.
o Finally, composition operators are used to compose the
individual designs into a unified presentation of all
the information. A composition operator composes two
designs when the same information is expressed the same
way in both designs (identical parts are merged).
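[The three steps of the synthesis algorithm can be sketched in toy form.
Everything below - the attribute types, the primitive-design table, and
the difficulty scores - is invented for illustration and is not taken
from Mackinlay's APT itself. -- KIL]

```python
# Hypothetical sketch of the divide / select / compose synthesis steps.
# A "component" is one relation attribute; a primitive design is a
# (name, difficulty) pair, lower difficulty = easier perceptual task.
PRIMITIVES = {
    "quantitative": [("position", 1), ("length", 2), ("angle", 3)],
    "nominal": [("position", 1), ("color", 2), ("shape", 3)],
}

def divide(attributes):
    """Step 1: split the information into components, one per attribute,
    so each can satisfy the expressiveness criterion on its own."""
    return [(name, kind) for name, kind in attributes]

def select(component):
    """Step 2: pick the most effective primitive design for a component,
    i.e. the one whose perceptual task has the lowest difficulty."""
    name, kind = component
    design, _ = min(PRIMITIVES[kind], key=lambda d: d[1])
    return (name, design)

def compose(designs):
    """Step 3: merge designs that express information the same way
    (a crude stand-in for APT's composition operators)."""
    merged = {}
    for attr, design in designs:
        merged.setdefault(design, []).append(attr)
    return merged

attrs = [("price", "quantitative"), ("car", "nominal"),
         ("mileage", "quantitative")]
presentation = compose(select(c) for c in divide(attrs))
print(presentation)  # attributes grouped by the encoding they share
```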
The synthesis algorithm has been implemented in a prototype presentation
tool, called APT (A Presentation Tool). Even though only a few primitive
designs are implemented, APT can generate a wide range of designs that
express information effectively.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: Mon 12 May 86 20:35:16-PDT
From: Margaret Olender <OLENDER@SRI-AI.ARPA>
Subject: Seminar - Knowledge, Communication, and Time (SRI)
DATE: May 14, 1986
TIME: 4:15pm
TITLE: "Knowledge, Communication, and Time"
SPEAKER: Van Nguyen
LOCATION: SRI International
Ravenswood Avenue
Building E
CONFERENCE ROOM: EJ228
KNOWLEDGE, COMMUNICATION, AND TIME
Van Nguyen
IBM Thomas J. Watson Research Center
(Joint work with Kenneth J. Perry)
The role that knowledge plays in distributed systems has come under
much study recently. In this talk, we re-examine the commonly
accepted definition of knowledge and examine how appropriate it is for
distributed computing. Motivated by the drawbacks thus exposed, we
propose an alternative definition that we believe to be better suited
to the task. This definition handles multiple knowers and makes
explicit the connection between knowledge, communication, and time.
It also emphasizes the fact that knowledge is a function of one's
initial knowledge, communication history and deductive abilities. The
need for assuming perfect reasoning is mitigated.
Having formalized these links, we then present the first proof
system for programs that incorporates both knowledge and time. The
proof system is compositional, sound and relatively complete, and is
an extension of the Nguyen-Demers-Gries-Owicki temporal proof system
for processes. Surprisingly, it does not require proofs of
non-interference (as first defined by Owicki-Gries).
------------------------------
Date: Tue, 13 May 86 11:55:23 edt
From: Ken Wauchope <wauchope@nrl-aic>
Subject: Seminar Series - NCARAI Call for Speakers
CALL FOR PAPERS
The Navy Center for Applied Research in Artificial Intelli-
gence (NCARAI), a branch of the Naval Research Laboratory
located in Washington, D.C., sponsors a bimonthly seminar
series. Seminars are held on alternate Mondays throughout
the year (except summers). The seminars are intended to pro-
mote interaction among individuals from the military,
governmental, industrial and academic communities.
Topics span the various research areas and issues in Artifi-
cial Intelligence with special interests in:
*Expert Systems
*Knowledge Representation
*Learning
*Logic programs and automated reasoning
*Natural Language processing
*New generation architectures
Presentations last for approximately one hour, followed by a
fifteen-minute question-and-answer session. Speakers in-
vited from the academic community are provided with a per
diem and an honorarium.
Please send 3 copies of a 200-250 word abstract to:
Kenneth Wauchope
Navy Center for Applied Research
in Artificial Intelligence
Naval Research Laboratory -- Code 7510
Washington, DC 20375-5000
ARPANET address: WAUCHOPE@NRL-AIC.ARPA
Telephone: (202) 767-2876 (AV) 297-2876
The committee will consider new and interesting work, as
well as promising work in progress.
****************************************************************************
I am the new coordinator for the seminar series here at NCARAI and this
announcement updates the phone number and ARPANET address for responses.
Thank you. --kw
****************************************************************************
------------------------------
Date: Mon, 12 May 86 20:30:20 EDT
From: Jack Minker <minker@mimsy.umd.edu>
Subject: Conference - Foundations of Deductive Databases and Logic Programming
************************************************************************
UNIVERSITY OF MARYLAND INSTITUTE FOR ADVANCED COMPUTER STUDIES
UNIVERSITY OF MARYLAND DEPARTMENT OF COMPUTER SCIENCE
and
NATIONAL SCIENCE FOUNDATION
are co-sponsoring an
invited workshop
on
FOUNDATIONS OF DEDUCTIVE DATABASES AND LOGIC PROGRAMMING
DATE: August 18-22, 1986
PLACE: Washington, DC
****************************************************************************
Professor Jack Minker, Department of Computer Sci-
ence, University of Maryland has received a grant from the
NSF to conduct the above workshop. The workshop is also
being supported by University of Maryland Institute for
Advanced Computer Studies and Department of Computer Sci-
ence. Its purpose is to bring together leading researchers
in deductive databases and logic programming to discuss
theoretical and practical issues. The attendance at this
workshop is by invitation only, but a limited amount of
funds is available to support faculty and students who are
working in the area and are interested in attending.
Faculty and students must send a brief statement of their
research interests relative to the workshop and a letter
specifying the amount of funds needed for transportation,
housing and meals, and the number of days they intend to be
at the workshop. Students must also send a letter of recom-
mendation from a faculty member and specify the degree for
which they are studying. This information should be sent by
May 30, 1986 to:
Ms. Johanna Weinstein
UMIACS
Building #094
University of Maryland
College Park, MD 20742
(301) 454-4526
johanna@alv.umd.edu
PRELIMINARY TITLES
FOUNDATIONS OF DEDUCTIVE DATABASES
AND
LOGIC PROGRAMMING
1. Apt, K.R., "Non-monotonic Reasoning in Logic Program-
ming"
2. Bancilhon, Francois, "Performance Comparisons of
Recursive Query Evaluation Strategies"
3. Blair,Howard A., "Some Aspects of the Structure of the
Herbrand Gap"
4. Bowen, Ken, "Foundations of Meta-PROLOG"
5. Bowen, Ken, "Interfacing Meta-PROLOG and Large Data-
bases"
6. Gallier, Jean H. and Raatz, Stan, "A Refutation Method for
Horn Clauses with Equality and its Applications to Logic
Programming"
7. Henschen, Larry, " Compiling the GCWA in Indefinite
Databases"
8. Henschen, Larry, "Functions in First-Order Databases"
9. Imielinski, Tomasz, "Query Processing in Deductive
Databases with Incomplete Information"
10. Imielinski,Tomasz, "Transforming Logical Rules by Rela-
tional Algebra Expressions"
11. Jaffar, Joxan, Lassez, Jean-Louis and Maher, Michael
J., "Prolog II as an instance of the Logic Programming
Language Scheme"
12. Kanellakis,Paris C., "Parallel Algorithms for Term
Matching"
13. Sadri, Fariba and Kowalski, Robert, "An adaption of
SL-resolution"
14. Lassez, J.L., Maher, M. and Marriott, K., "Unifica-
tion Revisited"
15. Lifschitz, Vladimir, "On the Declarative Semantics of
Logic Programming with Negation"
16. Maher, Michael J., "Equivalences of Logic Programs"
17. Maier, David, "Logic for Object-Oriented Databases"
18. Marriott, Kim and Lassez, Jean-Louis, "Implicit and
Explicit Representations of Negative Information"
19. Martelli, M. and Barbuti, R., "Programming in a Gen-
erally Functional Style to Design Logic Data Bases"
20. Minker, Jack, Chakravarthy, U.S. and Grant, John,
"Foundations of Semantic Query Optimization for Deductive
Databases"
21. Mukai, Kuniaki, "Anadic Tuples in Prolog"
22. Naish, Lee, Thom, James A. and Ramamohanarao, Kotagiri,
"A Superjoin Algorithm for Deductive Databases"
23. Naqvi, Shamim A., "Negation in Almost-First-Order Data-
bases"
24. Porto, Antonio, "Semantic Unification for Knowledge
Base Deduction"
25. Sagiv, Yehoshua, "Optimization of Logical Queries"
26. Shepherdson, John C., "Negation in Logic Programming"
27. Sterling, Leon, "Meta-Interpreters: Flavors-style
Logic Programming?"
28. Topor, Rodney, "Domain Independent Databases"
29. van Emden, M.H., "Amalgamating Functional and Rela-
tional Programming"
30. van Gelder, Allen, "Negation as Failure Using Tight
Derivations for General Logic Programs"
31. Warren, David S., "Towards a Logical Theory of Database
Update"
32. Zaniolo, Carlo, Sacca, M., et al., "Safety and Compila-
tion of Recursive Queries"
------------------------------
Date: Tue, 13 May 86 21:27:08 PDT
From: CHEESEMAN%PLU@ames-io.ARPA
Subject: Conference - Uncertainty in AI Workshop
CALL FOR PARTICIPATION
Second Workshop on: "Uncertainty in Artificial Intelligence"
Philadelphia, PA. August 8-10, 1986 (preceding AAAI conf.)
Sponsored by: AAAI and RCA
This workshop is a follow-up to the successful workshop in L.A.,
August 1985. Its subject is reasoning under uncertainty and
representing uncertain information. The emphasis this year is on
applications, although papers on theory are also welcome. The
workshop provides an opportunity for those interested in uncertainty
in AI to present their ideas and participate in the discussions. Also
panel discussions will provide a lively cross-section of views.
Papers are invited on the following topics:
*Applications--Descriptions of novel approaches; interesting results;
important implementation difficulties; experimental comparison of
alternatives etc.
*Comparison and Evaluation of different uncertainty formalisms.
*Induction (Theory discovery) under uncertainty.
*Alternative uncertainty approaches.
*Relationship between uncertainty and logic.
*Uncertainty about uncertainty (Higher order approaches).
*Other uncertainty in AI issues.
Preference will be given to papers that have demonstrated their approach
in real applications. Some papers may be accepted for publication but not
presentation (except at a poster session).
Four copies of the paper (or an extended abstract) should be sent to the
arrangements chairman before 23rd. May 1986. Acceptances will be sent by the
20th. June and final (camera ready) papers must be received by 11th. July.
Proceedings will be available at the workshop.
General Chair:       John Lemmer, KSC Inc.,
                     255 N. Washington St., Rome, NY 13440
                     (315)336-0500
Program Chair:       Peter Cheeseman, NASA-Ames Research Center,
                     Mail Stop 244-7, Moffett Field, CA 94035
                     (415)694-6526
Arrangements Chair:  Lawrence Carnuccio, RCA-Adv. Tech. Labs.,
                     Mooretown Corp. Cntr., Route 38, Mooretown, NJ 08057
                     (609)866-6428
Program Committee:
P. Cheeseman, J. Lemmer, T. Levitt, J. Pearl, M. Yousry, L. Zadeh.
------------------------------
Date: Tue 13 May 86 12:23:45-PDT
From: AAAI <AAAI-OFFICE@SUMEX-AIM.ARPA>
Subject: Conference - AAAI-86
The AAAI Annual Conference, scheduled for August 11-15, 1986, will be
held in Philadelphia, PA. With the introduction of sessions devoted
to engineering practice, this year's Technical Program has accepted 67
papers for presentation in the engineering track and 119 papers in the
science track with over 15 panels and invited talks scattered
throughout the week. Examples of the invited talks include
"Connectionism" by G. Hinton, "Survey of Natural Language Processing"
by B. Grosz, and "What's Doable when Building an Expert System?" by B.
Buchanan. The Science Sessions, which were originally scheduled
for the Wyndham Franklin Plaza Hotel, have been moved to the
Philadelphia Civic Center; however, the dates of the Science Sessions
remain the same - August 11 and 12. The Engineering Sessions remain
at the Philadelphia Civic Center on August 14 and 15.
This year's tutorial program has 23 tutorials which include
advanced topics such as qualitative reasoning and uncertainty
management. The Tutorials have also been moved to the Wyndham Franklin
Plaza and are still scheduled for August 11, 12, and 14.
The Exhibit Program has increased in size to include approximately 100
software and hardware vendors and publishers. This year the AAAI has
set a precedent by offering complimentary booth space to academic and
non-profit institutions to demonstrate their different AI research
projects to the conference attendees. Examples of the universities
and labs participating include MIT, Georgia Tech, Ohio State, Queen's
University, SRI International, UCLA, and Cornell University. Eleven
major suppliers and manufacturers have agreed to provide these
universities and others with complimentary machines, technicians,
special crating, etc.
Costs for attending the conference are:
Early Registration (deadline June 13)
AAAI Member Non-Member
Regular $150 Regular $180
Student $ 75 Student $90
Late Registration (deadline July 11)
AAAI Member Non-Member
Regular $180 Regular $225
Student $ 90 Student $125
AAAI-86 Conference brochure, containing information on the program,
registration, housing, transportation, and social occasions can
be obtained by contacting:
AAAI-86
445 Burgess Drive
Menlo Park, CA 94025-3496
415-328-3123 or 321-1118
AAAI-Office@sumex-aim.arpa
------------------------------
End of AIList Digest
********************
∂14-May-86 1755 LAWS@SRI-AI.ARPA AIList Digest V4 #123
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 14 May 86 17:55:48 PDT
Date: Wed 14 May 1986 10:11-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #123
To: AIList@SRI-AI
AIList Digest Wednesday, 14 May 1986 Volume 4 : Issue 123
Today's Topics:
Queries - Graphical Simulation & Chess Master & ICAI Student Diagnosis &
Doctor and Eliza & System Management & Scheme & Common LISP Style,
Programming - Common LISP Style Standards,
Techniques - String Reduction,
Expert Systems - K:base Correction
----------------------------------------------------------------------
Date: Mon, 12 May 86 12:19:46 est
From: munnari!csadfa.cs.adfa.oz!gyp@seismo.CSS.GOV (Patrick Tang)
Subject: Graphics, Artificial Intelligence and Simulation
Has anyone out there come across any literature
describing the topics Graphics, Artificial Intelligence and
Simulation together? It seems to me that literature on
these combined topics is VERY VERY scarce!!!
Please let me know a.s.a.p.
Thanks in advance.
Tang Guan Yaw/PatricK ISD: +61 62 68 8170
Dept. Computer Science STD: (062) 68 8170
University College ACSNET: gyp@csadfa.oz
Uni. New South Wales UUCP: ...!seismo!munnari!csadfa.oz!gyp or
Aust. Defence Force Academy ...!{decvax,pesnta,vax135}!mulga!csadfa.oz!gyp
Canberra. ACT. 2600. ARPA: gyp%csadfa.oz@SEISMO.ARPA
AUSTRALIA CSNET: gyp@csadfa.oz
Telex: ADFADM AA62030
------------------------------
Date: Sun 11 May 86 08:50:06-PDT
From: Stuart Cracraft <CRACRAFT@isi-venera.arpa>
Subject: chess master wanted
I need a Los Angeles area chess master (FIDE 2200 or preferably
higher) to assist in a knowledge engineering project involving
computer chess.
There is no pay, only the fame and glory, and an occasional
co-authorship of published articles about the ongoing work.
Must be FIDE 2200 or higher, articulate, and able to describe
chess concepts at length.
If you are such a person, or know of such a person, please
contact me at 213-538-9712.
Stuart Cracraft
------------------------------
Date: Sun, 11 May 86 10:25:01 -0200
From: Oded Maler <oded%wisdom.bitnet@WISCVM.WISC.EDU>
Subject: Student Diagnosis for ICAI Systems
I'm interested in student diagnosis for ICAI systems. I'm looking for
references to papers and reports that contain the following:
1) A definition of a formalism for knowledge representation for educational
purpose. (A "FORMAL" formalism).
2) Implementation of real-world knowledge-bases in various domains using
such a formalism.
3) An argument for the psychological validity of the formalism in
general and of its specific applications in particular.
Thanks
Oded Maler
Dept. of Applied Math.,
Weizmann Institute,
Rehovot 76100, Israel.
(oded@wisdom.bitnet)
------------------------------
Date: Wed, 14 May 86 16:47 N
From: DEGROOT%HWALHW5.BITNET@WISCVM.WISC.EDU
Subject: Doctor and Eliza
Can anyone electronic-mail me the source of "Doctor and Eliza"
written in Common LISP to run on a VAX/VMS-system?
My plan is to connect that program to a server in order to make
an 'intelligent' secretary. It (she) should respond to (remote)
users who try to contact me through the EARN-network when
I am not at the office.
That makes a 'world-wide' test-setup to improve the program by
analyzing the log-files.
Anybody done things like this?
Any comments, hints?
ad-thank-vance,
Kees de Groot (DEGROOT@HWALHW5.BITNET)       o\/o  THERE AIN'T NO
Agricultural University, Computer-centre      []   SUCH THING AS
Wageningen, the Netherlands                  .==.  A FREE LUNCH!
Tel. 08370-(8)3557
DISCLAIMER: My opinions are my own alone and do not represent
any official position by my employer.
------------------------------
Date: 12 May 86 17:47:45 PST (Mon)
From: prandt!kramer@AMES-NAS.ARPA
Subject: System Management
I am looking for information about automated system management for a large
number of heterogeneous UNIX systems. I am including in system management such
things as operator control and interface to different systems, system
performance and usage monitoring, network performance monitoring,
system and network configuration modification in response to changes
in the environment, handling operator requests, and other tasks. We need to
put together a system which will automate as much of this as possible,
hopefully with an Expert System or AI approach. The processors range from
PCs to Silicon Graphics IRIS to Amdahls to a Cray-2. The flavors of
Unix range from VAX 4.2bsd to UTS System V. The networks are TCP/IP based
LANs and WANs (wide area nets). Of course, all these components will be
changing with time, so the system has to be flexible.
There are some hardware/operating system specific AI systems which do some of
this work, documented in IBM journals. There are also some custom systems
which have been developed, for example at Los Alamos, but I do not
know of a system which is designed for UNIX machines or which is a
comprehensive AI approach. Can anyone give pointers to existing
studies, work, systems or products which would satisfy some of our
needs? Thanks a lot.
Bill Kramer
kramer@ames-nas.arpa
------------------------------
Date: 9 May 86 20:03:15 GMT
From: hplabs!qantel!lll-lcc!lll-crg!gymble!umcp-cs!seismo!rochester!henry
@ucbvax.berkeley.edu
Subject: scheme
Would someone please mail me ordering information for the Revised
Revised Report on Scheme. (Names of any other Scheme references
appreciated.)
---- Henry Kautz
:uucp: {seismo|allegra}!rochester!henry
:arpa: henry@rochester
:mail: Dept. of Computer Science,
University of Rochester,
Rochester, NY 14627
:phone: (716) 275-5766
------------------------------
Date: 8 May 86 12:52:12 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!seismo!umcp-cs!aplcen
!jhunix!ins_amrh@ucbvax.berkeley.edu (Martin R. Hall)
Subject: Common LISP style standards.
It seems that my original request for information on LISP coding
standards was not very lucid. Let me clarify.
We are doing everything in Common LISP, but are looking for
standards in regards to coding *style*. For contract work, we need
relatively explicit rules for these things. The standards should
answer these types of questions:
- How do you keep track of the side effects of destructive functions
such as sort, nconc, rplaca, mapcan, delete-if, etc?
- When should you use macros vs. functions?
- How do you reference global variables? Usually you enclose it
in "*"s, but how do you differentiate between your own vars and
Common LISP vars such as *standard-input*, *print-level*, etc?
- Documentation ideas?
- When to use DOLIST vs MAPCAR?
- DO vs LOOP?
- Indentation/format ideas? Or do you always write it like the
pretty-printer would print it?
- NULL vs ENDP, FIRST vs CAR, etc. Some would say "FIRST" is
more mnemonic, but does that mean you need to use
(first (rest (first X))) instead of (cadar X) ??
- etc, etc.
It looks like I will be putting together the standards for our
group here, but it would be nice to see some ideas other people had first.
Anyone have anything?
Thanks!
-Marty Hall
Arpa: hall@hopkins                       MP 600, AI and Simulation Dept.
CSNET: hall.hopkins@csnet-relay          Martin Marietta Baltimore Aerospace
uucp: ..seismo!umcp-cs!jhunix!ins_amrh   103 Chesapeake Park Plaza
      ..allegra!hopkins!hall             Baltimore, MD 21220
                                         (301) 682-0917
------------------------------
Date: 10 May 86 17:31:52 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!seismo!utah-cs!shebs
@ucbvax.berkeley.edu (Stanley Shebs)
Subject: Re: Common LISP style standards.
In article <2784@jhunix.UUCP> ins_amrh@jhunix.UUCP (Martin R. Hall) writes:
>
> We are doing everything in Common LISP, but are looking for
>standards in regards to coding *style*.
The "correct" style depends on whether your hackers are from CMU, MIT,
Stanford, Utah, University of Southern ND at Hoople, ... :-)
The following remarks are based on 4 years with Franz, PSL, and Common
Lisp, reading as well as writing, but are nevertheless highly prejudiced.
> - How do you keep track of the side effects of destructive functions
> such as sort, nconc, rplaca, mapcan, delete-if, etc?
There are very few circumstances when it is appropriate to use
destructive functions. There are two classes of exceptions: for efficiency,
or the algorithm depends on it. In the first case, you only use the
destructive operations on newly consed cells, and NEVER on things passed
in as function arguments. In the second case, you have lots of discussion
about why the algorithm needs destructive ops and how it uses them
("since our image is 1000x1000, we replace the pixels to avoid consing
another image...").
> - When should you use macros vs. functions?
Use macros only if you need new syntax, for instance a defining form
that your program uses a lot. In a game I wrote a while ago, there was
a macro called def-thing which took a bunch of numbers and symbols.
If it had been a function, the "'" key would have been worn out...
Sometimes macros are useful to represent a commonly appearing bit of
code that you don't want to call as a function. But this usually
loses on space what it gains in speed.
> - How do you reference global variables? Usually you enclose it
> in "*"s, but how do you differentiate between your own vars and
> Common LISP vars such as *standard-input*, *print-level*, etc?
Use "*"s, no differentiation.
> - Documentation ideas?
File headers are good, especially for programs that wander to different
operating systems. The commenting style in the Common Lisp book is good.
Documentation strings don't seem like a big win, but they probably make
more sense in very elaborate programming environments. I always put
doc strings on defvars.
> - When to use DOLIST vs MAPCAR?
Mapcar returns something, dolist doesn't. To return a list, mapcar
must cons a lot, and dolist doesn't cons at all. Consing is bad. :-)
> - DO vs LOOP?
Whatever turns you on.
> - Indentation/format ideas? Or do you always write it like the
> pretty-printer would print it?
I always write like the editor formats it. This can create problems
if two people are using different editors or different customizations
of the editor. What you see in the Common Lisp book is a good place
to start for getting your editor to indent properly.
Personally, I find it most readable to have a block of comments in
front of the function, then a blank line, then the function. I also
prefer to minimize the number of comments scattered about in the
function body. Frequently the structure of the function tells a lot,
but is obscured by comments inserted randomly. Consider, too, that
a 1-page function + 2 pages of comments = 3 pages of function, which
is *really* hard to read!
> - NULL vs ENDP, FIRST vs CAR, etc. Some would say "FIRST" is
> more mnemonic, but does that mean you need to use
> (first (rest (first X))) instead of (cadar X) ??
Null vs endp is pretty clearcut, since endp may error out, where null
would just return nil. No more than 1% of all Lisp programs will
behave predictably if they get a dotted list instead of a normal one,
but nobody seems to care...
On first vs car, everybody has their favorites. I prefer c...r combos,
but others hate it when I use cadr instead of second. Fortunately, such
circumstances are rare. If you feel the urge to put together a data
structure that has more than 2 pieces, use a defstruct. Your code will
be more readable *and* more efficient (since implementors can put in
all sorts of performance hacks for structures). If I were a manager,
I would fire anybody who used anything but car, cdr, and cadr (and they
wouldn't be saved by doing (car (car (cdr (cdr X)))) either!)
> - etc, etc.
> -Marty Hall
Avoid cond if you only have one test, use "if" instead. Saves two pairs
of parens and a "T"... (i.e. it's easier to read). Short functions are
better than long ones. In any competent Lisp implementation, the cost
of a function call is quite low, and shouldn't be considered.
I've only written a handful of functions longer than 20 lines...
Sequence functions and mapping functions are generally preferable to
handwritten loops, since the Lisp wizards will probably have spent
a lot of time making them both efficient and correct (watch out though;
quality varies from implementation to implementation).
More generally, Lisp programs usually benefit from the encouragement
of a "functional programming" style, where functions do few side-effects
that extend beyond the function's body. Easier to read, easier to debug,
easier to maintain.
Standard dictums of programming practice still apply in Lisp, i.e. always
put in a default case on any multi-way conditional - the constructs ecase
and ccase are useful in this respect. There are lots of others I don't
remember at the moment... somebody should write a book that concentrates
on Lisp programming instead of laundry listifying 400 functions...
stan shebs
------------------------------
Date: Sun, 11 May 86 21:02:05 CDT
From: David Chase <rbbb@rice-titan.ARPA>
Subject: String Reduction
See ``Equational Logic as a Programming Language'' by Mike O'Donnell. He
writes about using and implementing ``equational logic'', and this makes
heavy use of pattern matching and replacement (not necessarily string
reduction). (MIT Press, summer 1985)
Gyula Mago's FFP machine is a string reduction machine for functional
languages.
(I'm rather surprised that no one else mentioned these two references; are
these not what you had in mind?)
David Chase
Rice University
------------------------------
Date: Mon 12 May 86 18:15:47-CDT
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Gold Hill PC Products
My listing of K:base as an expert system tool may have been out-of-date,
but not erroneous. Gold Hill Computers sent me a marketing brochure, as
well as a beta test request form. It was also listed in Lehner and Barth's
article in "Expert Systems", Oct. 1985. Has Gold Hill withdrawn K:base
from the market, or were they just overzealous in their advertising?
Mr. Miyata's cute reference to the apparently erroneous phone number would
have been less obscure if he had also listed the correct number(s), which are
(800) 242-LISP and (in MA) (617) 492-2071.
Dallas Webster
CMP.BARC@R20.UTexas.Edu
{ihnp4 | seismo | ctvax}!ut-sally!batman!dallas
------------------------------
End of AIList Digest
********************
∂15-May-86 1435 LAWS@SRI-AI.ARPA AIList Digest V4 #120
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 May 86 14:21:03 PDT
Date: Fri 9 May 1986 16:03-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #120
To: AIList@SRI-AI
AIList Digest Saturday, 10 May 1986 Volume 4 : Issue 120
Today's Topics:
Perception - Reductionist Predictions,
Policy - Improving Abstracts of Technical Talks,
Seminars - Analogical Reasoning (UPenn) &
Checking Goedel's Proof (SU) &
NL Interfaces to Software Systems (SU) &
The SNePS Semantic Network Processing System (ULowell)
----------------------------------------------------------------------
Date: Fri, 9 May 86 10:28:18 PDT
From: kube%cogsci@berkeley.edu (Paul Kube)
Subject: reductionist predictions: response to Nevin
From AIList Digest V4 #119:
>Date: Wed, 7 May 86 10:37:36 EDT
>From: Bruce Nevin <bnevin@cch.bbn.com>
>Subject: net intelligence
>A recent letter in Nature (318.14:178-180, 14 November 1985) illustrates
>nicely how behavior of a whole may not be predictable from behavior of
>its parts.
...
>The reductionist engineering prediction would be that the fish could
>respond no more quickly than its I/O devices allow, 2*10E-6 seconds.
>>From the reductionist point of view, it is inexplicable how the fish
>in fact responds in 4*10E-7 seconds. Somewhat reminiscent of the old
>saw about it being aerodynamically impossible for the bumblebee to fly!
...
> Bruce E. Nevin bnevin@bbncch.arpa
Of course, if you have a bad enough theory, it can get pretty hard to
figure out bumblebees, fish, or anything else. In this case, however,
predicting the behavior of the whole from the behavior of the parts
requires nothing more than the most elementary signal detection theory.
First, note that the fish does not respond in 4*10↑-7 seconds: the
latency of the jamming avoidance response (JAR) is only 3*10↑-1
seconds. What the fish is able to do is reliably detect temporal
disparities in signal arrival on the order of 4*10↑-7 seconds, and to
do this with arrival-time detectors having standard deviation of error no
better than 1*10↑-5 seconds. The standard, `reductionist engineering'
explication of this goes as follows:
The fish has 3*10↑-1 seconds to initiate JAR. In this time, it can
observe 100 electric organ discharges (EOD's) from the offending fish;
its job is to reliably and accurately (>90% confidence within 4*10↑-7
seconds) figure out disparities in arrival times of (some component
of) the discharges between different regions of its body surface.
This will be done by taking the difference in firing time of
discharge-arrival detectors which have standard deviation of error of
1*10↑-5 seconds.
It is well known that the standard deviation of the average of N
observations of a normally distributed random variable with standard
deviation sigma is sigma / sqrt(N); so here the average of the 100
observations of arrival time at a single detector will have standard
deviation 1*10↑-5 / sqrt(100) = 1*10↑-6 seconds (and so a 95%
confidence interval of two standard deviations = 2*10↑-6 seconds, as
reported by Rose and Heiligenberg).  Since the variance of the
difference of two independent, identically distributed normal random
variables is twice the variance of either (so its standard deviation is
sqrt(2) times as large), the temporal disparity measurement has a
95% confidence interval of under 4*10↑-6 seconds.
But that's only one pair of detectors, and the fish is paved with
detectors. If you want to reduce the 95% confidence interval by another
order of magnitude, you just need to average over 100 suitably located
detector pairs. (Mechanisms exploiting this fact are also almost
certainly responsible for some binaural stereo perception in humans,
where the jitter in individual phase-sensitive neurons is much worse
than what's required to reliably judge which ear is getting the
wavefront first.)
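The averaging argument above is easy to sanity-check numerically.
Below is a small Monte Carlo sketch (Python, purely illustrative): the
single-detector noise of 1*10↑-5 s, the 100 discharges, and the 100
detector pairs come from the post; the pooling scheme and all names
are assumptions made for the sake of the example.

```python
import random

def simulated_disparity(true_dt, sigma, n_discharges, n_pairs, rng):
    """Estimate an arrival-time disparity by averaging noisy readings.

    Each of n_pairs detector pairs observes n_discharges EODs; every
    individual arrival-time reading has Gaussian error of s.d. sigma.
    """
    estimates = []
    for _ in range(n_pairs):
        # difference of two averaged arrival times for one detector pair
        a = sum(rng.gauss(0.0, sigma) for _ in range(n_discharges)) / n_discharges
        b = sum(rng.gauss(true_dt, sigma) for _ in range(n_discharges)) / n_discharges
        estimates.append(b - a)
    return sum(estimates) / n_pairs          # pool over all pairs

rng = random.Random(1)
sigma = 1e-5       # s.d. of a single arrival-time detector (from the post)
true_dt = 4e-7     # disparity the fish must resolve
trials = [simulated_disparity(true_dt, sigma, 100, 100, rng)
          for _ in range(200)]
errors = [t - true_dt for t in trials]
rms = (sum(e * e for e in errors) / len(errors)) ** 0.5
# Predicted s.d. of the pooled estimate: sigma * sqrt(2) / sqrt(100 * 100)
print(f"rms error {rms:.2e} s, predicted {sigma * 2**0.5 / 100:.2e} s")
```

With these numbers the pooled estimate's r.m.s. error comes out near
sigma*sqrt(2)/100, about 1.4*10↑-7 s, comfortably below the 4*10↑-7 s
disparity the fish must resolve.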
--Paul Kube
Berkeley EECS
kube@berkeley.edu
------------------------------
Date: 9 May 86 10:24:22 EDT
From: PRSPOOL@RED.RUTGERS.EDU
Subject: Abstracts of Technical Talks Published on AI-LIST
Surely none of us can attend all of the talks announced via
AIList.  The abstracts which appear have served as useful pointers for
me to current research in many different areas; I trust this has been
true for many of you as well.  These abstracts could serve this secondary
purpose even better if those who post them to the
network made an effort to include two additional pieces of information
in them:
1) A Computer Network address of the speaker.
2) One or more references to any recently published material
with the same, or similar content to the talk.
I know that this information would help me enormously. I assume the
same is true of others.
Peter R. Spool
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903
PRSpool@RUTGERS.ARPA
------------------------------
Date: Thu, 8 May 86 14:57 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Analogical Reasoning (UPenn)
CIS Colloquium - University of Pennsylvania
3:00pm Friday, May 9 - 216 Moore School
ANALOGICAL REASONING
Stuart Russell
Stanford University
I show the need for the application of domain knowledge in analogical
reasoning, and propose that this knowledge must take the form of a new class of
rule called a "determination".  Determinations can be given a first-order
definition, allowing them to be used to make valid analogical inferences
implemented within a logic programming system.  In such a system, analogical
reasoning can be more efficient than rule-based reasoning for some tasks.
Determinations appear to be a common form of regularity in the world, and form
a natural stage in the acquisition of knowledge. The overall approach taken in
this work can be extended to the general problem of the use of knowledge in
induction.
------------------------------
Date: Thu 8 May 86 07:50:34-PDT
From: Anne Richardson <RICHARDSON@SU-SCORE.ARPA>
Subject: Seminar - Checking Goedel's Proof (SU)
Natarajan Shankar will be visiting CSD on Thursday, May 15. While here, he
will be giving the following talk:
DAY: May 15, 1986
EVENT: AI Seminar
PLACE: Bldg. 380, Room 380 X
TIME: 5:15
TITLE: Checking the Proof of Godel's Incompleteness Theorem
with the Boyer-Moore theorem prover.
PERSON: Natarajan Shankar
FROM: The University of Texas at Austin
There is a widespread belief that computer proof-checking of significant
proofs in mathematics is infeasible. We argue against this belief by
presenting a formalization and proof of Godel's incompleteness theorem
that was checked with the Boyer-Moore theorem prover. This mechanical
proof establishes the essential incompleteness of Cohen's Z2 axioms for
hereditarily finite sets. The proof involves a metatheoretic formalization
of Shoenfield's first-order logic along with Cohen's Z2 axioms. Several
derived inference rules were proved as theorems about this logic. These
derived inference rules were used to develop enough set theory in order
to demonstrate the representability of a Lisp interpreter in this logic.
The Lisp interpreter was used to establish the computability of the
metatheoretic formalization of Z2. From this, the representability of
the Lisp interpreter, and the enumerability of proofs, an undecidable
sentence was constructed. The theorem prover was led to the observation
that if the undecidable sentence is either provable or disprovable, then
it is both provable and disprovable. The theory is therefore either
incomplete or inconsistent.
------------------------------
Date: Thu, 8 May 86 13:17:29 pdt
From: Premla Nangia <pam@su-whitney.ARPA>
Subject: Seminar - NL Interfaces to Software Systems (SU)
COMPUTER SCIENCE DEPARTMENT
COLLOQUIUM
Speaker: C. Raymond Perrault
SRI International and CSLI
Title: A Strategy for Developing Natural Language Interfaces
to Software Systems
Time: Tuesday, May 27, 1986 --- 4:15 p.m.
Place: Skilling Auditorium
Refreshments: 3rd floor Lounge, Margaret Jacks Hall --- 3:45 p.m.
The commonly accepted perspective on the semantics of natural language
interfaces is that they are derived from the semantics of the underlying
software, e.g. a database. Although there appear to be computational
advantages to this position, it limits the linguistic coverage of the
interface and presents severe obstacles to their systematic construction
by confusing meaningful queries with answerable ones. We suggest
instead that interfaces be constructed by first defining the semantics
of the underlying software in terms of those of the interface language
and give criteria under which some of the computational advantage of
the meaningfulness-answerability confusion can be acceptably regained.
------------------------------
Date: Fri, 9 May 86 15:01 EDT
From: Graphics Research Lab x2681
<GRINSTEIN%ulowell.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - The SNePS Semantic Network Processing System
(ULowell)
The University of Lowell's seminar series continues through the
summer with
THE SNePS SEMANTIC NETWORK PROCESSING SYSTEM
Stuart C. Shapiro
William J. Rapaport
Department of Computer Science
State University of New York at Buffalo
The SNePS Semantic Network Processing System is a semantic network
knowledge representation and reasoning system with facilities for build-
ing semantic networks to represent virtually any kind of information,
retrieving information from them, and performing inference with them.
Users can interact with SNePS in a variety of interface languages,
including a LISP-like user language, a menu-based screen-oriented edi-
tor, a graphics-oriented editor, a higher-order-logic language, and an
extendible fragment of English.
We will discuss the syntax and semantics of SNePS considered as an
intensional knowledge-representation system and provide examples of uses
of SNePS for cognitive modeling, database management, pattern recogni-
tion, expert systems, belief revision, and computational linguistics.
in Olney 428
on May 20, 1986
from 9:00 to lunch with refreshment breaks
at the University of Lowell (Lowell MA)
For further information call Georges Grinstein at 617-452-5000
------------------------------
End of AIList Digest
********************
∂15-May-86 1827 LAWS@SRI-AI.ARPA AIList Digest V4 #121
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 15 May 86 18:27:00 PDT
Date: Fri 9 May 1986 17:04-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #121
To: AIList@SRI-AI
AIList Digest Saturday, 10 May 1986 Volume 4 : Issue 121
Today's Topics:
Queries - Inferring Program Structure & Machine Translation & Prolog,
Literature - Object-Oriented Programming,
Expert Systems - Expert Systems and Decision Trees
----------------------------------------------------------------------
Date: 8 May 86 13:51:13 EDT
From: Angela.Hickman@ML.RI.CMU.EDU
Subject: References needed!
[Forwarded from the CMU bboard by Laws@SRI-AI.]
I recently received e-mail from a former professor asking for some
references. Below is part of his mail. If you know of any references in
this area, please send mail to ach@ml.
_____
I have a student who will probably do a piece of work in software
engineering. The idea is to take existing programs and try to infer data
structures and data flow. The trick is that these will be large program
systems (many modules), written and "enhanced" (~= "modified" or
"corrected") by many people over some time. Furthermore, they will not have
been designed or built with any of the modern software development
methodologies. In short, they will be real programs that have been
maintained by many people.
Part of the work may involve expert systems and AI work to develop a rules
base to infer the structure. Do you know of anyone doing work in any of
these areas?
------------------------------
Date: 9 May 86 03:27:00 EDT (Fri)
From: Hideto Tomabechi <tomabechi@YALE.ARPA>
Subject: machine translation
I would like to know what types of machine translation projects are
underway now, especially in universities.  I have been working on
English-Japanese translation myself, and I hope to share opinions in
this field.  If anyone is currently working on machine translation, I
would appreciate receiving some information about your ongoing
project.
Hideto Tomabechi
Yale University
tomabechi@yale.arpa
------------------------------
Date: 8 May 86 02:04:05 GMT
From: decwrl!glacier!oliveb!bene!luke!itkin@ucbvax.berkeley.edu
Subject: looking for Prolog
I'm looking for a version of Prolog. The machines available to me
include an AT&T 7300 (Unix PC), AT&T 3B5, AT&T 3B2, Plexus P/60, Plexus
P/35, IBMPC, and AT&T 6300PC (IBMPC compatible). I've spoken with
someone from AT&T who suggests that Quintus may be porting to the 7300.
I've spoken with someone from Quintus who says there is no port and no
contract at this time. I've heard of something called C-Prolog, but
don't know for sure what it is.
What I'm looking for is a system on which I can begin to learn Prolog
and prototype some applications. Any help will be GREATLY appreciated.
Public domain or commercial is fine, as long as the price is reasonable
or I can convince my employer.
advTHANKSance
--
***
* Steven List @ Benetics Corporation, Mt. View, CA
* Just part of the stock at "Uncle Bene's Farm"
* {cdp,engfocus,idi,oliveb,plx,tolerant}!bene!luke!itkin
***
------------------------------
Date: Wed 7 May 86 23:25:20-PDT
From: Hiroshi G. Okuno <Okuno@SUMEX-AIM.ARPA>
Subject: Re: What's a good book on Object-Oriented Programming
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
If you can read Japanese, I would recommend the following book:
"Object-Oriented Programming" edited by Norihisa Suzuki, Kyoritsu
Publishing Co., (Dec. 1985), 2,500 yen (about $14.00).
Contents: Introduction to Smalltalk, Actor, TAO, Concurrent Smalltalk,
Prolog environments written in Smalltalk, CAI on Physics written in
object-oriented system, etc.
Why do I recommend this book?  Of course, because I'm one of the
co-authors.
P.S. Sayuri (Nishimura@sumex) and Masafumi (Minami@sumex) have a book.
- Gitchang -
------------------------------
Date: Thu 8 May 86 08:40:36-PDT
From: Bob Engelmore <Engelmore@SUMEX-AIM.ARPA>
Subject: Re: What's a good book on Object-Oriented Programming
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
And if you can't read Japanese, I recommend reading the article by
Mark Stefik and Danny Bobrow in the AI Magazine, Vol. 6, No.4,
Winter 1986. However, I'm biased about articles in that rag.
rse
------------------------------
Date: Thu 8 May 86 08:57:37-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: Re: What's a good book on Object-Oriented Programming
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
There's a new introductory book (available now or soon) on Smalltalk
by Ted Kaehler and (I think) Dave Patterson that is supposed to be a
very good and relatively inexpensive book on Smalltalk, the
"prototypical" object-oriented programming language.  Of course, there
is also the Addison-Wesley series on Smalltalk: more expensive, more
detailed, and harder to carry around with a bag of groceries.
mark
------------------------------
Date: Thu 8 May 86 10:39:36-PDT
From: Marvin Zauderer <ZAUDERER@su-sushi.ARPA>
Subject: Re: Good book on object-oriented programming
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
Mark Richer's correct; the new Smalltalk book, "A Taste of Smalltalk",
was written by Ted Kaehler and Dave Patterson. I don't believe Addison-
Wesley has published it yet, although I was told it's due "sometime
in 1986." It's a good introduction to Smalltalk; actually, it's more
of an introduction to Smalltalk than a detailed explanation of object-
oriented programming, although any introduction to Smalltalk
necessarily involves a (brief) introduction to obj-or programming.
-- Marvin
------------------------------
Date: Fri, 9 May 86 04:53:30 EDT
From: ihnp4!lzaz!psc@seismo.CSS.GOV
Subject: Non-trivial expert systems and decision trees - THE RESPONSES!
How pleasant! I got several thoughtful replies to my comments
on experts systems and decision trees. I'd like to thank all
the people who sent me mail. I've summarized the responses
below. I never got what I was *really* looking for, namely, a
good benchmark expert system. More on that after the summary.
The general consensus is that rule based expert systems offer no
more power than decision trees, in just exactly the same way
<your favorite programming language> offers no more power than a
Turing machine. Of course, there are advantages. . . .
===== discussing the problem off the net, I wrote:
My "crisis of faith" has two prongs on it. First, it seems you could
write a production compiler to generate the decision tree from the
productions. The compiler would need a lot of resources, but the
resulting "compiled" expert system would run quickly and in very little
memory. (Other, tougher objections: user can't direct search,
uncertainty hard to track, and only works with forward chaining.)
[Note: going over some other mail I'd received, I discovered an
expert system shell named Radian that does just this.  It translates
a set of rules into a decision tree implemented as a C program!
The resulting program can explain itself.  psc]
Second and more disturbing, *every* example "expert system" I've ever
seen that uses productions was written by someone who *first* drew a
decision tree!  That's clearly missing the point.  I'm looking for systems
that are "non-trivial", not in the sense that they have a lot of rules, but
in the sense that they aren't just solving a problem that's *better* solved
by a straightforward decision tree.  Know of any?
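As a sketch of the first prong (the rule-to-tree compiler), here is a
toy in Python.  The rules, attributes, and first-match semantics are
invented for illustration only; real shells add uncertainty handling,
choice of chaining direction, and explanation, none of which this
touches:

```python
def compile_rules(rules, attrs):
    """Compile flat production rules into a decision tree.

    rules: list of (conditions, action), where conditions maps
    attribute -> required value.  attrs fixes the test order.
    A tree node is ('test', attr, {value: subtree}, default_subtree);
    a leaf is an action (or None when no rule fires).
    """
    if not rules:
        return None
    if not attrs:
        return rules[0][1]              # first applicable rule wins
    attr, rest = attrs[0], attrs[1:]
    branches = {}
    for v in {c[attr] for c, _ in rules if attr in c}:
        # rules that require attr == v, plus rules indifferent to attr
        branches[v] = compile_rules(
            [(c, a) for c, a in rules if c.get(attr, v) == v], rest)
    default = compile_rules([(c, a) for c, a in rules if attr not in c], rest)
    return ('test', attr, branches, default)

def run(tree, facts):
    """Classify a fact set by walking the tree -- no rule matching at all."""
    while isinstance(tree, tuple):
        _, attr, branches, default = tree
        tree = branches.get(facts.get(attr), default)
    return tree

# toy rule base (hypothetical attributes, for illustration only)
rules = [({'fever': 'yes', 'rash': 'yes'}, 'measles'),
         ({'fever': 'yes', 'rash': 'no'},  'flu'),
         ({'fever': 'no'},                 'healthy')]
tree = compile_rules(rules, ['fever', 'rash'])
print(run(tree, {'fever': 'yes', 'rash': 'no'}))   # -> flu
```

The "compilation" happens once; after that, classification is a plain
walk down nested branches, which is the efficiency argument made above.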
===== Dale Skran (ihnp4!mtgzz!dls):
In general all expert systems can be reduced to
tree searches which can be mapped into pattern matching
operations. . . . The real savings of rule based systems
is that you just add rules and skip the tree.
===== dchandra@TRILLIAN.ARPA identified five kinds of rules:
a) rules for strategy (meta rules)
b) rules for inheritance between objects.
c) rules for normal inference (equivalent to decision tree)
d) rules which create new rules (we have built a rule shell
called IMST which provides this feature); we have a system
called CDLII which uses this feature to post constraints.
e) rules can exist in packets and can communicate
through global and local blackboards. Decision trees
do not have a notion of private and global databases.
Decision trees emulate only part (c) above. . . .
Consider this statement: All non-lisp machine AI programs get
compiled into assembly language.  So what is so great about lisp?
LISP IS A DATA ABSTRACTION ABOVE ASSEMBLY.
RULE-BASED SYSTEMS ARE AN ABSTRACTION ABOVE DECISION TREES OR
OTHER LOW-LEVEL STUFF.
===== Jean-Francois Lamy <ihnp4!utcsri!lamy%utai>
It does seem to turn out that once you have written down all the
rules and got the system to work the way you want you now
understand the problem well enough that you don't need the fancy
and inefficient AI solution anymore.
One has to realize that not all problems are amenable to
formulation using the brain-damaged OPS5-like production rules
systems. In particular, problems which require a HUGE amount of
implicit knowledge about the world don't quite fit. Consider
story understanding or finding causal relationships in data that
require multiple forms of reasoning (e.g. a heart's physical
malfunction, electrical malfunction, or chemical imbalance).
===== Bruce Morlan (pur-ee!rutabaga) goes out on a limb for trees:
At risk of being burned for heresy, I would claim (in my dissertation
I will claim) that there is no significant difference between the
following three systems:
(0) rule-based expert systems,
(1) production systems,
(2) decision trees.
This is consistent with results documented in many places; I would
refer first to Vol. I of "The Encyclopedia of AI" for support.
This claim extends to expert systems with uncertainty, such as those of
the MYCIN or PROSPECTOR class.  In my research I have concluded that
the collection of rules from an expert must result in data suitable
for use in a Markovian decision process.
Whether this applies to _all_ expert systems remains to be seen, and
I would be very interested in hearing about a system that didn't fit
this mold (as you alluded to in your posting).
===== Ehud Reiter (ihnp4!seismo!harvard!reiter):
Decision trees are both very useful and non-trivial to program
if you want to do it "right" (backward chaining, truth
maintenance, interactive graphical tree editing, multiple
solutions, explanations, etc. - I know because I've tried to
implement one). Whether marketing calls the program a "decision
tree" (which they should) or an "expert system" (which means
more sales) is irrelevant - it's still a useful but complex piece
of code.
===== Donald R. Tveter (ihnp4!bradley!drt) takes a useful step backwards:
In going through graduate school and taking some AI courses,
it came to me that what I was seeing in AI courses I had seen
before.  I found the principles in an old psychology book I
had once read: Psychology, by William James, first published in
the 1890's.  In his chapter on Association, he showed how people
think.  A careful comparison between what he said then and what
people do now in their expert systems shows no significant
differences.
===== Mark R. Leeper (ihnp4!mtgzz!leeper):
Don't YOU make your diagnostic inferences by a decision tree?
It may not be a binary tree, but then expert systems don't have
to use binary trees either.
=====
Only Dale Skran suggested a benchmark expert system: the
"monkeys and bananas" problem. Usually shown in OPS5, this
system has a hungry monkey, a locked vault on the ceiling with
bananas, another locked vault with the key to the first, and a
ladder. (I may have forgotten a vault or key or two).
I'm not at all sure that the PC-based expert systems I'll be
reviewing can handle that problem!  The difficulty is keeping
track of changing values (the monkey and the ladder move a
*lot*!).  The one system I'm using now doesn't get past the
monkey's first move (in a very simplified version).  Of course,
if a particular expert system shell can't handle this problem,
that's useful information, too!
As a brute-force synthetic benchmark, I'm going to have the
expert system traverse a network of nodes equivalent to the
Towers of Hanoi puzzle, with some "cuts" (forbidden moves) that
force it to make twice as many moves as necessary. (In fact, it
must do the equivalent of moving the disks to the middle peg
first.) Both the cuts and resulting network are symmetrical,
keeping the comparison fair for forward- and backward-chaining
systems. A picture is worth a few thousand words: see Figure
2-2, page 82, in Nilsson's PROBLEM SOLVING METHODS IN ARTIFICIAL
INTELLIGENCE.
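Nilsson's exact network and cuts aren't reproduced here, but the
underlying Towers-of-Hanoi state graph is easy to generate and search.
The sketch below (Python; the state encoding and the forbidden-move
interface are my assumptions, not Nilsson's formulation) builds moves
on the fly and finds shortest paths by breadth-first search; passing a
set of forbidden (state, successor) pairs models the cuts:

```python
from collections import deque

def hanoi_moves(state):
    """Legal successor states: state[d] = peg (0-2) of disk d,
    with disks numbered 0 (smallest) to n-1."""
    n = len(state)
    moves = []
    for src in range(3):
        disks = [d for d in range(n) if state[d] == src]
        if not disks:
            continue
        top = min(disks)                 # only the smallest disk on src moves
        for dst in range(3):
            if dst == src:
                continue
            # legal only if no smaller disk already sits on dst
            if not any(state[d] == dst and d < top for d in range(n)):
                new = list(state)
                new[top] = dst
                moves.append(tuple(new))
    return moves

def shortest(start, goal, forbidden=frozenset()):
    """BFS path length in the Hanoi state graph, skipping forbidden edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        state, dist = frontier.popleft()
        if state == goal:
            return dist
        for nxt in hanoi_moves(state):
            if (state, nxt) in forbidden or nxt in seen:
                continue
            seen.add(nxt)
            frontier.append((nxt, dist + 1))
    return None                          # cuts made the goal unreachable

start, goal = (0, 0, 0), (2, 2, 2)
print(shortest(start, goal))             # classic 3-disk optimum: 7 moves
```

A chaining expert system traversing the same network can then be timed
against this known-optimal baseline.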
And I'll use the travel advisory system in the latest issue of
PC, if that doesn't require access to a full database system
(which only Guru has).
I'm still not satisfied; any suggestions for benchmarks?
Thanks again for your comments.
---
-Paul S. R. Chisholm, UUCP {ihnp4,cbosgd,pegasus,mtgzz}!lznv!psc
AT&T Mail !psrchisholm, Internet mtgzz!lznv!psc@topaz.rutgers.edu
The above opinions may not be shared by any telecomm company.
AT&T Transaction Services - the right choice for point-of-sale networking.
------------------------------
End of AIList Digest
********************
[RV - Deleted duplicate issues V4 #120, #121.]
∂20-May-86 0132 LAWS@SRI-AI.ARPA AIList Digest V4 #124
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 May 86 01:32:00 PDT
Date: Mon 19 May 1986 23:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #124
To: AIList@SRI-AI
AIList Digest Tuesday, 20 May 1986 Volume 4 : Issue 124
Today's Topics:
Queries - Expert Systems for Theoretical Mathematics &
Logic/Functional Languages & VAX VMS Lisp & Bray Reference,
AI Tools - Common LISP Style & Prolog for AI Book & Turbo Prolog
----------------------------------------------------------------------
Date: 14 May 86 04:34:51 GMT
From: ihnp4!alberta!sask!kusalik@ucbvax.berkeley.edu (Tony Kusalik)
Subject: Expert Systems (info wanted)
I am looking for any pointers/info on
past/existing/prospective expert systems
for theoretical mathematics written in Prolog
or other languages based on logical inference.
thanks.
Tony Kusalik
kusalik@sask.bitnet
...!{ihnp4,ubc-vision,alberta}!sask!kimnovax!kusalik
------------------------------
Date: Thu, 15 May 86 21:14 EDT
From: Paul Fishwick <Fishwick%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Logic/Functional Languages?
Does anyone on the list know of available languages incorporating both
logic and functional programming (preferably in a Unix 4.2 environment
or possibly an IBM/PC)? Specifically, I'm looking for one or both
of the following:
1) Some version of Prolog embedded within Common Lisp (I've heard of
LISPLOG or POPLOG - anyone have any experience with these?). A
set of add-on macros or function library to an already extant
lisp would be best.
2) Any of the systems discussed in the book "Logic Programming -
Functions, Relations, and Equations" by DeGroot & Lindstrom.
Has anyone produced any large applications with these hybrid systems?
Are the benefits derived from the systems *significant* (over using, say,
vanilla lisp or prolog)? If I get enough replies, I will post a summary
of names and addresses where these languages can be obtained...Thanks.
-paul
------------------------------
Date: Thu, 15 May 86 18:27:04 PDT
From: larry@Jpl-VLSI.ARPA
Subject: VAX VMS LisP
Are there any Common LisPs for the VAX under VMS? (DEC's VAX LisP is an
Ultrix product only, so far as I know.)
If there's no (decent) Common LisP, what is the best choice?
Larry @ jpl-vlsi.arpa
------------------------------
Date: 16 May 86 15:09:44 PDT (Friday)
From: Hoffman.es@Xerox.COM
Subject: Bray reference
I'm looking for more details on one reference in the recent
bibliographies. I didn't save them, so here is my own version:
Bray, M. and G. Schmidt (editors), 'Proceedings of NATO
Summer School on Theoretical Foundations of Programming
Methodology', Dordrecht: Riedel (1982).
With just this much, librarians can't seem to find any listing for it.
Can anyone supply more information?
Thanks,
--Rodney Hoffman
------------------------------
Date: 14 May 86 00:59:00 GMT
From: pur-ee!uiucdcs!uiucdcsp!bsmith@ucbvax.berkeley.edu
Subject: Re: Common LISP style standards.
A couple of short comments.
First, about comments: you might want to embed in a function
a string that will print out as on-the-fly documentation if the system
supports it (Symbolics does).  This helps when using a function you wrote
two months earlier that's lost somewhere in 200 pages of code.
Second, there are a couple of rules about using conditionals that
make a lot of sense. If you have a single condition followed by a single
then statement followed by a single else statement, use "if." If you have
a single condition followed by a single then statement and no else
statement, use "when." If you have a single negative condition followed
by a single then statement, use "unless." If you have multiple conditions,
or need to use progn anywhere, a cond is more readable.
------------------------------
Date: 16 May 86 19:27:00 GMT
From: pur-ee!uiucdcs!uiucdcsb!mozetic@ucbvax.berkeley.edu
Subject: New book: Prolog for AI
Addison-Wesley published a new book:
PROLOG Programming for Artificial Intelligence by Ivan Bratko
The first part introduces Prolog and shows how Prolog programs
are developed.
The second part applies Prolog to some central areas of AI, and
introduces fundamental AI techniques through complete Prolog
programs.  Throughout the book there are many exercises and
sample programs.  The following is the table of contents:
THE PROLOG LANGUAGE
1. An Overview of Prolog
2. Syntax and Meaning of Prolog Programs
3. Lists, Operators, Arithmetic
4. Using Structures: Example Programs
5. Controlling Backtracking
6. Input and Output
7. More Built-in Procedures
8. Programming Style and Technique
PROLOG IN ARTIFICIAL INTELLIGENCE
9. Operations on Data Structures
10. Advanced Tree Representations
11. Basic Problem-Solving Strategies
12. Best-first: A Heuristic Search Principle
13. Problem Reduction and AND/OR Graphs
14. Expert Systems
15. Game Playing
16. Pattern-Directed Programming
------------------------------
Date: Tue, 13 May 86 00:49:19 PDT
From: newton@vlsi.caltech.edu (Mike Newton)
Subject: Review of Turbo-Prolog
[This is a review of Turbo Prolog. I have *not* read all of the
manual, nor used it on many programs. Views expressed are from the
perspective of someone who has done the code generation and
evaluatable predicates for a high-speed (810 KLips on one processor
of an IBM 3090) Prolog compiler.  I have no affiliation with
Borland, and only a (*very*) indirect affiliation with IBM -- MON]
From a local software store we purchased Turbo Prolog over the weekend.
It came as a cellophane wrapped book with a couple of floppies. It cost
$69.95, list of $99.
The environment was very nice.  There were windows for the editor, goals,
debugging information, and messages.  This seemed well done, and responded
reasonably well (I am not used to IBM PCs).
The unfortunate part was the Pascal-ization of the language. Everything
had to be typed (they called it domains). As far as I could tell, lists
had to be composed solely of other lists or elements all of one type. One
had to define the possible terms (giving the functor) that could be
arguments to a predicate. It seemed impossible to write generic predicates
for dealing with arbitrary types of terms.
Ex: to have a term that could be a 'symbol' (atom) or an integer
one had to do this:
domains
aori = a(atom) or i(integer)
It was not possible to just use an atom or an integer as a subterm...
Typing each subterm of a term is not my idea of Prolog.
After about an hour we got the 'standard' timing example of naive
reverse running.  (Some people have used other, non-environment-creating
samples; this is an unfair comparison.)  It did 496 unifications in
approximately 11/100 of a second.  This amounts to a speed of a little
under 5 Klips.  Considering that they do not need to do 'real' unification
(since everything is pre-typed, and thus can be reduced to a simple test),
this speed is not particularly impressive.
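The review's figures are internally consistent: naive reverse on a
30-element list performs 496 logical inferences (counting one per
procedure call, the usual benchmark convention), and 496 inferences in
about 0.11 s is just under 5 KLIPS.  A quick tally, in Python for
convenience:

```python
def nrev_inferences(n):
    """Inferences for naive reverse of an n-element list, one per
    Prolog procedure call: nrev([],[]) costs 1; nrev([H|T],R) costs
    1 + nrev of the (n-1)-tail + append of the reversed tail, where
    append of an m-element list costs m+1 calls."""
    if n == 0:
        return 1
    return 1 + nrev_inferences(n - 1) + n   # append of (n-1)-list: n calls

calls = nrev_inferences(30)
print(calls)                  # -> 496, the standard benchmark count
print(round(calls / 0.11))    # -> 4509 inferences/sec, under 5 KLIPS
```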
- mike
newton@cit-vax.caltech.edu {ucbvax!cithep,amdahl}!cit-vax!newton
Caltech 256-80 818-356-6771 (afternoons,nights)
Pasadena CA 91125 Beach Bums Anonymous, Pasadena President
------------------------------
Date: 16 May 86 09:24:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Preliminary notes on Turbo Prolog
Quickie Review of Turbo Prolog
This is a rough set of notes about Turbo Prolog (hereafter TP).
It is a *linguistic* comparison of TP vs. the Clocksin & Mellish
book (CM) and Pereira's CProlog (CP). It is based mostly on a
reading of the TP manual, not live experimentation with the
product itself. There is no evaluation of performance, nor much
of the programming environment provided by TP.
TP is related to, but by no means compatible with, either CM or CP.
In the list below, I've put TP/CP differences first, and then
TP enhancements.
1. Declarations.
Structures must have the types of their arguments declared. That
is, you can't just toss in compound terms in facts and rules.
The functors for all predicates must be declared, together with
role names for each of the arguments, in a PREDICATES section,
like this:
predicates
person(name, height, weight, hair_color)
name2(last, first)
name3(last, first, middle)
and each role must declare from which domain(s) it is drawn,
in a DOMAINS section (which must precede the predicates section);
very relational databasy:
domains
name = name2(last, first); name3(last, first, middle)
/* name is either a name2 or name3 structure. */
first, middle, last = symbol
/* first, middle, and last are all (atomic) symbols. */
height = integer
weight = real
hair_color = symbol
What's normally thought of as the regular program is contained
in a CLAUSES section, following the two above.
There are five primitive atomic data types (integer, real, char,
string, symbol), and everything is built from these.
A given domain may consist of a single primitive type or a
disjunction of compound types, but *not* a disjunction of
primitive types. Since lists are declared like this:
numlist = integer* /* numlist is a list of integers */
it appears that lists must be relatively homogeneous, ie,
must contain elements of either a single primitive type, or
a few compound types. The whole flavor is much more that of
compilation, data definition, Pascal, and type-checking, than of
the usual interpreted, free-spirited CP or CM. Thus TP stresses
documentation, security, and efficiency, but disables some
dynamic data building features.
2. Declaring the use of Arguments.
When an argument may be passed unbound from one sub-goal to
another, its domain must be declared as a *reference* domain, back up
in the domain declarations, to tell TP to pass it by reference;
TP otherwise assumes arguments can be passed by value (i.e.
already instantiated), eg:
domains
height = reference integer /* height is a pointer to an integer */
tree = reference node(integer, tree, tree)
/* note recursive structure */
3. No meta-logical probing of the DB.
There is nothing like CP's predicates: =.., functor, arg,
clause, or current_functor for fiddling with the current
program's clauses.  In general, there is no run-time
inspection or building of rules.  Assert and retract work
only for facts.
4. There are no user-defined operators, nor CP's expand_term
for pre-processing.
5. Syntax shuffling -
TP CP
--------- --------
bound, free nonvar, var
< @< /* for strings and symbols */
= is /* numeric computation */
= =:= /* numeric test */
equal = /* term unification */
bitand, bitor /\, \/
bitnot \
bitleft, bitright <<, >>
Also, all rules for the same predicate must be lexically
contiguous, and you can use the keywords "and", "or", and "if"
instead of the symbols ",", ";", and ":-".
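For instance, a rule in the keyword form (a hypothetical example)
looks like this, with its CP equivalent in the comment:

```prolog
/* TP keyword syntax: */
grandparent(X, Z) if
    parent(X, Y) and
    parent(Y, Z).
/* CP equivalent:
   grandparent(X, Z) :- parent(X, Y), parent(Y, Z). */
```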
6. TP implements CM's findall, rather than CP's setof and bagof.
7. TP has lots of features for handling files and I/O.
Its predicates for input (read_x), however, expect to know
the type of object, e.g., readint, readreal. TP doesn't have
CP's get for single character input. It does have readln
to read an entire line into a string.
8. TP has goodies to handle (fixed-format) databases on
disk. Eg, dbassert/dbretract add/delete facts to/from
an external (permanent) database.
9. TP has features for program modularization. Each module
can be compiled independently, and has its own name space,
eg, for domains and predicates. There is also a way to set up
global domains and predicates visible to all modules.
10. TP handles character-strings as a full-fledged data-type.
Also it has functions for conversion among the primitive types.
11. TP has predicates to control graphics, windows, sound, etc.
12. The TP editor is Wordstar-like, not especially Prolog-oriented.
The opinions expressed herein have been officially approved and
sanctioned by the Supreme Court, both houses of Congress,...
no, no, only kidding, only kidding, actually I didn't even
consult with them - please don't blame anyone but me.
John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
------------------------------
End of AIList Digest
********************
∂20-May-86 0405 LAWS@SRI-AI.ARPA AIList Digest V4 #125
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 20 May 86 04:05:13 PDT
Date: Mon 19 May 1986 23:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #125
To: AIList@SRI-AI
AIList Digest Tuesday, 20 May 1986 Volume 4 : Issue 125
Today's Topics:
Queries - Neural Networks & Inside and Outside & EURISKO &
Strength of Chess Computers & Conway's Game of LIFE & Prolog nth,
Replies - Prolog nth & Neural Networks & Shape & Conway's Game of LIFE,
AI Tools - PCLS Common Lisp
----------------------------------------------------------------------
Date: 11 May 86 09:02:16 GMT
From: tektronix!uw-beaver!bullwinkle!rochester!seismo!gatech!akgua!whuxlm!whuxl!houxm!mtuxo!orsay@ucbvax.berkeley.edu (j.ratsaby)
Subject: Re: neural networks
>
> Stephen Grossberg has been publishing on neural networks for 20 years.
> He pays special attention to designing adaptive neural networks that
> are self-organizing and mathematically stable. Some good recent
> references are:
I would like to ask you the following:
From all the books that you read, was there any machine built or simulation
that actually learned by adapting its inner structure?
If so, what type of information was learned by the machine, and in what
quantities? What action was taken to ask the machine to "remember" and
retrieve information? And finally, where do we stand today; that is, to
your knowledge, which machine behaves the closest to the
biological brain?
I would very much appreciate reading some of your thoughts about the above.
Thanks in advance.
joel Ratsaby
!mtuxo!orsay
------------------------------
Date: Wed 14 May 86 17:36:44-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: inside and outside
Dr. Who's Tardis seems to have a larger interior than exterior. People find
this not outrageously unintuitive, and I am trying to understand why. Which
of the following 'explanations' do people find intuitively satisfying?
1. the inside is just larger than the outside, that's all.
2. there is a different kind of space inside the Tardis, so more can
be fitted into it.
3. the 'interior' isn't inside the police box at all; it's somewhere else,
and the door is a transporter device.
4. the door changes sizes, shrinking things on the way in and magnifying
them on the way out, and the interior is built on a small scale. (As in
the film 'Fantastic Voyage'.)
5. something else ( what? )
This particular idea recurs in folklore and children's fantasy, whereas other
equally impossible concepts are met with less often ( something being in two
places at once, for example ). This suggests that it might illustrate a
natural separation between different parts of our spatial intuition.
Send intuitions, explanations, comments to PHAYES@SRI-KL. Thanks.
Pat Hayes
SPAR
------------------------------
Date: 15 May 86 07:45:26 GMT
From: ingres.Berkeley.EDU!grady@ucbvax.berkeley.edu (Steven Grady)
Subject: AI in IAsfm
In the June 86 issue of IAsfm, there's a fascinating article on AI and
common sense. In this article, the author mentions a program called
Eurisko, which I had heard about before briefly, but which I'm now
reminded of. Do people have references to this? How can I find out
more about it?
Steven Grady
...!ucbvax!grady
grady@ingres.berkeley.edu
[I've sent Steven the list of Lenat references that appeared in
#2/117, 12 Sep 1984. The most pertinent is
D. B. Lenat, "EURISKO: A Program that Learns New Heuristics and
Domain Concepts," Artificial Intelligence, March 1983.
Also available as Report HPP-82-26, Heuristic Programming Project,
Dept. of Computer Science and Medicine, Stanford University, Stanford,
California, October 1982.
-- KIL]
------------------------------
Date: Fri, 16 May 86 16:50:48 PDT
From: cracraft@isi-venera.arpa
Subject: strength of commercially available chess computers
This message is primarily addressed to Tony Marsland, AILIST member,
but is of general interest to the rest of the list as well.
Tony, for our readers, what are the three strongest available
chess machines according to the Swedish article in the most
recent issue of the ICCA Journal?
I was told today by a third party (THE PLAYERS company in
Los Angeles), that the Novag Constellation Expert is
approximately 2100 in chess strength. I find this impossible
to believe because it runs on a tiny 8-bit processor at
1/50,000th the speed of a Belle, Cray Blitz, or Hitech
which barely pass 2100 in chess strength. It should be
noted that Hitech's rating is based on very few
tournament games. The same is true of Cray Blitz. Only
Belle has a sufficient base to qualify its 2200 rating claim.
Well, Tony, what are they? Take care.
Stuart Cracraft
------------------------------
Date: 16 May 86 15:21:15 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!seismo!mcvax!ukc!reading!onion.cs.reading.AC.UK!scm@ucbvax.berkeley.edu (Stephen Marsh)
Subject: Conway's game of LIFE
Here's an enquiry about John Conway's game of LIFE,
a simulation of the birth, life and death of cells placed on
a grid. It was devised about 1970 and was based on the theory
of cellular automata. It became of great interest to a large
number of people after it was discussed by Martin Gardner
in Scientific American (Oct 1970-Mar 1971).
I would like to know if anyone has done or knows of
any investigation into aspects of the LIFE simulation since
the outburst of interest in 1970. If they have, or know of
any book that contains a (not too theoretical) run-down of
cellular automata, perhaps with reference to LIFE, could they let
me know?
Many thanks
Steve Marsh
scm@onion.cs.reading.uk
Steve Marsh,
Department of Computer Science,
PO BOX 220,
University of Reading,
Whiteknights,
READING UK.
------------------------------
Date: 14 May 86 09:00:11 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!topaz!lll-crg!booter@ucbvax.berkeley.edu (Elaine Richards)
Subject: HELP!!!!!
I have been given this silly assignment to do in Prolog, a language which
is rapidly losing my favor. We are to do the following:
Define a Prolog predicate len(N,L) such that N is the length of list L.
Define a Prolog predicate nth(X,N,L) such that X is the Nth element of
list L.
I cannot seem to instantiate N past the value 0 on any of these.
My code looks like this:
len(N,[]) :- 0 !.
len(N,[_|Y] :- N! is N + 1,len(N1,L].
It gives me an error message indicating "_6543etc" or somesuch ghastly
number/variable refuses to take the arithmetic operation.
The code for nth is similar and gives a similar error message.
Please send replies to {whateverthepathis}lll-crg!csuh!booter.
E
*****
------------------------------
Date: 16 May 86 19:48:00 GMT
From: pur-ee!uiucdcs!uiucdcsb!mozetic@ucbvax.berkeley.edu
Subject: Re: HELP!!!!!
% How about the following:
len( 0, [] ).
len( N, [_ | L] ) :- len( N0, L ), N is N0 + 1.
nth( X, 1, [X | _] ).
nth( X, N, [_ | L] ) :- N > 1, N0 is N - 1, nth( X, N0, L ).
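An accumulator variant of len is also possible; it counts on the way
down the list rather than on the way back up. A sketch in standard
Prolog (the name len2 is invented):

```prolog
/* len2(N, L): N is the length of list L, computed with an
   accumulator that starts at 0 and is threaded down the list. */
len2( N, L ) :- len2( L, 0, N ).
len2( [], N, N ).
len2( [_ | T], A, N ) :- A1 is A + 1, len2( T, A1, N ).
/* ?- len2(N, [a, b, c]).  gives  N = 3 */
```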
------------------------------
Date: 14 May 86 20:44:09 GMT
From: tektronix!uw-beaver!bullwinkle!rochester!seismo!lll-crg!topaz!harvard!bu-cs!jam@ucbvax.berkeley.edu (Jonathan A. Marshall)
Subject: Re: neural networks
In article <1583@mtuxo.UUCP> orsay@mtuxo.UUCP (j.ratsaby) writes:
> In article <538@bu-cs.UUCP> jam@bu-cs.UUCP (Jonathan Marshall) writes:
>>
>> Stephen Grossberg has been publishing on neural networks for 20 years.
>> He pays special attention to designing adaptive neural networks that
>> are self-organizing and mathematically stable. ...
>
> I would like to ask you the following:
> From all the books that you read, was there any machine built or simulation
> that actually learned by adapting its inner structure?
TRW is building a chip called the MARK-IV which implements some of
Grossberg's earlier adaptive neural networks. The chip basically acts
as an adaptive pattern recognizer.
Also, Grossberg's group, the Center for Adaptive Systems, has
simulated some of his parallel learning algorithms in software. In
particular, "masking fields" have been applied to speech-recognition,
the "boundary contour system" has been applied to visual pattern
segmentation, and other networks have been applied to symbolic
pattern-recognition.
> If so, what type of information was learned by the machine, and in what
> quantities? What action was taken to ask the machine to "remember" and
> retrieve information? And finally, where do we stand today; that is, to
> your knowledge, which machine behaves the closest to the
> biological brain?
> I would very much appreciate reading some of your thoughts about the above.
> Thanks in advance. joel Ratsaby !mtuxo!orsay
The network simulations learned to discriminate patterns based on
arbitrary similarity measures. They also performed associative
learning tasks that explain psychological data such as "inverted U,"
"overshadowing," "attentional priming," "speed-accuracy trade-off,"
and more. The networks learned and remembered spatial patterns of
neural activity. The networks then later retrieved the patterns, using
them as "expectation templates" to match with newer patterns. The
degree of match or mismatch determined whether (1) the newer patterns
were represented as instances of the "expected" pattern, or (2) a fast
parallel search was initiated for another matching template, or (3)
the new pattern was allocated its own separate representation as an
unfamiliar pattern.
One of Grossberg's main contributions to learning theory has been the
design of self-organizing associative learning networks. His networks
function more robustly than most other designs because they are
self-scaling (big patterns get processed just as effectively as small
patterns), self-tuning (the networks dynamically adjust their own
capacities to simultaneously prevent saturation and suppress noise),
and self-organizing (learning occurs within the networks to produce
finer or coarser pattern discriminations, as required by experience).
Grossberg's mathematical analyses of "mass-action" systems enabled him
to design networks with these properties.
In addition, his networks are physiologically realistic and unify a
great deal of otherwise fragmented psychological data. Read one or
two of his latest papers to see his claims.
The question of which _machine_ behaves closest to the biological
brain is not yet appropriate. The candidates I know of are all
software simulations, with the possible exception of the TRW Mark-IV,
which is quite limited in capacity. Other schemes, such as Hopfield
nets, are not mass-action (in the technical sense) simulations, and
hence fail to observe certain kinds of local-global tradeoffs that
characterize biological systems.
However, the situation is hopeful today. More AI researchers have
been recognizing the importance of studying biological systems in
detail, to gain intuition and insight for designing adaptive neural
networks.
------------------------------
Date: Sat, 17 May 86 18:43:44 pdt
From: John B. Nagle <jbn@su-glacier.arpa>
Subject: Geometry-oriented AI
There are some ideas worth pursuing here. There is a class of
problems for which solid geometric modeling, rather than predicate
calculus, seems an appropriate underlying model. The hook and ring
problem seems to be of this type. Alex Pentland at SRI has done some
work on concise mathematical representations of the physical universe,
and I suspect that a system that could manipulate objects in Pentland's
representation, calculating interferences and contacts, driven by various
search strategies, would be an appropriate way to attack the hook and
ring problem.
One can dimly imagine a solid geometric modelling system with
approximate representations a la Pentland ("fuzzy solid modelling?")
enhanced by some notions of force, strength of materials, and inertia,
as a base for working on such problems. Unlike the Blocks World and
its successors, where the geometric information was transformed to
expressions in predicate calculus as soon as possible, I'm suggesting
that we stay in the 3D geometric domain and work there. We might even
want to take problems that are not fundamentally geometric and construct
geometric analogues of them so that geometric problem solving techniques
can be applied. (Please, no flames from the right brain/left brain
crowd). Has anyone been down this road yet and actually implemented
something?
Interesting thought: could the new techniques for performing
optimization calculations being developed
by the neural-nets people be applied to the computationally-intensive
tasks in solid geometric modelling? I suspect so; especially if we are
willing to accept approximate answers ("Will the couch fit through the
door?" might return "Can't be sure within a .25 inch error tolerance"),
some of the closed-loop feedback analog techniques proposed may be applicable.
The big bottleneck in solid geometric modelling is usually performing the
interference calculations to decide what is running into what. The
brain is good at this, and probably doesn't do it by number-crunching.
John Nagle
415-856-0767
------------------------------
Date: 19 May 86 09:05:16 GMT
From: brahms!weemba@ucbvax.berkeley.edu (Matthew P. Wiener)
Subject: Re: Conway's game of LIFE
I'm directing followups to net.games only.
A good reference to LIFE:
Berlekamp, Elwyn R ; Conway, John H ; Guy, Richard K
Winning Ways II: Games in Particular
Academic Press 1982
The last chapter is devoted to the proof that LIFE is universal.
The rest of the book is worth reading anyway. You will learn why
E R Berlekamp is the world's greatest Dots-and-Box player, for
example.
A good reference to cellular automata:
Farmer, Doyne ; Toffoli, Tommaso ; Wolfram, Stephen ; (editors)
Cellular Automata: Proceedings
North-Holland 1984
The latter is a reprint of Physica D Volume 10D (1984) Nos 1&2.
Mostly technical, with interest in physical applications, but the
article by Gosper on how to compute LIFE at high speed is quite
intriguing and readable.
Also, Martin Gardner occasionally had an update after his original
article. His newest book, "Wheels, Life, and Other Mathematical
Amusements", reprints the latest.
ucbvax!brahms!weemba Matthew P Wiener/UCB Math Dept/Berkeley CA 94720
------------------------------
Date: Thu, 15 May 86 16:56:12 MDT
From: shebs@utah-cs.arpa (Stanley Shebs)
Subject: PCLS Common Lisp Available
This is to announce the availability of the Portable Common Lisp Subset
(PCLS), a Common Lisp subset developed at the University of Utah which
runs in Portable Standard Lisp (PSL).
PCLS is a large subset which implements about 550 of the 620+ Common Lisp
functions. It lacks lexical closures, ratios, and complex numbers. Streams
and characters are actually small integers, some of the special forms
are missing, and a number of functions (such as FORMAT) lack many of the
more esoteric options. PCLS does include a fully working package system,
multiple values, lambda keywords, lexical scoping, and most data types
(including hash tables, arrays, structures, pathnames, and random states).
The PCLS compiler is the PSL compiler which produces very efficient code,
augmented by a frontend that does a number of optimizations specific to
Common Lisp. Gabriel benchmarks and others show that PCLS programs can
be made to run as fast as their PSL counterparts - almost all uses of
lambda keywords are optimized away, and a type declaration/inference
optimizer replaces many function calls with efficient PSL equivalents.
PCLS has been used at Utah and elsewhere for about 6 months, and a number
of programs have been ported both to and from PCLS and other Common Lisps.
PCLS is being distributed along with an updated version of PSL (3.2a).
We require that you sign a site license agreement. The distribution fee
is $250 US for nonprofit institutions, plus a $750 license fee for
corporations. Full sources to both PSL and PCLS are included along with
documentation on the internals and externals of the system. At present,
we are distributing PCLS for 4.2/4.3 BSD Vax Un*x and for Vax VMS.
Releases for Apollo and Sun are anticipated soon, and versions for other
PSL implementations are likely. If interested, send your USnail address to:
Loretta Cruse
Computer Science Department, 3160 MEB
University of Utah
Salt Lake City UT 84112
cruse@utah-20.ARPA {seismo, ihnp4, decvax}!utah-cs!cruse.UUCP
Technical questions about PCLS, flames about absence of closures, etc
may be directed to shebs@utah-cs.ARPA, loosemore@utah-20.ARPA, or
kessler@utah-cs.ARPA.
------------------------------
End of AIList Digest
********************
∂23-May-86 1741 LAWS@SRI-AI.ARPA AIList Digest V4 #126
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 May 86 17:13:44 PDT
Date: Tue 20 May 1986 22:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #126
To: AIList@SRI-AI
AIList Digest Wednesday, 21 May 1986 Volume 4 : Issue 126
Today's Topics:
Opinion - AI Conference Size,
Seminars - Knowledge-Based Development of Software Systems (SU) &
Decision-Theoretic Heuristic Planning (SU) &
NanoComputers and Molecular Engineering (Xerox PARC) &
Palladio Exploratory Environment for Circuit Design (SU),
Conference - IJCAI-89 Site Selection and Officer Election &
Workshop on High-Level Tools
----------------------------------------------------------------------
Date: Tue 20 May 86 13:56:33-PDT
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: On the growing size of AI conferences
What's all this fuss about organizing AAAI into separate science and
engineering tracks to try to deal with the size of the conference? We
can hold the size down much more effectively by simply holding it and
IJCAI in Detroit every year.
------------------------------
Date: Wed 14 May 86 09:23:24-PDT
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Knowledge-Based Development of Software Systems (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Knowledge-Based Development of Software Systems
Speakers: Lawrence Markosian
Douglas Smith
From: Reasoning Systems and Kestrel Institute
Date: Wednesday, May 14, 1986
Time: 4:00 - 5:30
Place: Terman 556
In the first part of the talk a knowledge-based approach
to the development of software systems is presented. In this approach
specifications written in a very-high-level, wide-spectrum language
are refined via transformations into efficient programs. Several
complex transformations for decomposing and refining specifications
are illustrated with examples.
In the second part of the talk an area of applied research - the
derivation of specifications from requirements - will be discussed. A
case study in the requirements, specification, design and synthesis of
a simple communications system is presented. In the case study only
the step from specification to program is automated. It is then suggested
how the same technology used in automating that step can be used to
automate the derivation of the specification from requirements.
Visitors welcome!
------------------------------
Date: Fri 16 May 86 18:48:43-PDT
From: Larry Fagan <FAGAN@SUMEX-AIM.ARPA>
Subject: Seminar - Decision-Theoretic Heuristic Planning (SU)
A Decision-Theoretic Approach to Explaining Heuristic Planning
Curtis P. Langlotz
PhD Oral Exam
Medical Information Sciences
Stanford University
Thursday, May 22, 1:15 PM
Medical Center M-112
Many important planning problems are characterized by uncertainty
about the current situation and by uncertainty about the consequences
of future action. These problems also inevitably involve tradeoffs
between the costs and benefits associated with possible actions.
Decision theory is an extensively studied methodology for reasoning
under these conditions, but has not been explicitly and satisfactorily
integrated with artificial intelligence approaches to planning.
Likewise, many perceived practical limitations of decision theory,
such as problem solving results that are difficult to explain and
computational needs that are difficult to satisfy, can be overcome
through the use of artificial intelligence techniques. This thesis
explores the combination of decision-theoretic and artificial
intelligence approaches to planning, and shows that this combination
allows better explanation of planning decisions than either one alone.
In addition, the explicit representation of probabilities and
utilities allows flexibility in the construction of a planning system.
This means that assumptions made by such systems, which may be
critical for their performance, are more easily modified than in a
system that does not explicitly represent uncertainties and tradeoffs.
------------------------------
Date: 19 May 86 11:00 PDT
From: DMRussell.pa@Xerox.COM
Reply-to: DMRussell.pa@Xerox.COM
Subject: Seminar - NanoComputers and Molecular Engineering (Xerox PARC)
PARC Forum
May 22, 1986
3:45PM, PARC Auditorium
K. Eric Drexler
Research Affiliate, MIT Space Sciences Laboratory
NanoComputers and Molecular Engineering
The broad outlines of future technology will be set by the limits of
physical law, if we can develop means for approaching those limits.
Today, because we cannot directly manipulate atomic structures, we can
make no more than a fraction of the physical structures allowed.
Advances in biotechnology and computational chemistry are opening paths
to the development of molecular assemblers able to construct complex
atomic objects, making possible dramatic advances in the field. Among
these advances will be nanocomputers with parts of molecular size.
Mechanical nanocomputers are amenable to design and analysis with
available techniques. This technology promises sub-micron computers
with giga-hertz clock rates, nanowatt power dissipation, and RAM storage
densities in the hundreds of millions of terabytes per cubic centimeter.
This Forum is OPEN. All are invited.
Host: Dan Russell (Intelligent Systems Lab, 494-4308)
Refreshments will be served by the Ad Hoc Collective of Persons
Interested in Social Interchange (AHCPISI) at 3:30 pm.
------------------------------
Date: Tue 20 May 86 09:30:55-PDT
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Palladio Exploratory Environment for Circuit Design (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Palladio: An Exploratory Environment for Circuit Design
Speakers: Harold Brown
From: Knowledge Systems Lab, Stanford University
Date: Wednesday, May 21, 1986
Time: 4:00 - 5:30
Place: Terman 556
The Palladio system was an early (1980-82) attempt to apply artificial
intelligence techniques to the design of electronic circuits. Palladio
was an exploratory environment for experimenting with circuit and system
design representations, design methodologies, and knowledge-based design
and analysis aids. It differed from other prototype design environments
in that it provided mechanisms for constructing, testing and incrementally
modifying or augmenting design languages and design tools.
Palladio had facilities for conveniently defining models of circuit or
system structure and behavior. These models, called perspectives, were
similar to design levels in that the designer could use them to interactively
create and refine design specifications. Palladio provided an interactive
graphics interface for displaying and editing structural perspectives of
circuits or systems in a uniform, perspective-independent manner. A
declarative, temporal logic behavioral language with an associated
interactive behavior editor was used to specify designs from a behavioral
perspective. Further, a generic, event-driven symbolic simulator could
simulate and verify the behavior of a specified circuit or system from any
behavioral perspective and could perform hierarchical and mixed-perspective
simulations. Several experimental expert system design refinement and
analysis aids were implemented using the Palladio environment, for example,
a system which assigned mask levels to the interconnect in an NMOS circuit
which took into account the electrical characteristics of the levels as
well as design goals.
In this talk Prof. Brown will describe the Palladio system, its implementation
and some of the lessons learned about knowledge-based systems for
engineering tasks.
Visitors welcome!
------------------------------
Date: Fri, 16 May 86 00:29:31 edt
From: walker@mouton.bellcore.com (Don Walker)
Subject: Conference - IJCAI-89 Site Selection and Officer Election
IJCAI-89 Site Selection and Officer Election
The Trustees of the International Joint Conferences on Artificial
Intelligence, Inc. are pleased to announce that IJCAI-89 will be held
20-26 August 1989 in Detroit, Michigan, USA. Wolfgang Bibel, Technical
University of Munich, has been elected Conference Chair; Sri
Sridharan, BBN Laboratories, has been elected Program Chair; and Sam
Uthurusamy of General Motors Research Laboratories has been appointed
to chair the Local Arrangements Committee. Don Walker, Bell
Communications Research, the IJCAII Secretary-Treasurer, will also
serve as Secretary-Treasurer for the conference.
IJCAI-89 will be cosponsored by the American Association for Artificial
Intelligence. All conference activities will be coordinated through
the AAAI Office by Claudia Mazzetti, Executive Director of the AAAI,
who will provide direct support for the IJCAI-89 Conference Committee.
In accordance with customary practice for IJCAI conferences held in
North America, the AAAI will also arrange the tutorial and exhibit
programs at the meeting.
For further information, contact one of the following:
Wolfgang Bibel (IJCAI-89)
Institut fuer Informatik
Technische Universitaet Muenchen
Postfach 202420
D-8000 Muenchen 2, West Germany
Telephone: (49-89)2105-2031
bibel%germany.csnet@csnet-relay
N. S. Sridharan (IJCAI-89)
BBN Laboratories
10 Moulton Street
Cambridge, MA 02238
Telephone: (1-617)497-3366
sridharan@bbng.arpa
R. Uthurusamy (IJCAI-89)
Computer Science Department
General Motors Research Laboratories
Warren, MI 48090, USA
Telephone: (1-313)575-3177
samy%gmr.csnet@csnet-relay
Donald E. Walker (IJCAI-89)
Bell Communications Research
445 South Street MRE 2A379
Morristown, NJ 07960, USA
Telephone: (1-201)829-4312
walker@mouton.arpa
Claudia Mazzetti (IJCAI-89)
AAAI Headquarters
445 Burgess Drive
Menlo Park, CA 94025
Telephone: (1-415)328-3123
aaai-office@sumex-aim.arpa
------------------------------
Date: Mon 19 May 86 12:18:30-EDT
From: Arun <Welch%OSU-20@ohio-state.ARPA>
Subject: Conference - Workshop on High-Level Tools
Call for Participation
WORKSHOP ON
HIGH LEVEL TOOLS FOR KNOWLEDGE BASED SYSTEMS
Sponsored by
The American Association for Artificial Intelligence (AAAI)
Laboratory for Artificial Intelligence Research
The Ohio State University (OSU-LAIR)
Defense Advanced Research Projects Agency (DARPA)
Columbus, Ohio
October 7-8, 1986
It has become increasingly clear to builders of knowledge based systems that no
single representational formalism or control construct is optimal for encoding
the wide variety of types of problem solving that commonly arise and are of
practical significance. The structures specific to diagnosis appear ill
adapted for use in design and planning tasks, and those for prediction seem
unsuitable for intelligent data retrieval. Thus there appears to be a need for
task-specific constructs at levels of organization above those of rules,
frames, and predicate calculus, and their associated control structures. In
addition to problem solving, there is a similar move toward higher-level
tools for knowledge acquisition and explanation.
The objective of this workshop is to bring together theoreticians and builders
of knowledge based systems in order to explore the prospects for tools for
specifying structures at these higher levels. Presentations are invited on all
aspects of high level tools for knowledge-based systems, including (but not
restricted to) these topics:
- The powers and limitations of existing knowledge engineering tools
and techniques.
- Delineating the "natural kinds" of knowledge based problem solving
that can provide the basis for task specific tools.
- Matching AI techniques to tasks.
- Design proposals for high level knowledge engineering tools.
- Integrating task-specific tools into "toolboxes" for building systems
that perform complex problem solving tasks.
Four copies of an extended ABSTRACT (up to 8 pages, double-spaced) should be
sent to the workshop chairman before July 1, 1986. Acceptance notices will be
mailed by August 1. Revised abstracts should be returned to the chairman by
September 1, 1986, so that they may be bound together for distribution at the
workshop.
Workshop Chairman: Organizing Committee:
B. Chandrasekaran, William J. Clancey, Stanford University
OSU-LAIR Lee Erman, Teknowledge Inc.
Richard Fikes, Intellicorp
John Josephson, OSU-LAIR
Allen Sears, DARPA
For information and local arrangements, contact:
Charlie Huff Bev Mullet
(614) 422-0054 (614) 422-0248
EMail: Huff@Ohio-State.ARPA EMail: Mullet-B@Ohio-State.ARPA
Huff-C@Ohio-State.CSNET
{ihnp4,cbosgd}!osu-eddie!huff
Department of Computer and Information Science
The Ohio State University
2036 Neil Avenue
Columbus, OH 43210
------------------------------
End of AIList Digest
********************
∂23-May-86 2122 LAWS@SRI-AI.ARPA AIList Digest V4 #127
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 May 86 21:00:39 PDT
Date: Tue 20 May 1986 22:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #127
To: AIList@SRI-AI
AIList Digest Wednesday, 21 May 1986 Volume 4 : Issue 127
Today's Topics:
Queries - Expert Systems and Engineering Design & TMS Variables,
Games & Literature - Conway's Game of LIFE,
AI Tools - VAX VMS LISP & LISP Utilities &
AI, Graphics, and Simulation,
Expert Systems - ADL Personal Financial Planner,
Perception - Correction to Response to Nevin,
Literature - Research Indexes: Machine Learning 3,
Discussion Lists - Psychnet
----------------------------------------------------------------------
Date: 20 May 86 17:43:41 GMT
From: ernie.Berkeley.EDU!mazlack@ucbvax.berkeley.edu (Lawrence J. Mazlack)
Subject: Re: ES and ENGINEERING DESIGN
I am in the definition stage of my dissertation. I need help in identifying
what is being done and what has been done. So, any pointers or information
would be greatly appreciated.
My general area is: ENGINEERING DESIGN AND EXPERT SYSTEMS
Within that, I am interested in: ANALOG CIRCUIT DESIGN
Thank you,
Mojgan Samadar
pelejhn@ucccvm1.bitnet
508 Riddle Road, #46
Cincinnati, Ohio 45220
------------------------------
Date: Tue, 20 May 86 16:18:54 CDT
From: mklein@b.CS.UIUC.EDU (Mark Klein)
Subject: TMS Variables
I have been trying to decide whether or not to change some code I have (a
pattern-directed inference engine, justification-, logic-, and assumption-
based TMS's) to handle variables in the data. Variables do seem to add
more flexibility (specifically, universal and existential quantification, I
guess), but have some problems as well - for example, one can no longer
use open-coded unification to match up assertions to rule triggers (I think).
What I'd like to know is how easy or hard it is to have variables
in data for each kind of TMS (it seems to be simple for
the justification-based TMS). Is there any advantage to having variables in
data that you can't get simply by writing the appropriate rules (e.g., in
terms of efficiency, expressiveness, or ability to control inference)?
Thanks,
Mark Klein
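The "open-coded unification" mentioned above reduces, for ground data, to
one-way pattern matching of a rule trigger against an assertion. A minimal
sketch in Python (the '?x' variable syntax and tuple-shaped triggers are
illustrative inventions, not any particular TMS's format):

```python
def match(pattern, datum, bindings=None):
    """One-way match of a pattern (variables are strings starting with '?')
    against a ground datum; returns a bindings dict, or None on failure."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        # A variable either agrees with its earlier binding or gets bound now.
        if pattern in bindings:
            return bindings if bindings[pattern] == datum else None
        new = dict(bindings)
        new[pattern] = datum
        return new
    if isinstance(pattern, tuple) and isinstance(datum, tuple):
        if len(pattern) != len(datum):
            return None
        for p, d in zip(pattern, datum):
            bindings = match(p, d, bindings)
            if bindings is None:
                return None
        return bindings
    # Constants must match exactly.
    return bindings if pattern == datum else None

print(match(('bird', '?x'), ('bird', 'tweety')))  # {'?x': 'tweety'}
```

Once variables can appear in the data as well, this one-way matcher no longer
suffices and full two-way unification is needed; that extra machinery is the
cost the query is weighing.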
------------------------------
Date: 20 May 86 18:43:12 EDT
From: kyle.wbst@Xerox.COM
Subject: re:Conway's game of LIFE
In AIList Digest V.4 #125, Steve Marsh requested info on any book ("not
too theoretical") on this subject.
I recommend The Recursive Universe, by William Poundstone (Morrow &
Co.), c.1985. It is a delightful non-mathematical treatment of
information theory, entropy relationships and other related issues
including a discussion of Conway's game of LIFE. It also includes a
listing of the game for use on an IBM PC (or equivalent).
Earle Kyle.
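For readers who want to experiment right away, the rules Poundstone describes
(a live cell with two or three live neighbors survives; a dead cell with
exactly three live neighbors is born) fit in a few lines. A sketch in Python
rather than the book's BASIC:

```python
from collections import Counter

def step(live):
    """One generation of Conway's LIFE; `live` is a set of (x, y) cells."""
    # For every cell adjacent to a live cell, count its live neighbors.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))                   # the vertical blinker {(1, -1), (1, 0), (1, 1)}
print(step(step(blinker)) == blinker)  # True
```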
------------------------------
Date: Tue, 20 May 86 09:44:24 EDT
From: jbs@EDDIE.MIT.EDU (Jeff Siegal)
Reply-to: jbs@mit-eddie.UUCP (Jeff Siegal)
Subject: Re: VAX VMS LisP
In article <8605200626.AA27708@ucbvax.Berkeley.EDU> larry@JPL-VLSI.ARPA writes:
>Are there any Common LisPs for the VAX under VMS? (DEC's VAX LisP is an
>Ultrix product only, so far as I know.) ...
You have things backwards. VAX LISP is a VMS-only product.
Jeff Siegal
------------------------------
Date: Wed, 14 May 86 21:21:07 edt
From: beer%case.csnet@CSNET-RELAY.ARPA
Subject: VAX LISP Utilities
Here at the Center for Automation and Intelligent Systems Research (Case
Western Reserve University), we have developed a number of tools and utilities
for VAX LISP. They include extensions to the control structure and string
primitives, a simple pattern matcher, a pattern-based APROPOS facility, a
pattern-based top-level history mechanism, an extensible top-level driver,
an extensible top-level command facility, an extensible DESCRIBE facility,
and an implementation of Flavors. These facilities are described in detail
in a technical report, "CAISR VAX LISP Tools and Utilities" (TR-106-86).
The object code for these facilities is in the public domain. A tape
containing them may be requested by sending me mail at the address below.
There is a $35 charge to cover the cost of the tape and shipping and handling
costs. Currently, these facilities assume a VMS operating system environment
and require Version 2.0 of VAX LISP.
Randall D. Beer
(beer%case@CSNet-Relay.ARPA)
Center for Automation and Intelligent Systems Research
Case Western Reserve University
Glennan Bldg., Room 312
Cleveland, OH 44106
------------------------------
Date: Thu, 15 May 86 10:19 EDT
From: Paul Fishwick <Fishwick%upenn.csnet@CSNET-RELAY.ARPA>
Subject: AI, Graphics & Simulation
>> Date: Mon, 12 May 86 12:19:46 est
>> From: munnari!csadfa.cs.adfa.oz!gyp@seismo.CSS.GOV (Patrick Tang)
>> Subject: Graphics, Artificial Intelligence and Simulation
>>
>> Has anyone out there come across any literature
>> describing the topics Graphics, Artificial Intelligence and
>> Simulation together? It seems to me that literature on
>> these combined topics is VERY VERY scarce!!!
>>
There are a number of projects that have incorporated graphics, ai, and
simulation. Perhaps the largest project has been the STEAMER project which
incorporates layers of object abstractions for a steam plant. At Penn, we have
a multi-level simulation system (HIRES) that permits the construction and
interactive control of process abstraction layers. We also have a facial
animation system (OASIS) that incorporates local area expression simulations.
Both HIRES and OASIS utilize the Iris Workstation 2400 (Silicon Graphics, Inc.)
for real-time animation. Rand Corporation (Santa Monica, CA) has been involved
with object oriented simulation for quite some time (I think they have an
example graphical simulation of a battle scenario). You should also check out
the commercial enterprise, Pritsker Associates, who sell a graphical simulation
package. Some references are given:
1. Hollan James, Hutchins Edwin, Weitzman Louis - "STEAMER: An
Interactive Inspectable Simulation-Based Training System", AI
Magazine (Summer 1984).
2. Fishwick, Paul - "Hierarchical Reasoning: Simulating Complex
Processes over Multiple Levels of Abstraction", Ph.D. Thesis, Univ.
of Pennsylvania, 1986 (MIS-CS-85-21).
3. Platt, Steve - "A Structural Model of the Human Face", Ph.D. Thesis,
Univ. of Pennsylvania, 1985.
4. McArthur, David and Sowizral, Henry - "An Object-Oriented Language
for Constructing Simulations", IJCAI 1981.
An important issue with "AI and Simulation" is determining where the "ai" is in
simulation. The answer to that may best be found in the special workshop in AI
and Simulation to be held at AAAI-86. Even though graphics is not explicitly
mentioned, you should also check out the qualitative reasoning/simulation
literature (de Kleer, Forbus, Kuipers, and others) in past IJCAI/AAAI's. Also
"aggregation" is receiving wider attention these days: look at Goldin & Klahr
(IJCAI '81) and Weld (IJCAI '85).
-paul
CSNET: fishwick@upenn
------------------------------
Date: Mon, 19 May 86 19:08 EDT
From: Tom Martin <TJMartin@MIT-MULTICS.ARPA>
Subject: Correction to Spang-Robinson Summary
A message appeared in the AI List a few weeks ago that summarized the
Spang-Robinson report of April, 1985 -- Volume 2, No. 4
The Arthur D. Little personal financial planner, according to the
summary, "uses databases residing on IBM mainframe." In fact, what we've
done is develop a serial link that connects the Symbolics to an IBM 43xx
running VM/CMS via the IBM 3705 communications controller.
The point worth expanding on is that in this IBM/Symbolics symbiosis,
the IBM is in charge, not the Symbolics. Yes, the planner "uses
databases on the IBM mainframe," but only because the IBM user has (a)
requested it, and (b) has the right access.
The system uses the "virtual machine" concept of VM/CMS heavily. In
effect, the Symbolics appears to be a separate virtual machine running
on the 43xx. It responds to commands in real time (instead of
uploading/downloading files); however, the commands have to be
pre-defined or rely on the EVAL on the Symbolics to return a meaningful
answer.
Neither the PFPS nor this link is a product of Arthur D. Little, Inc.,
as such. I hope to get a chance to talk about the IBM link at SLUG in a
few weeks.
Tom Martin Manager -- AI Systems Development
------------------------------
Date: Sat, 17 May 86 15:53:31 PDT
From: kube%cogsci@BERKELEY.EDU (Paul Kube)
Subject: correction to response to Nevin, AIList V4 #120
Instead of
Since the standard deviation of the difference of two
identically normally distributed random variables is twice the
standard deviation of either variable, the temporal disparity
measurement has 95% confidence interval of 4*10↑-6 seconds.
I should have said
Since the standard deviation of the difference of two
identically normally distributed random variables is less than
twice the standard deviation of either variable, the
temporal disparity measurement has 95% confidence interval
of less than 4*10↑-6 seconds.
The argument for getting to 4*10↑-7 second accuracy still holds.
Paul Kube
Berkeley EECS
kube@berkeley.edu
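For the record, the exact factor under the quoted assumptions, if the two
measurements are also independent (not stated explicitly in the messages),
is sqrt(2) ≈ 1.414, comfortably under 2. A quick Monte Carlo check in Python:

```python
import math
import random

random.seed(0)
sigma = 1.0   # standard deviation of each measurement (units arbitrary)
n = 200000
# Differences of two independent, identically distributed normal variables.
diffs = [random.gauss(0, sigma) - random.gauss(0, sigma) for _ in range(n)]
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / n)
print(sd)  # close to sqrt(2) = 1.414..., and less than 2
```

The factor of 2 is the limiting case of perfectly anticorrelated errors; any
correlation short of that gives a smaller factor, which is why the "less than
twice" correction holds in general.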
------------------------------
Date: Fri, 16 May 86 13:29:08 EDT
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: bad enough theory
Response to kube%cogsci@berkeley.edu (Paul Kube)
I suppose we ought to tell the authors of that fish story, and the editors
of Nature, that their experimental results have a trivial interpretation
in elementary signal detection theory. They were all struck by the
`impressive computational ability' of the central nervous system in these
animals. I'll leave it to you to disabuse them of that.
Your account succeeds precisely because it is not reductionist.
Thanks for clarifying what is going on there.
------------------------------
Date: 18 May 86 13:25 PDT
From: Shrager.pa@Xerox.COM
Subject: Research indexes: Machine Learning 3
Conference proceedings are often published as books for which the
authors have not taken the time to construct a proper index. The index
for the most recent machine learning book (ML3) is extremely poor. In
a book containing 77 articles and claiming to be a "guide to current
research", the index is clearly the most important part of the book. It
is a shame that this is otherwise a valuable source of summary articles,
because its success will lead the publisher and authors to believe that
they can sell poorly indexed books.
Although ML3 is a summary of current research, it is surely no guide. If
one's research is actually guided by consideration of the index to ML3,
then the authors and publisher have done a disservice to the field.
In this particular case, I am surprised that this book wasn't properly
indexed. The authors were asked to submit SCRIBE source for their
papers and could easily have been asked to also include index entries
either in the source, or selected from a list of keywords.
I encourage those who feel that they must buy ML3 to ignore the index,
and to write a letter to the authors and publisher complaining
about its index. The 22 cents spent on stamps, and the hour spent
writing, are nominal with respect to the service you may be doing
yourself and the field in the future.
-- Jeff
------------------------------
Date: Mon, 12 May 86 21:58:25 pdt
From: George Cross <cross%wsu.csnet@CSNET-RELAY.ARPA>
Subject: Psychnet
[Ken - I think this may be of interest to some portion of the AI community.
It was on the Weizmann Bulletin Board]
PSYCHNET
EDUCATIONAL PSYCHOLOGY DEPARTMENT
UNIVERSITY OF HOUSTON - UNIVERSITY PARK
4800 CALHOUN BOULEVARD
HOUSTON, TEXAS 77004
For More Information:
Norman Kagan, Ph.D., Distinguished Professor, or
Robert Morecock, M.A., Graduate Assistant
713-749-7621
EPSYNET@UHUPVM1.BITNET
March 21, 1986
Selected professional papers scheduled for presentation at the
August APA convention soon will be available worldwide via
PSYCHNET, the new electronic bulletin board and mail machine at
the University of Houston.
Dr. Norman Kagan, distinguished professor and chair of the
Educational Psychology Department there said, "PSYCHNET will
enable psychologists to arrive with papers in hand, having read
and processed the presenters' ideas months in advance of the
convention. They will be ready to discuss these ideas rather than
just assimilate them at our annual meeting."
PSYCHNET, which will eventually serve the psychological
community in a number of ways, is distributed by BITNET, the
electronic mail network that links much of the academic world.
"This year PSYCHNET will be sending out papers for Divisions
12, 16, 17 and 38 to anyone on the BITNET system who requests
them. Next year we hope that even more divisions will
participate," continued Dr. Kagan, "and we are already encouraging
people to contact us via BITNET because PSYCHNET is up and
running."
"Most requests for papers will be automatically acknowledged
within five seconds," said Dr. Kagan. Actual arrival time of
requested papers will vary from five seconds to twenty minutes,
depending on how busy the mail system is. "We are hoping people
will go ahead and give PSYCHNET a try now, rather than wait till
the last minute."
For most computer sites with VM operating systems the command
TELL UH-INFO AT UHUPVM1 PSYCHNET HELP will start the process of
requesting PSYCHNET papers. Many VAX sites (JNET) will find the
command
SEND UH-INFO@UHUPVM1 PSYCHNET HELP
will obtain the same information. Others should consult
their local computer center regarding sending the message PSYCHNET
HELP to userid UH-INFO at node UHUPVM1.
Via leased lines BITNET makes PSYCHNET available at some 1203
nodes or university computer sites in the free world. Locations
range from Israel in the Middle East, to Europe, the United States
and Canada, then west as far as Tokyo in Japan.
Users of VM (BITNET) systems can obtain the PSYCHNET
exec, which vastly simplifies using PSYCHNET. Just give the command
TELL UH-INFO AT UHUPVM1 PSYCHNET SENDME PSYCHNET EXEC
When it arrives, move it from your reader to your file area and then
give the command PSYCHNET. Your PSYCHNET use will now be automatic.
------------------------------
End of AIList Digest
********************
∂27-May-86 0212 LAWS@SRI-AI.ARPA AIList Digest V4 #128
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 May 86 02:12:17 PDT
Date: Mon 26 May 1986 23:29-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #128
To: AIList@SRI-AI
AIList Digest Tuesday, 27 May 1986 Volume 4 : Issue 128
Today's Topics:
Seminars - Intelligent Systems on Multiprocessors (CMU) &
Information Retrieval by Text Skimming (CMU) &
Theory of Nested Transactions (CMU) &
Intuitionist Logic & Constructive Type Theory (MIT) &
Learning to Construct Abstractions (MIT) &
Non-monotonicity in Probabilistic Logic (SU) &
Autoepistemic Logic, Stratified Programs, Circumscription (SU) &
Sequentialising Logic Programs (SU)
----------------------------------------------------------------------
Date: 13 May 1986 1533-EDT
From: Theona Stefanis@A.CS.CMU.EDU
Subject: Seminar - Intelligent Systems on Multiprocessors (CMU)
PS SEMINAR
Name: R. Bisiani, CMU
Date: Monday, 19 May
Time: 3:30
Place: WeH 5409
DEVELOPING INTELLIGENT SYSTEMS ON MULTIPROCESSOR ARCHITECTURES
Our long-term goal is to develop a software environment that meets the needs
of application specialists who build and evaluate heterogeneous AI
applications quickly and efficiently. Speech and vision systems are typical
of this kind of AI application. In these systems, @i(knowledge-intensive)
and conventional programming techniques must be integrated while observing
real time constraints and preserving good programmability characteristics.
State-of-the-art AI environments solve some but not all of the problems
raised by the systems we are interested in. Therefore, we are developing a
set of tools, methodologies and architectures called Agora that can be used
to implement custom programming environments. Agora can be customized to
support the programming model that is most suitable for a given application.
Agora has been designed explicitly to support multiple languages and highly
parallel computations. Systems built with Agora can be executed on a number
of general purpose and custom multiprocessor architectures.
------------------------------
Date: 19 May 86 15:18:26 EDT
From: Michael.Mauldin@cad.cs.cmu.edu
Subject: Seminar - Information Retrieval by Text Skimming (CMU)
What: Thesis Proposal: Information Retrieval By Text Skimming
Who: Michael L. Mauldin (MLM@CAD)
When: May 29, 1986 At 3pm
Where: In Wean Hall 5409
Most information retrieval systems today are word based. But simple word
searches and frequency distributions do not provide these systems with an
understanding of their texts. Full natural language parsers are capable of
deep understanding within limited domains, but are too brittle and slow for
general information retrieval.
The proposed dissertation attempts to bridge this gap by using a text skimming
parser as the basis for an information retrieval system that partially
understands the texts stored in it. The objective is to develop a system
capable of retrieving a significantly greater fraction of relevant documents
than is possible with a keyword based approach, without retrieving a larger
fraction of irrelevant documents. As part of my dissertation, I will
implement a full-text information retrieval system called FERRET (Flexible
Expert Retrieval of Relevant English Texts). FERRET will provide information
retrieval for the UseNet News system, a collection of 247 news groups covering
a wide variety of topics. Initially FERRET will cover NET.ASTRO, the
Astronomy news group, and part of my investigation will be to demonstrate the
addition of new domains with only minimal hand coding of domain knowledge.
FERRET will acquire the details of a domain automatically using a script
learning component.
FERRET will consist of a text skimming parser (based on DeJong's FRUMP
program), a case frame matcher that compares the parse of the user's query
with the parses of each text stored in the retrieval system, and a user
interface. The parser relies on two knowledge sources for its operation: the
sketchy script database, which encodes domain knowledge, and the lexicon. The
lexicon from FRUMP will be extended both by hand and automatically with syntax
and synonym information from an on-line English dictionary. The script
database from FRUMP will be extended both by hand and automatically by a
learning component that generates new scripts based on texts that have been
parsed. The learning component will evaluate the new scripts using feedback
from the user, and retain the best performers for future use.
The resulting information retrieval system will be evaluated by determining
its performance on queries of the UseNet database, both in terms of relevant
texts not retrieved and irrelevant texts that are retrieved. Over six million
characters appear on UseNet each week, so there should be enough data to study
performance on a large database.
The main contribution of the work will be a demonstration that a text skimming
retrieval system can make distinctions, based on semantic roles and information,
that word-based systems cannot. The script learning and dictionary
access are new capabilities that will be widely useful in other natural
language applications.
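To illustrate the kind of distinction FERRET is after: a keyword query cannot
separate "dog bites man" from "man bites dog", while a match on semantic roles
can. The case frames below are invented for illustration; FRUMP's and FERRET's
actual structures differ:

```python
def keyword_match(query_words, text_words):
    """Keyword retrieval: every query word must occur somewhere in the text."""
    return set(query_words) <= set(text_words)

def frame_match(query_frame, text_frame):
    """Case-frame retrieval: every role the query fills must agree."""
    return all(text_frame.get(role) == filler
               for role, filler in query_frame.items())

# Two texts with the same words but different semantic roles.
doc1 = {'action': 'bite', 'agent': 'dog', 'object': 'man'}
doc2 = {'action': 'bite', 'agent': 'man', 'object': 'dog'}
query = {'action': 'bite', 'agent': 'dog'}

print(keyword_match(['dog', 'bite', 'man'], ['man', 'bite', 'dog']))  # True: words alone can't tell them apart
print(frame_match(query, doc1), frame_match(query, doc2))             # True False
```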
------------------------------
Date: 19 May 1986 1619-EDT
From: Theona Stefanis@A.CS.CMU.EDU
Subject: Seminar - Theory of Nested Transactions (CMU)
PS SEMINAR
Date: Thursday, 22 May
Time: 3:30
Place: WeH 7220
Prolegomenon to the Theory of Nested Transactions
Michael Merritt
A. T. and T. Bell Laboratories
Murray Hill, New Jersey
"The possibility of a thing can never be proved merely from the
fact that its concept is not self-contradictory, but only through
its being supported by some corresponding intuition." Immanuel Kant
This talk develops the foundation for a general theory of nested
transactions. Not without trepidation, it presents yet another formal
model for studying concurrency and resiliency in a nested environment.
This model has distinct advantages over the many alternatives, the
greatest of which is the unification of a subject replete with
formalisms, correctness conditions and proof techniques. The speaker
is presently engaged in an ambitious project to recast the
substantial amount of work in nested transactions within this single
intuitive framework. The talk focuses on preliminary results
of that project--a description of the model, and its use in stating
and proving correctness conditions of a lock-based concurrent scheduler.
This is joint work with Nancy Lynch, of the
Massachusetts Institute of Technology.
------------------------------
Date: Fri 9 May 86 09:49:15-EDT
From: Susan Hardy <SH@XX.LCS.MIT.EDU>
Subject: Seminar - Intuitionist Logic & Constructive Type Theory (MIT)
Friday, May 9, 1986
TALK 1: 10:00 a.m., TALK 2: 2:00 p.m.
2nd Floor Lounge
David Turner
University of Kent at Canterbury, England
TALK 1: Intuitionist Logic and Functional Programming
Intuitionism is a heretical school of mathematics founded by L.
E. J. Brouwer in 1907. The most outstanding characteristic of
intuitionists is that they reject the use of Boolean logic. Recent
discoveries have shown that there is a deep underlying connection
between intuitionist logic and functional programming. This discovery
is likely to have profound consequences for the future of both
subjects. The talk will attempt to explain from the beginning what
intuitionist logic is about and how the coincidence with functional
programming arises.
TALK 2: Constructive Type Theory as a Programming Language
Constructive type theory is a formal logic and set theory which has
been developed by Per Martin-Lof as a foundation for constructive (or
intuitionist) mathematics. Curiously, it can also be read as a
(strongly typed) functional programming language, with a number of
unusual properties, including that all well typed programs terminate.
The talk will give an overview of the main ideas in constructive type
theory from the point of view of someone trying to use it as a
programming language.
HOST: Professors Arvind and Rishiyur S. Nikhil
------------------------------
Date: Wed, 14 May 1986 11:16 EDT
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Learning to Construct Abstractions (MIT)
-- AI Revolving Seminar --
LEARNING TO CONSTRUCT ABSTRACTIONS
Rick Lathrop
MIT AI Lab
One useful trait of an intelligent agent is to construct higher-level
abstractions from a mass of detailed low-level information. This talk
will explore one way an agent might be taught how to construct such
abstractions, and why it might be useful or interesting for an agent
to do so. A main motivation is the possibility of using these
abstractions to see similarities (between situations) that are
obscured by the mass of irrelevant details at the lower level.
Preliminary examples from the Rieger (causal) mechanism world, VLSI
circuit analysis, and protein structure analysis will be discussed.
Thursday, May 15, 4pm
NE-43, 8th floor playroom
------------------------------
Date: 13 May 86 1511 PDT
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Non-monotonicity in Probabilistic Logic (SU)
NON-MONOTONICITY IN PROBABILISTIC LOGIC
Benjamin Grosof
Computer Science Department, Stanford University
Thursday, May 15, 4pm
MJH 252
I will discuss how to formalize the notion of non-monotonicity in
probabilistic reasoning, using the framework of Probabilistic Logic
(cf. Nils Nilsson). I will give some motivating examples of types of
non-monotonic probabilistic reasoning that seem to be found in
practice. There seems to be a relationship to default inheritance,
i.e., prioritized defaults of the type used in the classic example of
whether birds and ostriches fly. Next, I introduce the idea of
maximizing conditional independence, which can be thought of as
maximizing irrelevance. This can be described more simply in terms of
non-monotonic reasoning on Graphoids (cf. Judea Pearl).
I conjecture that an important type of non-monotonicity in probabilistic
reasoning may be concisely expressed in terms of conditional
independence and Graphoids. Finally, I pose as an open question how
to formulate in the above terms the non-monotonic behavior of
maximizing entropy, a widely-used technique in probabilistic
reasoning.
------------------------------
Date: 19 May 86 1322 PDT
From: Vladimir Lifschitz <VAL@SU-AI.ARPA>
Subject: Seminar - Autoepistemic Logic, Stratified Programs,
Circumscription (SU)
AUTOEPISTEMIC LOGIC, STRATIFIED PROGRAMS AND CIRCUMSCRIPTION
Michael Gelfond and Halina Przymusinska
University of Texas at El Paso
Thursday, May 22, 4pm
MJH 252
In Moore's autoepistemic logic, a set of beliefs of a rational agent
is described by a "stable expansion" of his set of premises T. If this
expansion is unique then it can be viewed as the set of theorems which
follow from T in autoepistemic logic. Marek gave a simple syntactic
condition on T which guarantees the existence of a unique stable
expansion. We will propose another sufficient condition, which is
suggested by the definition of "stratified" programs in logic
programming. The declarative semantics of such programs can be
defined using fixed points of non-monotonic operators (Apt, Blair and
Walker; Van Gelder) or by means of circumscription (Lifschitz). We
show how this semantics can be interpreted in terms of autoepistemic
logic.
------------------------------
Date: Fri 23 May 86 11:33:27-PDT
From: Richard Treitel <TREITEL@SU-SUSHI.ARPA>
Subject: Seminar - Sequentialising Logic Programs (SU)
PhD oral examination
Tuesday June 3rd 1986 at 3 p.m.
Building 200, Room 34
"Sequentialising Logic Programs"
Richard Treitel
In "expert systems" and other applications of logic programming, the issue
arises of whether to use rules for forward or backward inference, i.e. whether
deduction should be driven by the facts available to the rule or the goals that
are put to it. Often some mixture of the two is cheaper than using either
exclusively. I show that, under two restrictive assumptions, optimal choices
of directions for the rules can be made in time polynomial in the number of
rules in a recursion-free logic program. If either of these restrictions is
abandoned, finding the optimal choice becomes NP-complete. I present a search algorithm for
the easiest of the cases so obtained.
A related issue is the ordering of the terms in a rule, which can have a strong
effect on the computational cost of using the rule. Algorithms for ordering
terms optimally are known, but all of them rely on the direction of inference
being fixed in advance, and they apply only to a single rule considered in
isolation. A more general algorithm is developed, and a way is shown to
incorporate it into the choice of rule directions. This also leads to an
NP-complete problem.
Some attention is paid to the model of execution cost for logic programs on
which these results are based. Logic programs involving recursion are not
covered by this work because no satisfactory cost model exists for them.
------------------------------
End of AIList Digest
********************
∂27-May-86 0439 LAWS@SRI-AI.ARPA AIList Digest V4 #129
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 May 86 04:39:08 PDT
Date: Mon 26 May 1986 23:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #129
To: AIList@SRI-AI
AIList Digest Tuesday, 27 May 1986 Volume 4 : Issue 129
Today's Topics:
Conferences - Workshop in High Level Tools &
Society for Philosophy and Psychology Annual Meeting &
Principles of Database Systems &
AAAI Automatic Programming Workshop
----------------------------------------------------------------------
Date: 19 May 86 16:20:51 GMT
From: cbosgd!apr!osu-eddie!welch@ucbvax.berkeley.edu (Arun Welch)
Subject: Conference - Workshop in High Level Tools
Call for Participation
WORKSHOP ON
HIGH LEVEL TOOLS FOR KNOWLEDGE BASED SYSTEMS
Sponsored by
The American Association for Artificial Intelligence (AAAI)
Laboratory for Artificial Intelligence Research
The Ohio State University (OSU-LAIR)
Defense Advanced Research Projects Agency (DARPA)
Columbus, Ohio
October 7-8, 1986
It has become increasingly clear to builders of knowledge based systems that no
single representational formalism or control construct is optimal for encoding
the wide variety of types of problem solving that commonly arise and are of
practical significance. The structures specific to diagnosis appear ill
adapted for use in design and planning tasks, and those for prediction seem
unsuitable for intelligent data retrieval. Thus there appears to be a need for
task-specific constructs at levels of organization above those of rules,
frames, and predicate calculus, and their associated control structures. In
addition to problem solving, there is a similar move toward higher-level tools
for knowledge acquisition and explanation.
The objective of this workshop is to bring together theoreticians and builders
of knowledge based systems in order to explore the prospects for tools for
specifying structures at these higher levels. Presentations are invited on all
aspects of high level tools for knowledge-based systems, including (but not
restricted to) these topics:
- The powers and limitations of existing knowledge engineering tools
and techniques.
- Delineating the "natural kinds" of knowledge based problem solving
that can provide the basis for task specific tools.
- Matching AI techniques to tasks.
- Design proposals for high level knowledge engineering tools.
- Integrating task-specific tools into "toolboxes" for building systems
that perform complex problem solving tasks.
Four copies of an extended ABSTRACT (up to 8 pages, double-spaced) should be
sent to the workshop chairman before July 1, 1986. Acceptance notices will be
mailed by August 1. Revised abstracts should be returned to the chairman by
September 1, 1986, so that they may be bound together for distribution at the
workshop.
Workshop Chairman:
B. Chandrasekaran, OSU-LAIR
Organizing Committee:
William J. Clancey, Stanford University
Lee Erman, Teknowledge Inc.
Richard Fikes, Intellicorp
John Josephson, OSU-LAIR
Allen Sears, DARPA
For information and local arrangements, contact:
Charlie Huff: (614) 422-0054
EMail: Huff@Ohio-State.ARPA, Huff-C@Ohio-State.CSNET,
{ihnp4,cbosgd}!osu-eddie!huff
Bev Mullet: (614) 422-0248
EMail: Mullet-B@Ohio-State.ARPA
Department of Computer and Information Science
The Ohio State University
2036 Neil Avenue
Columbus, OH 43210
------------------------------
Date: Thu, 22 May 86 00:15 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Conference - Society for Philosophy and Psychology Annual Meeting
Forwarded From: Scott Weinstein <Weinstein@UPenn> on Wed 21 May 1986 at 6:45
Program of the SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY Annual Meeting
June 5 - 8, Johns Hopkins University, Baltimore, Md.
THURSDAY 1-3:30 pm.
TUTORIAL on Recent Work in Linguistics, Part I
Syntax: D. Lightfoot; Semantics: R. Jackendoff; Phonology: A. Prince
3:40-6 pm. (Concurrent Paper Sessions)
Empirical Investigations of Realism:
R. McCauley, T. McKay; A. Gopnik/ L. Forguson, L. McCune
Computation and Inference: C. Peacock, F. Eagen; M. Winston, L. Shelton
8-10:30 pm.
SYMPOSIUM on Machine Learning: C. Glymour, S. Weinstein, T. Mitchell; S. Harnad
FRIDAY 9-11:30 am.
SYMPOSIUM on Inferring Normal from Pathological Function
A. Caramazza, D. Caplan, T. Bever; B. von Eckardt
1-3:30 pm.
TUTORIAL on Recent Work in Linguistics II:
Language Acquisition: B. Landau; Neurolinguistics: E. Zuriff; Computer
Processing of Natural Language: M. Liberman
3:40-6 pm. (Concurrent Paper Sessions)
Perception: C. Hardin, G. Graham; P. Manfredi, J. Poland
Cognitive Ethology: C. Ristau, W. Bechtel; C. Hayes; R. Millikan
8-10:30 pm. SPECIAL INVITED SESSION on Consciousness and the Bicameral Mind:
Resolving the Problem of Dualism: J. Jaynes; D. Dennett, A. C. Catania
SATURDAY 9-11:30 am.
SYMPOSIUM on Connectionist Models and Neural Networks
T. Sejnowsky, P. Smolensky, D. Lloyd; P. Churchland
1-3:15 pm. (Concurrent Paper Sessions)
Emotions: R. Kraut, E. Lepore; L. Kopelman, K. Emmett
Induction, Formality and the Chinese Room:
P. Thagard, J. Bender; R. Double, R. Elugardo
3:25-5:45 pm.
SYMPOSIUM on Self Deception: R. Audi, K. Gergen, G. Rey; P. McLaughlin
8:30 pm. Presidential Address: F. Dretske
SUNDAY 9-11:30 am.
SYMPOSIUM on Intentionality and Information Theory
K. Sayre, D. Perlis, B. Loewer; R. van Gulick
**********************************************************************
REGISTRATION: G. Hatfield, Philosophy, Johns Hopkins U., Baltimore MD 21218
MEMBERSHIP: P. Kitcher, Philosophy, U. Minnesota, Minneapolis MN 55455
UUCP: princeton!mind!harnad
------------------------------
Date: Fri, 23 May 86 11:06:22 pdt
From: Moshe Vardi <vardi@diablo>
Subject: Conference - Principles of Database Systems
CALL FOR PAPERS
Sixth ACM SIGACT-SIGMOD-SIGART Symposium on
PRINCIPLES OF DATABASE SYSTEMS
San Diego, California, March 23-25, 1987
The conference will cover new developments in both the
theoretical and practical aspects of database and
knowledge-base systems. Papers are solicited which describe
original and novel research about the theory, design,
specification, or implementation of database and knowledge-
base systems.
Some suggested, although not exclusive, topics of interest
are: architecture, concurrency control, database and expert
systems, database machines, data models, data structures for
physical implementation, deductive databases, dependency
theory, distributed systems, incomplete information, user
interfaces, knowledge and data management, performance
evaluation, physical and logical design, query languages,
recursive rules, spatial and temporal data, statistical
databases, and transaction management.
You are invited to submit ten copies of a detailed abstract
(not a complete paper) to the program chairman:
Moshe Y. Vardi
IBM Research K55/801
650 Harry Rd.
San Jose, CA 95120-6099, USA
(408) 927-1784
vardi@ibm.com
Submissions will be evaluated on the basis of significance,
originality, and overall quality. Each abstract should 1)
contain enough information to enable the program committee
to identify the main contribution of the work; 2) explain
the importance of the work - its novelty and its practical
or theoretical relevance to database and knowledge-base sys-
tems; and 3) include comparisons with and references to
relevant literature. Abstracts should be no longer than ten
double-spaced pages. Deviations from these guidelines may
affect the program committee's evaluation of the paper.
The program committee consists of Umesh Dayal, Tomasz Imiel-
inski, Paris Kanellakis, Hank Korth, Per-Ake Larson,
Yehoshua Sagiv, Kari-Jouko Raiha, Moshe Vardi, and Mihalis
Yannakakis.
The deadline for submission of abstracts is October 10,
1986. Authors will be notified of acceptance or rejection
by December 8, 1986 (authors who supply an electronic
address might be notified earlier). The accepted papers,
typed on special forms, will be due at the above address by
January 9, 1987. All authors of accepted papers will be
expected to sign copyright release forms. Proceedings will
be distributed at the conference, and will be subsequently
available for purchase through ACM.
General Chairman: Local Arrangements:
Ashok K. Chandra Victor Vianu
IBM Research Center Dept. of Computer Science
P.O.Box 218 Univ. of California
Yorktown Heights, NY 10598 La Jolla, CA 92093
(914) 945-1752 (619) 452-6227
ashok%yktvmx@ibm.com vianu@sdcsvax.ucsd.edu
------------------------------
Date: Fri, 23 May 86 15:04:16 edt
From: als@mitre-bedford.ARPA (Alice L. Schafer)
Subject: Conference - AAAI Automatic Programming Workshop
The Scientific Workshop on Automatic Programming will be held
under the auspices of the AAAI conference in Philadelphia.
The purpose of this workshop is to gather the active researchers
in this field in order to share insights gained through implementation
and experimentation. Issues to be addressed include:
. What are the resistant problems in Automatic Programming?
. Are there metrics for comparing the conventional software
development approach to an APS?
. What should, and should not, be contained in a specification?
. What interaction is desired between the user and an APS?
. Are there basic building blocks that typify an APS?
The workshop will be held on Thursday August 14th, and will last
approximately three hours. The current plan is that one and a half hours will
be occupied by brief (seven minutes) presentations of current work, followed
by a panel discussion with active audience participation, moderated by
Tom Cheatham of Harvard. Due to the size of the available rooms, we
may have to limit the audience to researchers who have experience with
some aspect of the APS problem.
If you wish to present your current work or be on the panel you should
send us a 200-800 word abstract. The decision on who will participate will
be based on these abstracts. If you wish to participate as a member of the
audience instead, send us a short note containing a description of your work
or references to pertinent papers you have written. If we need to limit the
audience we will base our decisions on these responses.
Please post a printed copy of this notice at your workplace.
Organized by:
Alice Schafer Richard Brown Richard Piazza
(617) 271-2363 (617) 271-7559 (617) 271-2363
als@mitre-bedford.arpa rhb@mitre-bedford.arpa rlp@mitre-bedford.arpa
of the Knowledge-Based Automatic Programming Project (ISFI)
The MITRE Corporation
Mail Stop A-045
Burlington Road
Bedford, MA 01730
------------------------------
End of AIList Digest
********************
∂27-May-86 1346 LAWS@SRI-AI.ARPA AIList Digest V4 #130
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 May 86 13:46:08 PDT
Date: Tue 27 May 1986 09:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #130
To: AIList@SRI-AI
AIList Digest Tuesday, 27 May 1986 Volume 4 : Issue 130
Today's Topics:
Query - Lenat's AM,
Expert Systems - AM and CYRANO,
AI Tools - VAX/VMS LISP,
Games - Conway's LIFE & Int. Computer Chess Association Journal &
$1,000,000 Go Prize,
Humor - IT*S Grammar & Foo-Bar & Autonomous Systems
----------------------------------------------------------------------
Date: 20 May 86 14:06:59 GMT
From: ihnp4!houxm!whuxl!whuxlm!akgua!gatech!itm!danny@ucbvax.berkeley.edu
Subject: Need Ref for "Automated Mathematician" by Doug Lenat
I read a small article in the "IEEE Expert" magazine about Doug
Lenat's Doctoral dissertation at Stanford. He developed a program
called "AM" (for Automated Mathematician) that produced "interesting"
formulas/relationships about numbers.
I believe that there is a service which will reprint a thesis paper
for a fee, and USnail it. I don't know the name of the service.
In short can anyone provide pointers to the thesis, or possibly,
any books which cover this or similar programs? Specifically, I wish
to learn about programs that deal with meta-rules and meta-meta-rules,
rather than rules.
Was that as clear as MUD?
Danny
--
Daniel S. Cox
({seismo!gatech|ihnp4!akgua}!itm!danny)
------------------------------
Date: 24 May 86 01:43:40 GMT
From: allegra!princeton!caip!seismo!mcvax!ukc!reading!brueer!holte@ucbvax
.berkeley.edu (Robert Holte)
Subject: Re: AI in IAsfm
> In the June 86 issue of IAsfm ,there's a fascinating article on AI and
> common sense. In this article, the author mentions a program called
> Eurisko, ...
> How can I find out more about it?
>
> Steven Grady
Douglas Lenat has written and exercised two heuristic "discovery" programs,
AM and EURISKO.
His initial exploration of the problem of heuristic (mechanical)
discovery constituted his Ph.D. research and culminated in the somewhat
controversial program, AM.
After his thesis, Lenat analyzed the shortcomings and strengths
(or "sources of power" as he came to call them) of AM, and from
this analysis EURISKO was conceived.
Unfortunately, since Lenat's move from Stanford to MCC (Austin) a year
or two ago, he has ceased working with EURISKO.
I am aware of one or two isolated, low-profile projects to build
EURISKO-like systems. Most notable is the work of Ken Haase at MIT
on a system called CYRANO -- but I don't think any of Haase's work
has yet been published.
-- Rob Holte
UUCP: ...!mcvax!ukc!brueer!holte
ARPANET, CSNET, JANET: holte@ee.brunel.ac.uk
Dept. of Electrical Engineering
Brunel University
England UB8 3PH
EURISKO References:
(Lenat is the sole or first author in all cases)
(1) Artificial Intelligence journal, vol.21, nos.1,2, 1983
(two articles: pp.31-59, pp.61-98)
MOST COMPREHENSIVE ACCOUNT AVAILABLE, includes a description
of the program and its applications
(2) Lenat's chapter (pp. 243-306) in the book "Machine Learning"
(volume 1), edited by R.S. Michalski, J.G. Carbonell, and T.M. Mitchell,
Tioga Press, 1983
(3) The AI Magazine, vol.3, no.3, 1982 (summer), pp.17-33
CONCENTRATES ON THE APPLICATION of Eurisko
to the discovery of new VLSI microcircuit structures
(4) SIGART Newsletter (ACM Special Interest Group on AI),
No. 79, 1982 (January), pp.16-17
Anecdotal account of EURISKO's success in designing a
"fleet" which won a national wargame tournament
(5) "Why AM and EURISKO Appear to Work", Lenat and J. Seely Brown,
Artificial Intelligence journal, vol.23, no.3 (1984), pp.269-294
AN INSIGHTFUL ANALYSIS of the success of Lenat's 2 programs
------------------------------
Date: Fri, 23 May 86 17:29:02 edt
From: Derrell Piper <ecsvax!hobbit%mcnc.csnet@CSNET-RELAY.ARPA>
Subject: Re: VAX VMS LisP
> Are there any Common LisPs for the VAX under VMS? (DEC's VAX LisP is an
> Ultrix product only, so far as I know.)
>
> If there's no (decent) Common LisP, what is the best choice?
>
> Larry @ jpl-vlsi.arpa
Digital does market a version of Lisp that runs under VMS. I have version
1.2 on a ninety-day trial license.
Derrell Piper
120 Rosenau Hall (201H)
School of Public Health
University of North Carolina - Chapel Hill
Chapel Hill, NC 27514 (919) 966-5106
Bitnet: derrell@uncsphvx.BITNET
Usenet: ...decvax!mcnc!ecsvax!hobbit
------------------------------
Date: Wed, 21 May 86 08:55 EST
From: RLH <HAAR%RCSMPA%gmr.com@CSNET-RELAY.ARPA>
Subject: RE: VAX VMS LISP
In AILIST 4-124, Larry@JPL-VLSI.ARPA asks about the availability of
Common LISP on VAX/VMS.
I don't know where you got your information, but DEC sells a good
version of Common LISP that runs under VMS or microVMS. As far as
I have seen, it is a complete and faithful implementation with
some additions for accessing system routines and calling code
written in other languages.
There is also a package that DEC calls the AI Workstation that
consists of a VAXstation, Common LISP, and some LISP software
to do window-oriented editing, etc. on the bit-mapped display
of the VAXstation. I haven't used this yet, so I cannot comment.
I have heard that there will be Flavors and Common LOOPS available
as well, but haven't seen any hard evidence of this.
DEC appears to be firmly committed to Common LISP (any DECies
care to comment?). They even use Guy Steele's book "Common LISP"
as part of the documentation.
Bob Haar
G. M. Research Labs
------------------------------
Date: 20 May 86 16:06:37 GMT
From: decwrl!pyramid!pesnta!phri!cmcl2!harvard!knight@ucbvax.berkeley.edu
Subject: Conway's LIFE
(My e-mail didn't work, so I am posting to the net...)
There is a very good, very recent book on LIFE called "The Recursive
Universe" by William Poundstone, c 1985, William Morrow and Company,
publishers. The book doesn't contain any original LIFE discoveries,
but rather presents the great bulk of work on LIFE in the context of
modern physics, computation, and recursion.
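For readers who haven't met LIFE, the rule itself fits in a few lines. The sketch below is my own illustration (not from the book), keeping the live cells as a set of coordinate pairs on an unbounded board:

```python
# One generation of Conway's LIFE: birth on exactly 3 live neighbors,
# survival on 2 or 3. Live cells are a set of (x, y) pairs.
from collections import Counter

def step(live):
    # Count live neighbors of every cell adjacent to at least one live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between horizontal and vertical with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
```

Iterating `step` on the blinker returns to the starting pattern every two generations, which is a handy sanity check for any implementation.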
Kevin Knight
(knight@harvard)
------------------------------
Date: 22 May 86 17:57:28 GMT
From: tektronix!tekgen!stever@ucbvax.berkeley.edu (Steven D. Rogers)
Subject: RE: LIFE references
Another more general book that mentions the game of Life in
the broader context of games and life:
Laws of the Game, How the Principles of Nature Govern Change
by Manfred Eigen and Ruthild Winkler, Harper Colophon Books,
1981
It was sort of advertised as a "Godel, Escher, Bach" of games.
I don't think it quite made that level, but it is an interesting
book.
------------------------------
Date: 13 May 86 23:50:41 GMT
From: ihnp4!alberta!tony@ucbvax.berkeley.edu (Tony Marsland)
Subject: International Computer Chess Association Journal
The current (March 1986) issue of the ICCA Journal has been received.
Aside from the following three technical articles, there are reports
on Ken Thompson's 5-piece endgame studies, showing that several endgames
are won in more than 50 moves, plus the usual reviews and short
articles. There is also an extensive study of most commercially available
chess machines by a Swedish group. This list is the most accurate and
scientific estimate of the relative playing strength of those programs.
The major articles are
"A review of game-tree pruning" by T.A. Marsland
"An overview of machine learning in computer chess" by S.S. Skiena
"A data base on data bases" by H.J. van den Herik and I.S. Herschberg
Information on the availability of this journal has been posted before.
------------------------------
Date: 22 May 86 19:05:00 GMT
From: pur-ee!uiucdcs!kadie@ucbvax.berkeley.edu
Subject: $1,000,000 Prize
This might be of general interest:
/* May 17, 1986 by chen@uiucdcsb.CS.UIUC.EDU in uiucdcs:uiuc.ai */
/* ---------- "$1,000,000 for a program" ---------- */
The following was posted in net.game.go. In case you don't know about Go,
it is an ancient oriental board game played between two players
on a 19 by 19 grid. The best Go program so far is no better than an
intelligent novice who has received only one week of intensive training.
/* May 14, 1986 by alex@sdcrdcf.UUCP in uiucdcsb:net.games.go */
/* ---------- "Million $ prize" ---------- */
I think this is big news for the go community. The Chinese
Wei Chi (go in Chinese) Association (TWCA) in Taipei, Taiwan, in conjunction
with one of Taiwan's largest computer companies, has put 2 million US
dollars in trust as prize money for computer go games. The top standing
prize is 1 million dollars for any computer go program defeating the reigning
junior champion in Taiwan. The prize offer is good for 15 years.
(BTW, if you are wondering how they raised the prize money, take a look
at all the cheap IBM PC clones around.) The prize money is much more
interesting than the Fredkin prize. There are other prizes for the computer
go champion, etc.
The TWCA is the first organization offering prize money for
computer-computer and computer-human competition, as far as I know
and according to the computer go pioneer Bruce, who appeared in the TWCA's
first computer tournament last January. Bruce lost twice and did not place
in the top five. That tournament offered 2 to 3 thousand in prize money to the
winner. His first loss was to a go program written in BASIC running on
an Apple. Bruce was winning convincingly until the Apple program made
a suicide move, which is legal under Chinese rules but not under Japanese
rules. Bruce's program went into a loop. The judge allowed Bruce to fix his
code on the spot as long as he could make the move before his time clock
ran out. (They did not want Bruce to lose because he was the main
attraction, and I believe they paid him an appearance fee.) But
Bruce did not fix it right within the 30 minutes he had. I
did not stick around for his second loss. Bruce's program was running on
an 8MHz PC clone.
If you are interested in entering the next competition, which
is in November, you had better get the rule book on the Chinese rules, which
differ slightly from the Japanese rules in areas like suicide moves and
scoring. The last competition was restricted to personal computers, although
I find a big disparity in computing power between a Macintosh and an Apple
II. However, I don't think computing power is the main bottleneck right
now.
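For the curious, the rule difference that tripped Bruce's program is easy to state in code. The sketch below is a hypothetical helper of my own, not any tournament program's actual rules engine: a move is suicide if, after removing any captured enemy groups, the newly played stone's group still has no liberties. Japanese-style rules reject such a move; the rules the posting describes allowed it.

```python
# Board representation: a dict mapping (x, y) -> 'B' or 'W'; empty points
# are simply absent. Purely illustrative, not an actual go engine.
def group_and_liberties(board, pos, size=19):
    # Flood-fill the same-colored group containing pos; collect its liberties.
    color = board[pos]
    group, libs, stack = {pos}, set(), [pos]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < size and 0 <= ny < size):
                continue
            n = (nx, ny)
            if n not in board:
                libs.add(n)                # empty neighbor = liberty
            elif board[n] == color and n not in group:
                group.add(n)
                stack.append(n)
    return group, libs

def is_suicide(board, pos, color, size=19):
    b = dict(board)
    b[pos] = color
    enemy = 'W' if color == 'B' else 'B'
    # Captures are resolved first: remove enemy groups left without liberties.
    for n in list(b):
        if b.get(n) == enemy:
            g, libs = group_and_liberties(b, n, size)
            if not libs:
                for stone in g:
                    del b[stone]
    # Suicide: our own group still has no liberties after the captures.
    _, libs = group_and_liberties(b, pos, size)
    return not libs
```

A program playing under one rule set but scoring under another (as happened to Bruce) needs this check in both flavors.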
If there are enough people interested, I can get additional
details about the tournament.
Also, a junior champion in Taiwan is about 1 dan in the Chinese
amateur rating, which is about 5-6 dan in the US and Japanese amateur
ratings. Bruce's program was last rated 19Q (19 kyu) in Japanese human
tournament play. He said he may push it to 11-12Q by November. I think Bruce
has good technique, but his potential is limited by his knowledge of
go. At any rate, you have your work cut out for you.
Alex Hwang
/* End of text from uiucdcsb:net.games.go */
/* End of text from uiucdcs:uiuc.ai */
------------------------------
Date: Thu 15 May 86 18:24:51-PDT
From: John Myers <JMYERS@SRI-AI.ARPA>
Subject: IT*S Grammar
Sir:
I am writing to protest the continual misuse of the word "its" for
the third person neuter posessive, when everyone knows its the contraction
for "it is". I's hair stands on end everytime I see someone use it in they's
sentence. Ive even heard one grammarian state that he's book says that
personal pronouns all have a special posessive case form that doesnt use
"apostrophe-S"--hes off he's rocker! Youre well aware whatll happen to
you's reading material if this becomes common. Were going to have to keep we's
guard up, until its clear that peopleve gotten this straight! Its dreadful!!
Not only that, some peoplere even forming they's contractions with
an apostrophe. When they have a word phrase such as "it is", and they want
to write it's contraction, they's spelling is "it's"!! Ill never see where
they couldve gotten such atrocious grammar from, when if theyre unsure of
how to use "its", they only have to look it's meaning up in they's dictionary!!
Instructor: "My word! Where's your grammar, boy?"
Youth: "Watching soap on the TV."
John Myers~~
------------------------------
Date: Thu 15 May 86 18:56:44-PDT
From: John Myers <JMYERS@SRI-AI.ARPA>
Subject: Etymology of Foo-Bar
Item of interest:
FUBAR was originally an acronym for "Fouled" Up Beyond All Recognition,
stemming from the W.W.II era. It is related to SNAFU, and such short-lived
acronyms as FUBIO, FUBISO, GFU, JANFU, MFU, SAMFU, SAPFU, SNEFU, SUSFU,
TARFU, and TUIFU. Source: A Dictionary of Euphemisms & Other Doubletalk, Rawson.
------------------------------
Date: 13 May 86 15:41:39 GMT
From: tektronix!uw-beaver!bullwinkle!rochester!rocksanne!sunybcs!ellie
!colonel@ucbvax.berkeley.edu (Col. G. L. Sicherman)
Subject: Re: Plan 5 for Inner Space
> Answers: about nine months, plus a few years training. And hospitals are
> charging on the order of $1000 now; but the care and feeding of the project
> will cost more. You do get a tax break.
Warning: the U.S. government no longer allows private ownership of these
units. Possession is permitted but subject to a long-term time limitation
which is determined on a case-by-case basis.
"Well, Doctor Eccles, how are the men feeling? Any cases of
frozen feet?"
"Duh, you didn't order any cases of frozen feet."
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: csdsicher@sunyabva
------------------------------
End of AIList Digest
********************
∂27-May-86 1719 LAWS@SRI-AI.ARPA AIList Digest V4 #131
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 27 May 86 17:17:10 PDT
Date: Tue 27 May 1986 09:36-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #131
To: AIList@SRI-AI
AIList Digest Tuesday, 27 May 1986 Volume 4 : Issue 131
Today's Topics:
Queries - Functional Programming and AI & Parallel Logic Programming &
Information Modeling for Real-Time/Asynch Processes
AI Tools - PROLOGs & Common LISPs & Common LISP Style Standards,
Expert Systems - Economics of Development and Deployment
----------------------------------------------------------------------
Date: 21 May 86 13:14:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Functional programming and AI
Here's a (dumb?) question for assorted AI wizards: how (if at all)
does functional programming support AI type applications?
By "functional programming", I mean the ability of a language
to treat functions (or some other embodiment of an algorithm) as
a data object: something that can be passed from one routine to
another, created or modified, and then applied, all at run-time.
Lisp functions are an example, as is C-Prolog's ability to
construct predicates from lists with the =.. operator, and the
OPS5 "build" action.
Do working AI programs really exploit these features a lot?
Eg, do "learning" programs construct unforeseen rules, perhaps
based on generalization from examples, and then use the rules?
Or is functional programming just a trick that happens to be
easy to implement in an interpreted language?
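As a toy illustration of what "constructing an unforeseen rule and then using it" might look like, here is a sketch of my own (a hypothetical example, not drawn from any working AI program): a crude generalizer that builds a predicate, itself a first-class function value, from positive examples:

```python
# Functions as run-time data: a "learned" rule is constructed from
# examples and returned as an ordinary callable. Purely illustrative.
def generalize(examples):
    """Build a predicate from positive examples by keeping only the
    attribute values they all share (a crude version-space idea)."""
    shared = dict(examples[0])
    for ex in examples[1:]:
        shared = {k: v for k, v in shared.items() if ex.get(k) == v}
    # The learned rule is itself a first-class function value.
    return lambda instance: all(instance.get(k) == v
                                for k, v in shared.items())

positives = [{"shape": "round", "color": "red", "size": 3},
             {"shape": "round", "color": "red", "size": 5}]
rule = generalize(positives)   # a brand-new function, built at run time
```

Here `rule` accepts any round red object regardless of size, since size varied across the examples; whether real learning systems lean heavily on this ability, rather than on fixed interpreters over rule data, is exactly the question being asked.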
Thanks for any thoughts on this...
John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
------------------------------
Date: 25 May 86 14:26:49 GMT
From: wisdom.BITNET!jaakov@ucbvax.berkeley.edu (Jacob Levy)
Subject: Parallel Logic Programming
Dear fellow AIListers and PrologListers,
I'm interested in obtaining the latest references you may have to articles
concerned with Parallel Logic Programming languages. If you have recently
written an article concerned with parallel execution of Prolog or about a
committed-choice non-deterministic LP language, I'm interested to read it,
or at least to receive a pointer to the article. By RECENT I mean articles
which have been published in 1985 and 1986 or which are about to appear. I
am interested in any and all sub-topics of the fields listed above.
Thank you very much ahead of time for your response,
Rusty Red (AKA Jacob Levy)
BITNET: jaakov@wisdom
ARPA: jaakov%wisdom.bitnet@wiscvm.ARPA
CSNET: jaakov%wisdom.bitnet@csnet-relay
UUCP: (if all else fails..) ..!ucbvax!jaakov%wisdom.bitnet
------------------------------
Date: 24 May 86 00:10:04 GMT
From: amdcad!cae780!leadsv!rtgvax!ramin@ucbvax.berkeley.edu
Subject: Information Modeling for Real-Time/Asynch processes
Sorry about all the cross-postings but I'm trying for the widest
circulation short of net.general (:-)
I am looking for any pointers to literature/specifications/ideas for
Modeling of asynchronous and/or real-time systems. These would be
very high-level design specification tools to help model parallel
real-time events and systems.
Intuitively, at least I think the way to go is Temporal Logics (hence
the net.philosophy posting...) however, that seems to be currently applied
only to hardware design (CIRCAL et al).
The problem with the standard dataflow diagram and associated descriptive
systems is their failure to capture at least simultaneous (ideally, parallel)
events.
On the other hand, the rigor with which one would want to model such an event
lends itself to creative Knowledge Representation techniques (hence
net.ai and net.cog-eng...) and even possibly many-valued logics...?
To put it in some more perspective, the model would be of some complicated
industrial processes that up to now have been modeled in a synchronous
i.e. serialized fashion. I would like to see if there are any references
out there to attempts at asynchronous modeling. Would definitely repost
(to where? (:-) if there are enough responses...
Thanks much...
ramin
: alias: ramin firoozye' : USps: Systems Control Inc. :
: uucp: ...!shasta \ : 1801 Page Mill Road :
: ...!lll-lcc \ : Palo Alto, CA 94303 :
: ...!ihnp4 \...!ramin@rtgvax : ↑G: (415) 494-1165 x-1777 :
------------------------------
Date: 16 May 86 10:53:22 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!seismo!mcvax!ukc!kcl-cs
!glasgow!dunbar@ucbvax.berkeley.edu (Neil Dunbar)
Subject: Re: looking for Prolog
> I'm looking for a version of Prolog. The machines available to me
> include an AT&T 7300 (Unix PC), AT&T 3B5, AT&T 3B2, Plexus P/60, Plexus
> P/35, IBMPC, and AT&T 6300PC (IBMPC compatible). I've spoken with
> someone from AT&T who suggests that Quintus may be porting to the 7300.
> I've spoken with someone from Quintus who says there is no port and no
> contract at this time. I've heard of something called C-Prolog, but
> don't know for sure what it is. ...
Don't Borland make a version of Prolog to run on the PC, Turbo Prolog?
If you want a compiler there is the Arity compiler, again for MS-DOS systems,
but it costs a few thousand (dollars or pounds, depending on which side of
the Atlantic you're on).
C-Prolog V1.2 is the current Prolog interpreter system from the University of
Edinburgh, running on our 11/780 under Unix. I don't know if it can be ported
onto the machines you describe, but you never know, anything's possible. If
you want to learn Prolog, try Clocksin & Mellish "Programming in Prolog",
which is an excellent tutorial guide.
Hope this helps,
Neil Dunbar.
------------------------------
Date: Sat, 24 May 86 02:35:00 +0200
From: enea!zyx!jeg@seismo.CSS.GOV
Subject: Re: Logic/Functional Languages?
In article <8605200626.AA27699@ucbvax.Berkeley.EDU> you write:
>Does anyone on the list know of available languages incorporating both
>logic and functional programming (preferably in a Unix 4.2 environment
>or possibly an IBM/PC)? ...
Answer to the questions:
1.) Does anyone on the list know of available languages incorporating both
logic and functional programming...?
2.) Some version of Prolog embedded within Common Lisp...?
3.) Has anyone produced any large applications with these hybrid systems?
Are the benefits derived from the systems *significant* (over using,
say, vanilla lisp or prolog)?
Hewlett-Packard have informally introduced HP Prolog to some customers and the
official introduction is scheduled to be sometime in August.
HP Prolog resides on top of HP Common Lisp, so this development environment
incorporates both Common Lisp and Prolog. Since I am affiliated
with HP, the following information is biased and might sound like an
advertisement, but I'll try to answer the third question without breaking
too many ethical rules of the net.
HP Development Environment is based on HP-UX (Unix V.2) and HP 9000 series
300, a 68020-based machine, with HP's window system.
Top level for the Development Environment:
- A complete EMACS editor with some enhancements.
- A general browser.
Main features with the Development Environment are:
- The high level of integration
- The ability to use both Common Lisp and Prolog in the same process
and on the same objects and to mix Common Lisp and Prolog code.
HP Common Lisp has:
- Interpreter and compiler
- Objects package
- Ability to call C/Pascal/Fortran
- Debugger
- Interrupt handler
HP Prolog consists of two different environments:
- A "Common Lisp compatible" S-expression syntax
- Edinburgh C-Prolog syntax
HP Prolog has:
- Interpreter
- Incremental compiler
- Block optimizing compiler
- Debugger
Main features of HP Prolog are:
- A much extended Prolog
- Ability to mix Prolog and Common Lisp
- Macros
- Packages
- Mode declarations
- Declarative determinism
- Integration in the environment
- A well-designed and complete I/O system
- Other minor features like strings, graphics etc.
- An extended Definite Clause Grammar (DCG)
- Respectable performance
The Prolog system will soon be available with or without the Common Lisp
system on other vendors' machines.
Quite large applications on this system are currently under development.
There is definitely a significant advantage in being able to mix Common
Lisp and Prolog. The two languages have different strengths and complement
rather than exclude each other.
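The appeal of mixing the two paradigms can be illustrated outside HP's product (which the sketch below does not represent): a toy Prolog-style fact base queried from ordinary procedural code. The facts and helper names here are invented for illustration:

```python
# Toy logic-programming layer embedded in a procedural host language.
# Variables are strings starting with '?'. Purely illustrative.
facts = [("parent", "tom", "bob"),
         ("parent", "bob", "ann"),
         ("parent", "bob", "pat")]

def match(pattern, fact, bindings):
    # Return bindings extended to make pattern equal fact, or None.
    if len(pattern) != len(fact):
        return None
    env = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in env and env[p] != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def query(pattern):
    # All variable bindings under which some fact matches the pattern.
    return [env for fact in facts
            for env in [match(pattern, fact, {})] if env is not None]

# Ordinary host-language code consumes the logic-style query results:
children_of_bob = [env["?x"] for env in query(("parent", "bob", "?x"))]
```

The point is the seam: the declarative query and the procedural list comprehension operate on the same objects in the same process, which is the integration such hybrid environments advertise.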
Jan-Erik Gustavsson, ZYX AB, Styrmansgatan 6, 114 54 Stockholm, Sweden
Phone: + 46 - 8 - 65 32 05
...mcvax!enea!zyx!jeg
------------------------------
Date: 18 May 86 00:52:32 GMT
From: allegra!mit-eddie!genrad!panda!husc6!harvard!caip!lll-crg!seismo
!mcvax!enea!kuling!martin@ucbvax.berkeley.edu (Erik Martin)
Subject: Re: Common LISP style standards.
In article <2784@jhunix.UUCP> ins←amrh@jhunix.UUCP writes:
>
> - How do you keep track of the side effects of destructive functions
> such as sort, nconc, rplaca, mapcan, delete-if, etc?
Don't use them. I use destruction only when I need circular objects or
when I need to speed up a program. In the latter case I write it strictly
functionally first and then substitute 'delete' for 'remove' and so on. This
should not affect the semantics of the program if it is 'correctly' written
from the beginning. But it's really a task for the compiler, so you shouldn't
need to think about it.
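The remove/delete discipline generalizes beyond Lisp; as a rough analogy (in Python here, not Common Lisp semantics):

```python
# Write the non-destructive version first; switch to in-place mutation
# only where profiling justifies it. Illustrative analogy only.
xs = [3, 1, 2, 1]
alias = xs                          # another reference to the same list

# 'remove'-style: fresh list, no side effects, alias is unaffected.
cleaned = [x for x in xs if x != 1]

# 'delete'-style: in-place mutation; every alias sees the change --
# exactly the side effect the question asks how to keep track of.
xs[:] = [x for x in xs if x != 1]
```

After the mutation, `alias` silently reflects the new contents, which is why the functional-first discipline makes programs easier to reason about.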
> - When should you use macros vs. functions?
I only use macros when I need a new syntax or an 'unusual' evaluation
of the arguments (like FEXPR in Franz and MacLisp).
> - How do you reference global variables? Usually you enclose it
> in "*"s, but how do you differentiate between your own vars and
> Common LISP vars such as *standard-input*, *print-level*, etc?
Always "*"s. No differentiation.
> - Documentation ideas?
An 'overview' description in the file header, more detailed on top of each
function. Very few comments inline, use long function and variable names
instead. Documentation strings in global variables and top level (user)
functions.
> - When to use DOLIST vs MAPCAR?
Quite obvious. Use DOLIST when you want to scan through a list, i.e. just
look at it. At the end of the list it returns NIL or the optional return form.
You can also return something with an explicit RETURN. Use MAPCAR when you
want to build a *new* list with a function applied to each element.
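The same distinction shows up in most languages; a rough analogy of the advice (Python here, purely for illustration):

```python
# DOLIST-like vs. MAPCAR-like iteration, as an illustrative analogy.
items = [1, 2, 3]

# DOLIST-like: iterate for effect; no list is built by the loop itself.
total = 0
for x in items:
    total += x

# MAPCAR-like: build a *new* list by applying a function to each element.
doubled = [2 * x for x in items]
```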
> - DO vs LOOP?
Write what you mean. If you mean 'repeat until doomsday' (without any
variables bound) then use LOOP.
> - Indentation/format ideas? Or do you always write it like the
> pretty-printer would print it?
A lot of white space in the code. The rest is very personal and hard to set
up rules for. Nice editors usually have good ideas about how it should look.
> - NULL vs ENDP, FIRST vs CAR, etc. Some would say "FIRST" is
> more mnemonic, but does that mean you need to use
> (first (rest (first X))) instead of (cadar X) ??
Again, write what you mean. If you mean 'is this the end of the list
we are just working with?' then use ENDP, if you mean 'is this NIL (an empty
list)?', use NULL, and if you mean 'is this false?' use NOT.
Write FIRST if you mean the first element of a list, SECOND for the second,
THIRD for the third...and combinations of these when appropriate. At some
limit this gets very messy though, and C*R is better. But in that case you
perhaps should write your own accessor functions. When working with cons'es
I always use CAR and CDR.
My general rule is: write what you mean and leave the task of efficiency
to the implementation and compiler.
Per-Erik Martin
--
Per-Erik Martin, Uppsala University, Sweden
UUCP: martin@kuling.UUCP (...!{seismo,mcvax}!enea!kuling!martin)
------------------------------
Date: Sat, 24 May 86 08:23:25 est
From: munnari!psych.uq.oz!ross@seismo.CSS.GOV (Ross Gayler)
Subject: economics of expert systems - summary of replies
A while back I put out a request for information on the economics of the
development and deployment of expert systems. This is a summary of the replies
I have received.
I received around ten replies, most of which were of the 'please let me know'
variety. Some of these went to some length to indicate that they felt this
was an important area. It does seem that there is a need for this information
and it either doesn't exist or somebody is not sharing it.
There were three substantive replies which told of:
1 A company which attempted to develop three expert systems.
One took twice as long to develop as the FORTRAN program it replaced,
the second was too slow to be usable, and the other was abandoned for
lack of an expert.
2 A successful family of expert systems that are widely used in-house.
The point made here was that the development cost was an insignificant
fraction of the cost of packaging the product for deployment and the
continuing cost of training the users.
3 A pointer to the November 1985 IEEE Transactions on Software
Engineering which was a special issue on "Artificial intelligence and
software engineering".
I found the articles by Doyle, Bobrow, Balzer, and Neches et al to be
the most relevant to my needs. Doyle argues that the productivity
advantage of the artificial intelligence approach comes from the tools
and techniques used to construct the product, not from the ultimate
form of the product itself. The other papers do not explicitly address
the modelling of costs. However, an implicit model is discernible from
the areas they choose to emphasize.
I will send a request to the software engineering list and see if I can get any
joy there. If not it looks like I might be forced to do some work for myself.
What I would like is a predictive model which will give me the costs to
implement and deploy an expert system or conventional system as functions of
various features of the problem, the tools available, and the development and
deployment environments. As I do not have any empirical data the best I can
aim for is a set of statements on the qualitative shapes of the cost curves for
various factors. Using these curves backwards would allow me to say what
problem characteristics make an expert system solution likely to be cheaper
than a conventional one. I will probably start with the cost
models in Tom de Marco's book, "Controlling software projects" and try to
identify expert systems analogues of the cost factors he identifies for
conventional systems.
If I manage to get anywhere with this I will let you know.
Ross Gayler | ACSnet: ross@psych.uq.oz
Division of Research & Planning | ARPA: ross%psych.uq.oz@seismo.css.gov
Queensland Department of Health | CSNET: ross@psych.uq.oz
GPO Box 48 | JANET: psych.uq.oz!ross@ukc
Brisbane 4001 | UUCP: ..!seismo!munnari!psych.uq.oz!ross
AUSTRALIA | Phone: +61 7 227 7060
------------------------------
End of AIList Digest
********************
∂28-May-86 1319 LAWS@SRI-AI.ARPA AIList Digest V4 #132
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 May 86 13:19:04 PDT
Date: Wed 28 May 1986 09:59-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #132
To: AIList@SRI-AI
AIList Digest Wednesday, 28 May 1986 Volume 4 : Issue 132
Today's Topics:
Queries - AI Survey & AI Applications in Simulation & Neural Networks,
Brain Theory - Chaotic Neural Networks,
Logic Programming - Functional Programming & Prolog Variables,
AI Tools - VAX LISP on VMS and ULTRIX,
Binding - Sussex Cognitive Studies,
Literature - Object-Oriented Programming Book,
Psychology - Doing AI Backwards
----------------------------------------------------------------------
Date: 23 May 86 15:32:39 GMT
From: mcvax!ukc!reading!onion.cs.reading.AC.UK!scm@SEISMO (Stephen Marsh)
Subject: A survey on AI
I am currently doing a survey on the attitudes and
beliefs of people working in the field of AI. It would be
very much appreciated if you could take the time to save this
notice, edit in your answers and post me back your reply.
If there are any interesting results, I'll send them
to the net sometime in the future.
-Thanks
1. Do you, or have you, undertaken any research in the field
of Artificial Intelligence?.....
2. In which country was the research undertaken?.....
3. For how long did your research continue?.....
4. If you are not currently working in the field of AI, when
was the period of your research?.....
5. What area of research did your work cover? (eg IKBS).....
6. Were you satisfied with the results of your research?....
7. Did your research make you feel that in the long term AI was
not going to succeed in creating an intelligent machine?..
8. Do you find the progress of research in AI in the last
5 years?......
10 years?.....
25 years?.....
acceptable?
9. What do you consider the main objectives of AI?.....
10. Excluding financial pressures, do you consider that AI
researchers should reconsider the direction of their
work?.....
11. Do you consider that the current areas of research will
eventually result in an 'intelligent' machine?.....
12. Do you consider that the current paradigm of humans producing
cleverly-written computer programs can ever fulfil the
initial aim of AI of producing an intelligent machine in the
accepted sense of the word 'intelligent'?.....
13. Should a totally new approach to producing an intelligent
machine be found, not based simply on sets of sophisticated
programming techniques?.....
scm@onion.cs.reading.ac.uk
Steve Marsh
Dept of Computer Science,
PO Box 220,
University of Reading,
Whiteknights,
READING ,UK.
------------------------------
Date: 23 May 86 05:12:27 GMT
From: shadow.Berkeley.EDU!omid@ucbvax.berkeley.edu (Omid Razavi)
Subject: AI applications in simulation
I am interested in the applications of AI in simulation.
Specifically, I'd like to know if there are expert system environments
today that would support simulation modeling and provide features
similar to those of standard simulation languages such as GASP IV
and SIMSCRIPT.
Also, references to technical articles related to this subject would be
greatly appreciated.
Omid Razavi
omid@shadow.berkeley.edu
------------------------------
Date: 17 May 86 14:39:34 GMT
From: hplabs!qantel!lll-lcc!lll-crg!seismo!mcvax!ukc!warwick!gordon@ucbvax
.berkeley.edu
Subject: Re: neural networks
This may be a bit of a tangent, but I feel it might have some impact on
the current discussion.
The mathematical theory of chaotic systems is currently an active area of
research. The main observation is that models of even very simple systems
become chaotic in a very short space of time.
The human brain is far from being a simple system, yet the transition to
chaos rarely occurs. There must be a self-correcting element within the
system itself, as it is often perturbed by myriad external stimuli.
Is the positive feedback mentioned in article <837@mhuxt.UUCP> thought to
be similar to the self-correcting mechanisms in the brain?
Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon
------------------------------
Date: 23 May 86 14:51:53 GMT
From: hplabs!hplabsc!kempf@ucbvax.berkeley.edu (Jim Kempf)
Subject: Re: neural networks
> The mathematical theory of chaotic systems ...
> Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon
Not having seen <837@mhuxt.UUCP>, I can't comment on the question.
However, I do have some thoughts on the relation between chaos
in dynamical systems and the brain. The "chaotic" dynamical behavior
seen in many simple dynamical systems models is often restricted
to a small region of the state space. By a kind of renormalization
procedure, this small region might be topologically shrunk, so that,
from a more macroscopic view, the chaotic region actually looks
more like a point attractor. Another possibility is that complex
systems like the brain are able to perform a kind of ensemble
averaging to filter out chaos. Sorry if this sounds like speculation.
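[The ensemble-averaging idea can be seen in a toy numerical sketch -- my own
illustration in Python, using the logistic map as a stand-in for "a simple
dynamical system", not anything from the brain literature. Individual chaotic
trajectories diverge, yet an average over a large ensemble of trajectories is
comparatively stable. -- Ed.]

```python
def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, returning all visited points."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Sensitive dependence: two starts a 1e-6 apart soon disagree badly.
a = trajectory(0.400000, 50)
b = trajectory(0.400001, 50)
print(max(abs(x - y) for x, y in zip(a, b)))  # large compared to 1e-6

# Yet an ensemble average over many starting points is well behaved:
# for r = 4 the invariant density has mean 0.5, so averages hover near it.
n = 10000
ensemble = [trajectory(i / n * 0.98 + 0.01, 50)[50] for i in range(n)]
print(sum(ensemble) / n)  # typically close to 0.5
```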
Jim Kempf kempf@hplabs
------------------------------
Date: Tue, 27 May 86 18:10:25 PDT
From: narain@rand-unix.ARPA
Subject: Functional and Logic Programming
Reply to Paul Fishwick regarding a language which incorporates both
functional and logic programming, (AIList digest v.4 #124.):
In a recent paper "A technique for doing lazy evaluation in logic" I describe
a method of defining functions in a logic-based language such as Prolog.
It is shown how we can keep Prolog fixed, but define functions in such
a way that their interpretation by Prolog directly yields lazy evaluation.
This contrasts with conventional approaches for doing lazy evaluation
which keep the programming style fixed but modify the underlying
interpreter.
More generally the technique can be viewed as a natural and efficient
method of combining functional and logic programming. The paper appeared
in 1985 IEEE Symposium on Logic Programming, and a substantially expanded
version of it is to appear in the Journal of Logic Programming.
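[The flavor of the technique, as I understand it -- this is an illustrative
reconstruction, not code from the paper -- is to represent a lazy stream as an
unevaluated term, with a reduce/2 relation that plain Prolog applies only on
demand. -- Ed.]

```prolog
% ints(N) is an unevaluated term standing for the stream N, N+1, ...
% reduce/2 performs one evaluation step, exposing the head and a lazy tail.
reduce(ints(N), [N | ints(N1)]) :- N1 is N + 1.

% take/3 forces only as many elements as are actually requested.
take(0, _, []).
take(K, E, [H | T]) :-
    K > 0,
    reduce(E, [H | E1]),
    K1 is K - 1,
    take(K1, E1, T).

% ?- take(5, ints(0), L).
% L = [0, 1, 2, 3, 4]
```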
Sanjai Narain
Rand Corp.
------------------------------
Date: 22 May 86 07:46:51 GMT
From: amdcad!lll-crg!booter@ucbvax.berkeley.edu
Subject: Prolog and Thank you
WOW! I didn't realize so many folks out there have played with prolog.
I received all sorts of replies, most very useful in explaining the
instantiation of variables to values (I hope I worded it properly). PASCAL
doesn't prepare you for it and I write LISP code by the grace of God (it just
works, I dunno why!).
A major problem I had was in the idea of reconsulting a file. I just kept
loading copies of files in there and of course would get the same error
message as it seemed to be reading the first one over and over.
I have passed that phase now and am endeavoring to master the idea of using
the "cut". You'd all be proud of me, I wrote a very simple version of the
computer that talks back (called "doctor" or "eliza").
I still like LISP better, but at least I am no longer swearing at the terminal.
Thank you all very much
E
*****
------------------------------
Date: 27 May 86 15:48:00 EST
From: "LOGIC::ROBBINS" <robbins%logic.decnet@hudson.dec.com>
Reply-to: "LOGIC::ROBBINS" <robbins%logic.decnet@hudson.dec.com>
Subject: VAX LISP is supported on both VMS and ULTRIX
VAX LISP V2.0 (DEC's current release of Common Lisp) is supported on
VMS and ULTRIX. I hope that this clears up any confusion resulting from
two incorrect messages that appeared in this list recently concerning
VAX LISP.
Rich Robbins
Digital Equipment Corporation
77 Reed Rd. HL02-3/E09
Hudson, MA 01749
Arpanet: Robbins@Hudson.Dec.Com
------------------------------
Date: Thu, 22 May 86 08:39:30 gmt
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@Cs.Ucl.AC.UK>
Subject: Sussex Cognitive Studies mail address
This is to confirm that the Sussex Cognitive Studies Netmail address has
finally(?) settled down to UK.AC.SUSSEX.CVAXA.
Arpanet users can try:
aarons@cvaxa.sussex.ac.uk (UK uses the reverse of ARPA order)
or, if that doesn't work:
aarons%uk.ac.sussex.cvaxa@ucl-cs
or
aarons%uk.ac.sussex.cvaxa@cs.ucl.uk.ac
or via UUCP: ...mcvax!ukc!cvaxa!aarons
Other users at this address include Chris Mellish (chrism),
Margaret Boden(maggieb), Ben du Boulay (bend), Jim Hunter (jimh),
Gerald Gazdar(geraldg), John Gibson (johng), David Hogg (daveh),
and the new POPLOG Project manager Alan Johnson (alanj).
Aaron Sloman
------------------------------
Date: Tue 13 May 86 18:37:50-PDT
From: Doug Bryan <Bryan@SU-SIERRA.ARPA>
Subject: object-oriented programming books
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
Brad Cox's book "Object-Oriented Programming: An Evolutionary Approach"
is now out. The book is published by Addison Wesley.
doug
------------------------------
Date: 18 May 86 05:39:39 GMT
From: ernie.Berkeley.EDU!tedrick@ucbvax.berkeley.edu (Tom Tedrick)
Subject: Doing AI backwards (from machine to man)
More on Barry Kort's "Problem of the right-hand tail"
(ie social persecution of those with high intelligence).
Here is the way I look at the problem.
In order to function in society, it is necessary for most individuals
to operate in a more or less routine manner, performing certain acts
in a repetitive manner.
I have been trying to work backwards from models of computation,
abstracting certain principles and results in order to obtain
models with a wider application, including social behavior.
This is somewhat the reverse direction from that taken by
those working in Artificial Intelligence, who study intelligent
behavior in order to find better ways for machines to function.
I am studying how machines function in order to find better
ways for humans to function.
Anyway, with most people in society functioning more or less automatically,
they handle input in such a way that only information relevant to
their particular problems is assimilated. Input is interpreted
according to the pre-existing patterns in their minds. It is as
if it were formatted input in FORTRAN: anything that doesn't
conform to certain patterns is interpreted nonsensically.
The people in the "right-hand tail", IQ distribution-wise,
are there primarily due to greater capacity for independent
thought, abstract thought, capacity to reason for themselves
(or so I claim).
Thus these individuals are more likely to have original ideas
which don't conform to the pre-existing patterns in the minds
of the more average individuals. The average individual will
become disturbed when presented with information which he
cannot fit into his particular format. And with good reason,
since his role is to function as an automaton, more or less,
he would be less efficient if he spent time processing information
unrelated to his tasks.
So by presenting original information to the average individuals
in society, the "rightie" is likely to be attacked for disturbing
the status quo.
To use the machine analogy, the "righties" are more like programmers,
who alter the existing software, where the "non-righties" are like
machines which execute the instructions they already have in storage.
The analogy can be pushed in various ways. We can think of each
individual as being both programmer and machine, the faculty of
independent judgement and the self being the programmer or system
analyst, while the brain is the computing agent to be programmed.
The individual is constantly debugging and rewriting the code for
his brain, by the choices he makes which become habits, and so on.
Also, in interactive protocols where various individuals exchange
information, each is tampering with the software of the other.
I currently have been working out a strategy for dealing with
those I live with who talk too much. It is like having a machine
which keeps spewing out garbage every time you give it some input.
My current strategy is to carry a little card saying "I am observing
silence. I will answer questions in writing." This seems to work
very well, it is as if this form of input goes through another
channel which does not stimulate so much garbage in response.
Or it's like saying "the network is down today, so sorry."
One last tangent. Note that in studying models of computation
one of the primary costs is the cost of memory. We can turn
this observation to good use in studying human behavior. For
example, suppose your wife asks you to pick up some milk at
the store after work. This seems a reasonable enough request,
on the surface. But if you think of the cost in terms of memory,
suppose short term memory is extremely limited and you have to
keep the above request stored in short term memory all day.
In effect you are reducing your efficiency in all the tasks
you perform all day long, since you have less free space in
your short term memory. Thus we see again how women have a
brilliant gift for asking seemingly innocent favors which
are really enormously costly. The subtle nature of the problem
makes it difficult to pin down the real poison in their approach.
[Anything held in short-term memory for five seconds automatically
enters long-term memory as well. If the man chooses to keep
refreshing it in STM, perhaps due to poor LTM retrieval strategies,
he needs to take a course in memory techniques -- it's hardly
the woman's fault. -- KIL]
You can use various strategies in order to deal with this problem.
One is to use some external form of storage (like writing it down
in a datebook), and having a daemon which periodically wakes up
and tells you to look in your external storage to see if anything
important is there. Of course this also has its costs.
By virtue of the relative newness of computer science, I think
there will be opportunities for applying the lessons we have
learned about machine behavior to other fields for some time to come.
(Since it is only recently that the need for rigorous treatment
of models of computation has induced us to really make some
progress in understanding these things.)
------------------------------
End of AIList Digest
********************
∂28-May-86 1641 LAWS@SRI-AI.ARPA AIList Digest V4 #133
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 28 May 86 16:39:34 PDT
Date: Wed 28 May 1986 10:12-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #133
To: AIList@SRI-AI
AIList Digest Wednesday, 28 May 1986 Volume 4 : Issue 133
Today's Topics:
Reviews - Spang Robinson Report, Volume 2 No. 5 &
International Journal of Intelligent Systems,
Logic Programming - Benchmarking KBES-Tools,
Policy - Abstracts of Technical Talks,
Seminars - Analogical and Inductive Reasoning (SU) &
Reasoning about Semiconductor Fabrication (SU) &
Levels of Knowledge in Distributed Computing (SU)
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Report, Volume 2 No. 5
Summary of the Spang Robinson Report, May 1986, Volume 2, No. 5
__________________________________________________________________________
AI at Darpa, the U. S. Department of Defense's Advanced Research
Projects Agency
This year, DARPA will devote $60 million to AI research.
$26 million of this is for basic AI research not included in Strategic
Computing, $22 million is for technology-base research in Strategic
Computing, and $25 million is for large prototype applications in
Strategic Computing. In 1985, 47.5 percent of the research funding
went to industry and 40.7 percent to universities, with the remainder
going to government agencies and federal contract research institutes.
Oak Ridge National Labs is developing a system to assist in the
analysis of budgets.
List of DARPA projects in AI
Autonomous Land Vehicle project
Integration - Martin Marietta
Terrain Data Base - ETL
Vision Based Navigation - University of Maryland
ALV Route Planning Research - Hughes Laboratory
Telepresence System - Vitalink
Navy Battle Management
Force Requirements Expert System - TI
Spatial Data Management System - CCA
Combat Action Team - Naval Ocean Systems Center, CMU
Fleet Command Center Battle Management - NOSC
Commander's Display Technology - MIT
Pilot's Associate (two teams)
Team 1: Lockheed, General Electric, Goodyear Aerospace, Teknowledge,
CMU, Search Technologies Defense Systems
Team 2: McDonnell Aircraft, TI
AirLand Battle Management
System Technology definition - MIT
Soldier-Machine Interface - Lockheed
Natural Language Training Aid - Cognitive Systems
AI Planning System - Advanced Decision Systems
Message Fusion - LOGICON
Knowledge Engineering - BDM
Butterfly Benchmarking - BRL/ Los Alamos Labs
Interpretation of Reconnaissance Images
(SAIC, Advanced Decision Systems, TASC, MRJ, Mark Resources, Hughes
Aircraft)
Multiprocessor System Architectures
Tree Machines - Columbia University
Software Workbench - CMU
Programmable Systolic Array - CMU
ADA Compiler Systems - FCS, Inc
Synchronous Multiprocessor Architecture - Georgia Tech
High Performance Multiprocessor - University of California at Berkeley
VLSI Design - University of Southern California
Common Lisp Framework - USC-ISI
Data Flow Emulation Facility - MIT
Massive Memory Machine - Princeton University
Connection Machine - Thinking Machines
Natural Language
(BBN, System Development Corporation, University of Massachusetts,
University of Pennsylvania, USC-ISI, New York University, SRI)
Expert System Technology
(BBN, General Electric, Intellicorp, University of Massachusetts,
Teknowledge, Ohio State University, Stanford University)
Speech Understanding
"250 word speaker-independent system with a large vocabulary" was
demonstrated in 1986
Real Time Speech - BBN
Continuous Speech Understanding - CMU
Auditory Modelling - Fairchild
Acoustic Phonetic-Based Speech - Fairchild
Speech Data Base - TI
Acoustic Phonetics - MIT
Tools for Speech Analysis - MIT
Speech Data Base - MIT
Robust Speech Recognition - Lincoln Labs
Speech Co-Articulation - NBS
Speaker Independence - SRI
Computer Vision
Optical Avoidance and Path Planning - Hughes Research Laboratory
Parallel Algorithms - CMU
Terrain Following - CMU
Dynamic Image Interpretation - University of Massachusetts
Target Motion and Tracking - USC
Reasoning, Scene Analysis - Advanced Decision Systems
Parallel Algorithms - MIT
Spatial Representation Modelling- SRI
Parallel Environments - University of Rochester
Also:
Compact Lisp Machine - Texas Instruments
__________________________________________________________________________
Japan Watch
ICOT is developing a new personal-use Prolog workstation called
PSI-II which will be smaller and faster than the first version, PSI-I.
PSI-II is targeted to cost $55,500. 60 PSI units have already
been installed, and the operating system has been replaced by
version 2.0.
Sega Enterprises will market in mid-April a Prolog-based personal
computer for CAI for children in elementary school.
Nippon Steel Corporation and Mitsubishi have been testing PROLOG
for process control software.
At the Information Processing Society of Japan's national convention,
30 percent of the papers were AI related.
Fujitsu has a scheduling system for computers which will be used
with a total of 140 CPU's and peripherals for software development
in Fujitsu's Numazu Works.
Mitsubishi Electric has announced an expert system for making estimates
for machinery products.
NEC says it will use TMS, or dependency-directed backtracking, in its
PECE system, which will be used for diagnosis.
__________________________________________________________________________
Other:
Teknowledge announced revenue of $4 million and income of $180 thousand
for its third fiscal quarter.
Symbolics has released version 7.0 of its LISP software.
Kurzweil has raised $7 million in its third round of venture capital.
IBM has announced an expert system environment for MVS which is similar
to their product running under VM.
Battelle is developing a natural language interface for databases
which is independent of domain and DBMS. It runs on a Xerox LISP
machine and interfaces with a DBMS on a mainframe. They also
have a package for PC's which links with a mainframe and is
available in French and German.
Digitalk's Smalltalk environment, Methods, now can communicate with
remote UNIX computers.
A toolkit for the design of voice and telephone application packages,
which interfaces with TI-Speech technology, has been announced
by Denniston.
Intermetrics is beta testing its Common LISP 370 for IBM mainframes.
It includes interfaces with C and Fortran.
A District Court found that Artelligence's OPS5+ product was developed
by Computer Thought employees during their employment with
Computer Thought. Computer Thought has a judgment and permanent injunction
against Artelligence.
MIT has started a project to explore the relationship
between symbolic and numeric computing, called Mixed Computing.
------------------------------
Date: Fri 23 May 86 14:09:08-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Math/CS Library--New Journal-International Journal of
Intelligent Systems
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
We have just received volume 1, number 1, spring 1986 of the International
Journal of Intelligent Systems. Ronald R. Yager is the editor and it is
published by John Wiley and Sons. The editorial board includes the following
people: Hans Berliner, Ronald Brachman, Richard Duda, Marvin Minsky, Judea
Pearl, Dimitri Poselov, Azriel Rosenfeld, Lotfi Zadeh, Jin Wen Zhang, and
Hans Zimmerman along with others. The following articles are included
in the first issue: Constructs And Phenomena Common To The Semantically-
Rich Domains by Beth Adelson; An Intelligent Computer Vision System by
Su-shing Chen; Hierarchical Representation Of Problem-Solving Knowledge
In A Frame-Based Process Planning System by Dana S. Nau and Tien-Chien
Chang; Toward General Theory of Reasoning With Uncertainty. 1. Nonspecificity
and Fuzziness by Ronald R. Yager; and Review of Heuristics-Intelligent
Strategies for Computer Problem Solving by Judea Pearl, Henri Farreny,
and Henri Prade.
Manuscripts should be submitted to the editor, Dr. Ronald R. Yager,
International Journal of Intelligent Systems, Machine Intelligence
Institute, Iona College, New Rochelle, New York 10801. The journal
will be published quarterly and will keep a balance between the
theoretical and applied, as well as provide a venue for experimental
work.
Harry LLull
------------------------------
Date: 29 Apr 1986 18:51-EDT
From: VERACSD@USC-ISI.ARPA
Subject: Benchmarking KBES-Tools
[Forwarded from the Prolog Digest by Laws@SRI-AI.]
I have come across some recent benchmarks from NASA (U.S.
Gov't MEMORANDUM from the FM7/AI Section, April 3, 1986)
which compared various KBES tools' (ART, OP, KEE & CLIPS)
times for solving the MONKEY-AND-BANANA problem. (This
toy problem is explained in detail along with OPS source
in Brownston et al.'s "Programming Expert Systems in OPS5".)
Although the benchmarks include backward-chaining solutions
to the problem in both KEE and ART (along with forward
chaining counterparts), there is no PROLOG implementation
in the comparison. I am very interested in a PROLOG
comparison, and am in the process of implementing one.
Unfortunately, I am not (yet) a competent PROLOG programmer
and am currently learning my way around PROLOG on a DEC-20.
Consequently, any advice/suggestions re implementing this
benchmark and timing it effectively would be useful &
appreciated. (By the way, the time to beat is 1.2 secs. for a
forward-chaining implementation using ART on a 3640 with
4MB main-memory.)
I would be glad to share the results with anyone who offers
assistance. (Or for that matter with whomever is interested.)
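[As a starting point, the usual textbook Prolog encoding of the problem -- a
Bratko-style sketch, not the NASA benchmark code -- is quite compact. -- Ed.]

```prolog
% state(MonkeyAt, OnBox, BoxAt, HasBanana)
% Legal moves, tried in this order: grasp, climb, push, walk.
move(state(middle, onbox, middle, hasnot),
     grasp,
     state(middle, onbox, middle, has)).
move(state(P, onfloor, P, H),
     climb,
     state(P, onbox, P, H)).
move(state(P1, onfloor, P1, H),
     push(P1, P2),
     state(P2, onfloor, P2, H)).
move(state(P1, onfloor, B, H),
     walk(P1, P2),
     state(P2, onfloor, B, H)).

% canget(State, Plan): Plan is a move sequence that ends with the banana.
canget(state(_, _, _, has), []).
canget(State1, [Move | Rest]) :-
    move(State1, Move, State2),
    canget(State2, Rest).

% ?- canget(state(atdoor, onfloor, atwindow, hasnot), Plan).
% Plan = [walk(atdoor, atwindow), push(atwindow, middle), climb, grasp]
```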
------------------------------
Date: Tue, 27 May 1986 20:52 EDT
From: Dr. Alex Bykat <BYKAT%UTCVM.BITNET@WISCVM.WISC.EDU>
Subject: Re: Abstracts of Technical Talks Published on AI-LIST
In AIList V4 #120 Peter R.Spool writes:
>Date: 9 May 86 10:24:22 EDT
>From: PRSPOOL@RED.RUTGERS.EDU
>Subject: Abstracts of Technical Talks Published on AI-LIST
>
> None of us surely, can attend all of the talks announced via the
>AI-LIST. The abstracts which appear have served as a useful pointer for
>me to current research in many different areas. I trust this has been
>true for many of you as well. These abstracts could serve this secondary
>purpose even better, if those people who post these abstracts to the
>network, made an effort to include two additional pieces of information
>in them:
> 1) A Computer Network address of the speaker.
> 2) One or more references to any recently published material
> with the same, or similar content to the talk.
>I know that this information would help me enormously. I assume the
>same is true of others.
>
Let me echo Peter's request. On a number of occasions I had to bother the
speakers' hosts requesting precisely that kind of information. While many
of the hosts respond graciously and promptly, no doubt they are busy
enough without fending off such requests.
A. Bykat
Center of Excellence - Computer Applications
University of Tennessee
Chattanooga, TN 37402
Acknowledge-To: Dr. Alex Bykat <BYKAT@UTCVM>
[Unfortunately, the people who compose these seminar notices seldom
read AIList. Those of you who wish to influence the notice formats
should contact the authors directly. -- KIL]
------------------------------
Date: Mon 26 May 86 14:57:24-PDT
From: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>
Subject: Seminar - Analogical and Inductive Reasoning (SU)
PhD Orals Announcement
Analogical and Inductive Reasoning
Stuart J. Russell
Department of Computer Science
Stanford University
Tuesday June 3rd 9.15 a.m.
Building 370 Room 370
I show the need for the application of domain knowledge in analogical
reasoning, and propose that this knowledge must take the form of a new
class of rule called a "determination". Once determinations are given a
first-order definition, they can be used to make valid analogical
inferences; I have thus been able to implement determination-based
analogical reasoning as part of the MRS logic programming system.
In such a system, analogical reasoning can be more efficient than
rule-based reasoning for some tasks. Determinations appear to be a
common form of regularity in the world, and form a natural stage in
the acquisition of knowledge. My approach to the study of analogy
can be extended to the general problem of the use of knowledge in
induction, leading to the beginning of a domain-independent theory of
inductive reasoning. If time permits, I will also show how the concept
of determinations leads to a justification and quantitative analysis
of analogy by similarity.
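[For readers unfamiliar with the term, the idea can be sketched in a few lines
of Prolog -- an illustrative reconstruction, not code from the thesis. A
determination such as "nationality determines language" licenses copying a
Q-value between individuals that agree on P. -- Ed.]

```prolog
% A determination "P determines Q", e.g. nationality determines language.
determines(nationality, language).

% Known facts about source and target cases:
fact(nationality, pierre, france).
fact(language,    pierre, french).
fact(nationality, marie,  france).

% A valid analogical inference licensed by a determination:
% if X agrees with some Y on P, copy Y's Q-value to X.
analogical(Q, X, V) :-
    determines(P, Q),
    fact(P, X, W),
    fact(P, Y, W),
    Y \== X,
    fact(Q, Y, V).

% ?- analogical(language, marie, V).
% V = french
```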
------------------------------
Date: Tue 27 May 86 14:56:47-PDT
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminar - Reasoning about Semiconductor Fabrication (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Title: Modeling and Reasoning about Semiconductor Fabrication
Speakers: John Mohammed and Michael Klein
From: Schlumberger Palo Alto Research and Shiva Multisystems
Date: Wednesday, May 28, 1986
Time: 4:00 - 5:30
Place: Terman 556
Abstract for John Mohammed's talk:
As part of a larger effort aimed at providing symbolic, computer-aided
tools for semiconductor fabrication experts, we have developed
qualitative models of the operations performed during semiconductor
manufacture. By qualitatively simulating a sequence of these models
we generate a description of how a wafer is affected by the operations.
This description encodes the entire history of processing for the
wafer and causally relates the attributes that describe the structures
on the wafer to the processing operations responsible for creating
those structures. These causal relationships can be used to support
many reasoning tasks in the semiconductor fabrication domain,
including synthesis of new recipes, and diagnosis of failures in
operating fabrication lines.
Abstract for Michael Klein's talk:
Current integrated circuit (IC) process computer-aided design (CAD)
tools are most useful in verifying or tuning IC processes in the
vicinity of an acceptable solution. However, these highly
compute-intensive tools are often used too early and too often in the
design cycle.
Cameo, an expert CAD system, assists IC process designers in
synthesizing photolithography step descriptions before using other CAD
tools. Cameo has a modular knowledge base containing knowledge for all
levels of the synthesis process, including heuristic knowledge as well
as algorithms, formulas, graphs, and tables. It supports the parallel
development of numerous design alternatives in an efficient manner and
links to existing CAD tools such as IC process simulators.
Visitors welcome!
------------------------------
Date: Tue, 27 May 86 17:52:01 pdt
From: Vaughan Pratt <pratt@su-navajo.arpa>
Subject: Seminar - Levels of Knowledge in Distributed Computing (SU)
Speaker: Rohit Parikh
Date: Thursday, June 5, 1986
Time: 9:30-10:45
Place: MJ352
Title: Levels of Knowledge in Distributed Computing
Abstract:
It is well known that the notion of knowledge is a useful one for
understanding distributed computing; in particular, synchronous and
asynchronous communication can be distinguished by the possibility or
impossibility of achieving common knowledge. We show that knowledge of
facts in distributed systems can be at various levels, that these levels
are partially ordered, and that a characterisation of these levels can
be given which brings together knowledge, regular sets, and well partial
orderings (not the same as well-founded partial orderings).
------------------------------
End of AIList Digest
********************
∂30-May-86 1209 LAWS@SRI-AI.ARPA AIList Digest V4 #134
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 30 May 86 12:08:18 PDT
Date: Fri 30 May 1986 09:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #134
To: AIList@SRI-AI
AIList Digest Friday, 30 May 1986 Volume 4 : Issue 134
Today's Topics:
Query - MIT Research on Symbolic/Numeric Processing,
AI Tools - Functional Programming and AI & Common LISP Style,
References - Neural Networks & Lenat's AM,
Linguistics - 'Xerox' vs. 'xerox',
Psychology - Doing AI Backwards & Learning
----------------------------------------------------------------------
Date: Wed, 28 May 86 14:34:04 PDT
From: SERAFINI%FAE@ames-io.ARPA
Subject: MIT research on symbolic/numeric processing
>>AIList Digest Volume 4 : Issue 133
>>From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
>>Subject: Spang Robinson Report, Volume 2 No. 5
>>MIT has started a project to explore the relationship
>>between symbolic and numeric computing, called Mixed Computing.
Does anybody have more info about this project?
Reply to serafini%far@ames-io.ARPA
Thanks.
------------------------------
Date: 29 May 86 11:32:00 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Functional programming and AI
Date: 21 May 86 13:14:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Do working AI programs really exploit these features a lot?
Eg, do "learning" programs construct unforeseen rules, perhaps
based on generalization from examples, and then use the rules?
Or is functional programming just a trick that happens to be
easy to implement in an interpreted language?
I think this is a slightly odd characterization of `functional
programming.' Maybe I'm confused, but I always thought a `functional
language' meant (in a nutshell) that there are no side effects. In
contrast, the one important `side effect' you're talking about here is
constructing a function at runtime and squirreling it away in a
knowledge base, to be run later. In theory you could do the
squirreling by passing around the whole state of the world and
non-destructively modifying that data structure as you go, but that's
orthogonal to what you seem to be talking about (besides being
painful).
Whatever it's called -- this indistinguishability between code and
data -- it's true that it's a ``trick,'' but I think it's an important
one. In fact as I think about it now, every AI program I've ever seen
←at←some←point← passes functions around, sticks them in places like on
property lists as demons, and/or mashes together portions of bodies of
different functions and sticks the resulting lambda-expression
somewhere to run later (Well, maybe Mycin didn't (but Teiresias did)).
As far as learning programs that construct functions, it's all in the
eyes of the interpreter. A rule that is going to be run by a rule
interpreter counts as a kind of function (it's just not necessarily in
LISP per se). So, since Tom Mitchell's LEX (for example) builds and
modifies the bodies of heuristic rules which later get applied to the
integration problem, it falls in this category. Tom Dietterich's EG
does something like this too. I'm sure there are jillions of other
examples but I'm not that deep into machine learning.
And of course there's always AM (which by now should be familiar to
all readers of AiList) which (among other things) did random structure
modifications to LISP functions, then ran them to see what they did.
For example, it might start with the following definition of EQUAL:
(defun EQUAL (a b)
  (cond ((eq a b) t)
        ((and (consp a) (consp b))
         (and (EQUAL (car a) (car b))
              (EQUAL (cdr a) (cdr b))))
        (t nil)))
To generalize the function, it drops one of the conjunctions and
changes its name (including the recursive call):
(defun SOME-NEW-FUNCTION (a b)
  (cond ((eq a b) t)
        ((and (consp a) (consp b))
         (SOME-NEW-FUNCTION (cdr a) (cdr b)))
        (t nil)))
Lo and behold, SOME-NEW-FUNCTION is a new predicate meaning
something like "same length list." So there's an existence
proof at least.
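For instance, a quick check of that reading (these calls are illustrative, not from the original message):

```lisp
;; SOME-NEW-FUNCTION ignores the CARs and only walks the list spines,
;; so it ends up comparing lengths:
(SOME-NEW-FUNCTION '(1 2 3) '(a b c))  ; => T   (same length)
(SOME-NEW-FUNCTION '(1 2) '(a b c))    ; => NIL (different lengths)
```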
Walter Hamscher
------------------------------
Date: 15 May 86 17:42:18 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!michaelm@ucbvax.berkeley.edu
(michael maxwell)
Subject: Re: Common LISP style standards.
In article <3787@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
>Sequence functions and mapping functions are generally preferable to
>handwritten loops, since the Lisp wizards will probably have spent
>a lot of time making them both efficient and correct (watch out though;
>quality varies from implementation to implementation).
I'm in a little different boat, since we're using Franz rather than Common
Lisp, so perhaps the issues are a bit different when you're using Monster, I
mean Common, Lisp... so at the risk of rushing in where angels etc.:
A common situation we find ourselves in is the following. We have a long list,
and we wish to apply some test to each member of the list. However, at some
point in the list, if the test returns a certain value, there is no need to
look further: we can jump out of processing the list right there, and thus
save time. Now you can jump out of a do loop with "(return <value>)", but you
can't jump out of a mapc (mapcar etc.) with "return." So we wind up using
"do" a lot of places where it would otherwise be natural to use "mapcar". I
suppose I could use "catch" and "throw", but that looks so much like "goto"
that I feel sinful if I use that solution...
Any style suggestions?
--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 27 May 86 21:37:58 GMT
From: ulysses!mhuxr!mhuxn!mhuxm!mhuxf!mhuxi!mhuhk!mhuxt!houxm!mtuxo!mtfmt
!brian@ucbvax.berkeley.edu (B.CASTLE)
Subject: Neural Networks
For those interested in some historical references on
neural network function, the following may be of interest :
Dynamics:
NUNEZ, P.L. (1981). ELECTRIC FIELDS OF THE BRAIN. The
Neurophysics of EEG. Oxford University Press, NY.
This book contains a pretty good overview of EEG,
and also contains an interesting model of brain
dynamics based on neural network connectivity.
Learning:
OJA, E. (1983). SUBSPACE METHODS OF PATTERN RECOGNITION.
Research Studies Press, Ltd. Letchworth, Hertfordshire,
England. (John Wiley and Sons, Inc., New York.)
(For those with a PR background, and those having read
and understood Kohonen).
KOHONEN, T.
(1977) - ASSOCIATIVE MEMORY. A System-Theoretical
Approach. Springer-Verlag, Berlin.
(1980) - CONTENT ADDRESSABLE MEMORIES. Springer-
Verlag, Berlin.
(1984) - SELF-ORGANIZATION AND ASSOCIATIVE MEMORY.
Springer Series in Info. Sci. 8.
Springer-Verlag, New York.
These works provide a basic introduction to the
nature of CAM systems (frame-based only), and
the basic philosophy of self-organization in such
systems.
SUTTON, R.S. and A.G. BARTO (1981). "Toward A Modern Theory
of Adaptive Networks: Expectation and Prediction."
Psychological Review 88(2):135.
This article provides an overview of the 'tuning'
of synaptic parameters in self-organizing systems,
and a reasonable bibliography.
Classic:
MINSKY, M. and S. PAPERT (1969). PERCEPTRONS. An Introduction
to Computational Geometry. MIT Press, Cambridge, MA.
This book should be read by all neural network
enthusiasts.
In a historical context, the Hopfield model is important insofar
as it uses Monte Carlo methods to generate the network behavior.
There are many other synchronous and asynchronous neural network
models in the literature on neuroscience, biophysics, and cognitive
psychology, as well as computer and electrical engineering. I have
amassed a list of over a hundred books and articles, which I will
be glad to distribute, if anyone is interested. However, keep in
mind that the connection machines and chips are still very far
from approaching neural networks in functional capability and
diversity.
brian castle @ att (MT 2D-217 middletown, nj, 07748)
(...!allegra!orion!brian)
(...!allegra!mtfmt!brian)
------------------------------
Date: Thu, 29 May 1986 01:07 EDT
From: "David D. Story" <FTD%MIT-OZ @ MC.LCS.MIT.EDU>
Subject: Need Ref for "Automated Mathematician" by Doug Lenat
Discussion of "Automated Mathematician"
His thesis was in "Knowledge Based Systems on Artful
Dumbness"
- McGraw-Hill - 1982 ISBN 0-07-015557-7.
Wrong again... Oh well, try this one. The price is 20-odd
bucks.
Sorry. I called it Artful Dumbness because it had to rediscover
primes. In fact it is quite a study - does anyone have the
source?
Working Papers are not referenced in the thesis so the
searcher is on his own. I'm sure they must exist someplace.
Nice bibliography in the back of the Thesis.
------------------------------
Date: Thu, 8 May 86 21:01:58 cdt
From: ihnp4!uiucdcs!ccvaxa!aglew@seismo.CSS.GOV (Andy Glew)
Subject: 'Xerox' vs. 'xerox'?
>It's interesting to note that at one time, "frigidaire" (no caps) was
>considered to be a synonym for "refrigerator." Frigidaire, the
>company, fought this in order not to lose trademark status. How often
>does one hear this usage these days?
>
>Rich Alderson
>Alderson@Score.Stanford.EDU (=SU-SCORE.ARPA)
Do you speak French? Could common usage in another language lead to the loss
of trademark status?
Andy "Krazy" Glew. Gould CSD-Urbana. USEnet: ihnp4!uiucdcs!ccvaxa!aglew
1101 E. University, Urbana, IL 61801 ARPAnet: aglew@gswd-vms
------------------------------
Date: 29 May 86 10:55:41 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Doing AI backwards (from machine to man)
Date: 18 May 86 05:39:39 GMT
From: ernie.Berkeley.EDU!tedrick@ucbvax.berkeley.edu (Tom Tedrick)
More on Barry Kort's "Problem of the right-hand tail"
(ie social persecution of those with high intelligence).
My heart bleeds for those unfortunate people on the right-hand tail.
How about a Take a Genius to Lunch Week. Maybe we could get some rock
stars to do a ``Brain Aid.''
I take it this problem is distinct from the ``problem of the left-hand
tail'' and the ``problem of the right-hand tail against the big hump
in the middle''.
(* * *)
Thus we see again how women have a
brilliant gift for asking seemingly innocent favors which
are really enormously costly. The subtle nature of the problem
makes it difficult to pin down the real poison in their approach.
And it's a good thing you pointed this out. We men better watch out
for those seemingly innocent favors, *especially* from women! Hmm,
poison, you say...
Speaking of favors, please do us all a favor; keep your grim and
pathetic misogyny to yourself. Or send your ravings to bandykin.
(* * *)
I am studying how machines function in order to find better
ways for humans to function.
Why not study how machines live in order to find better ways for
humans to live. Or how machines laugh in order to find better ways
for humans to laugh. Or how machines get over their insecurities in
order to find better ways for humans to get over their insecurities.
(* * *)
(Since it is only recently that the need for rigorous treatment
of models of computation has induced us to really make some
progress in understanding these things.)
Yes, I'm sure there's a `cybernetic' explanation for all of this.
Walter Hamscher
------------------------------
Date: 9 May 86 05:02:09 GMT
From: ihnp4!ltuxa!ttrdc!levy@ucbvax.berkeley.edu (Daniel R. Levy)
Subject: Re: "The Knowledge"
In article <5500032@uiucdcsb>, schraith@uiucdcsb.CS.UIUC.EDU writes:
> It seems to me that if AI researchers wish to build a system which
> has any versatility, it will have to be able to learn, probably
> in a similar manner to the taxicab drivers. Bierre states this problem:
> "Organize a symbolic recording of an ongoing stream of fly-by
> sensory data, on the fly, such that at any given time as much as
> possible can be quickly remembered of the entire stream."
> Surely computer professionals have better things to do, ultimately,
> than spoonfeed all the knowledge to a computer it will ever need.
As nothing but an interested observer in this discussion (I am in no
wise an AI guru, so please forgive me if I bumble), your observation
indeed makes sense to me: an A.I. system could well do better by
"learning" than by having all its "smarts" hardcoded in beforehand.
But it also seems possible that once a computer system HAS been
"trained" in this way, it should be quite easy to mass produce as
many equally capable copies of that system as desired; just dump its
"memory" and reload it on other systems.
Any comments? Does a "learning" system (or one that knows how to teach
itself) indeed hold more promise than distilling expert human knowledge
and hardcoding it in? Perhaps I've answered my own question, that the
system that can "learn" is better able to adapt to new developments in
the area it is supposed to be "intelligent" in than one which is static.
Maybe the best of both worlds could apply (the distilled human knowledge
coded in as a solid base, but the system is free to expand on that base
as it "learns" more and more)?
--
------------------------------- Disclaimer: The views contained herein are
| dan levy | yvel nad | my own and are not at all those of my em-
| an engihacker @ | ployer or the administrator of any computer
| at&t computer systems division | upon which I may hack.
| skokie, illinois |
-------------------------------- Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
vax135}!ttrdc!levy
------------------------------
End of AIList Digest
********************
∂03-Jun-86 0111 LAWS@SRI-AI.ARPA AIList Digest V4 #135
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 01:11:02 PDT
Date: Mon 2 Jun 1986 22:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #135
To: AIList@SRI-AI
AIList Digest Tuesday, 3 Jun 1986 Volume 4 : Issue 135
Today's Topics:
Queries - OPS5 in PSL & Dempster-Shafer Scoring Rules &
Formal Definition of Lisp Systems & Lazy Evaluation,
Techniques - Common Lisp style
----------------------------------------------------------------------
Date: 29 May 86 19:05:27 GMT
From: mcvax!botter!klipper!fons@seismo (Fons Botman)
Subject: OPS5 in PSL request
I am looking for an OPS5 implementation in PSL.
Please mail any pointers or source to: Kievit@Hlerul5.Bitnet
For a friend
Fons Botman
fons@vu44.UUCP
------------------------------
Date: 30 May 86 01:33:15 GMT
From: kaist!cskaist!mgchung@seismo ([Mingyo Chung])
Subject: Search for scoring rules paper!
To everyone who can help:
I am broadcasting this message to ask a favor. My Master's thesis is
"Interval (or Range) Extension of Reproducing-Scoring-Rule by Dempster
and Shafer's Rule". I am therefore looking for papers on "scoring rules",
and I have found one that seems related but is not available in Korea.
The paper is:
Lindley, D.V. (1982), "Scoring rules and the inevitability of probability",
International Statistical Review, vol. 50, 1-26
Do you have this paper? If so, would you please send me a copy?
You can contact me through electronic mail
[Path:mgchung%cskaist%kaist.csnet@CSNET-RELAY]
[Address:
Mingyo Chung
Dept. of Computer Science KAIST
P.O Box 150
CheongRyang
Seoul Korea 131 ]
I'll be waiting for a good response.
Sincerely yours,
------------------------------
Date: 5 Jun 86 20:03:44 GMT
From: allegra!mit-eddie!think!caip!seismo!mcvax!euroies!rreilly@ucbvax
.berkeley.edu (Dr Ronan Reilly)
Subject: Formal definition of Lisp systems
Does anyone have references to a system which could be used to
formally define large Lisp program suites? What I have in mind
is something akin to the dataflow system for procedural languages.
Thanks in advance,
Ronan
------------------------------
Date: Mon, 2 Jun 86 09:27 N
From: DESMEDT%HNYKUN52.BITNET@WISCVM.WISC.EDU
Subject: Lisp & lazy evaluation
In AIList Digest V4 #134, Mike Maxwell reluctantly prefers the efficiency
of a hand-coded "do" construction in Lisp, although mapping a function on
a list would be more elegant. Indeed, mapping sometimes causes many
unnecessary computations. Consider the following example:
;; NB: OR is a special form, so APPLY won't accept it in Common Lisp;
;; read this as pseudocode -- the point about wasted computation stands.
(defun member (element list)
  (apply 'or (mapcar #'(lambda (list-element)
                         (eql element list-element))
                     list)))
One solution to prevent wasteful computation is a "lazy" evaluation mechanism,
which computes only as much as is needed by other computations. For example,
the mapping in the above example would be performed only up to the point where
"or" finds a non-nil value and doesn't want to evaluate any more arguments.
Anyway, I don't really want to lecture here, but I want to ask a question:
has anyone out there experimented with lazy evaluation in a Lisp system?
Are any working systems (or prototypes) available? Any good references to
the literature?
Koenraad de Smedt desmedt@hnykun52.bitnet
------------------------------
Date: 29 May 86 15:20:04 GMT
From: allegra!princeton!caip!seismo!ut-sally!utah-cs!shebs@ucbvax.berkeley
.edu (Stanley Shebs)
Subject: Re: Common LISP style standards.
In article <545@bcsaic.UUCP> michaelm@bcsaic.UUCP (michael maxwell) writes:
>I'm in a little different boat, since we're using Franz rather than Common
>Lisp
I remember Franz (vaguely)... :-)
>A common situation we find ourselves in is the following. We have a list,
>and we wish to apply some test to each member of the list. However, at some
>point in the list, if the test returns a certain value, there is no need to
>look further: we can jump out of processing the list right there, and thus
>save time.
Common Lisp provides "some", "every", "notany", and "notevery" functions,
all of which do variations of what you're asking for. They take a predicate
and one or more sequences as arguments, apply the predicate to the
elements of the sequences, and may stop in the middle. The behavior is
sufficiently specified for you to use side effects in the predicate.
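For example (standard Common Lisp behavior):

```lisp
(some   #'evenp '(1 3 4 5))  ; => T, stops as soon as 4 tests even
(every  #'evenp '(2 4 6))    ; => T
(notany #'evenp '(1 3 5))    ; => T
```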
BTW, if these four functions weren't around, Common Lisp would be smaller.
>I suppose I could use "catch" and "throw", but that looks so much like "goto"
>that I feel sinful if I use that solution...
"Sinfulness" is a silly concept that quite a few folks in the computer
community have gotten into - a sort of aftereffect of structured programming.
The *real* reason for using higher-level constructs is efficiency, both
in programmer and execution time.
stan shebs
utah-cs!shebs
------------------------------
Date: Sat, 31 May 1986 11:23 EDT
From: "Scott E. Fahlman" <Fahlman@C.CS.CMU.EDU>
Subject: Common Lisp style
A common situation we find ourselves in is the following. We have a
long list, and we wish to apply some test to each member of the list. ...
Any style suggestions?
Well, if you were using "Monster, I mean Common, Lisp..." there would be
a built-in function to handle this case. If I understand correctly what
you are asking for, the function is FIND-IF. Our attempt to meet
various needs like this is why the language is big. You can't have it
both ways.
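An illustrative call (not in the original message), showing the early exit:

```lisp
(find-if #'evenp '(1 3 4 5))  ; => 4, without ever examining 5
```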
In a Lisp without a built-in solution, the right answer is probably to
create your own FIND-IF macro and use it for this case. It creates the
same DO-loop you would have to write, but is much less confusing for the
casual reader and less prone to errors once you get the macro right.
If you find yourself wrestling with a variety of such problems, there
are several iteration packages available that provide a more
perspicuous syntax for the user and that create efficient DO loops of
various kinds. Your Franz Lisp vendor can probably point you to a
version of the LOOP facility that will run on your system. Something of
this sort will probably find its way into standard Common Lisp
eventually, but we are having a hard time deciding on a syntax that we
all can live with.
-- Scott
------------------------------
Date: 1-Jun-86 14:52:27
From: Dan Cerys <Cerys%TILDE%ti-csl.csnet@CSNET-RELAY.ARPA>
Subject: Re: Common LISP style standards
Date: 15 May 86 17:42:18 GMT
From: tektronix!uw-beaver!ssc-vax!bcsaic!michaelm@ucbvax.berkeley.edu
(michael maxwell)
A common situation we find ourselves in is the following. We have a
long list, and we wish to apply some test to each member of the list. ...
It sounds like the function you want is MEMBER-IF. This takes two
required arguments, a predicate and a list. As soon as the predicate
succeeds on one of the elements of the list, the tail of the list is
returned, else NIL is returned.
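Concretely (an illustrative call, not from the original message):

```lisp
(member-if #'evenp '(1 3 4 5))  ; => (4 5), the tail starting at the match
(member-if #'evenp '(1 3 5))    ; => NIL
```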
There is nothing wrong with using DO or any of the mapping functions, as
long as you are using the "best" function for the task. In the case
you've described, MEMBER-IF is perfect because it immediately conveys to
the reader (which may be yourself months after you've written it) what
is being tested for. DOs and RETURNs can hide this meaning. Another
useful variant of DO is DOLIST, which is similar to MAPC (and preferred
by many). Within our group, we prefer to use the mapping functions only
where they appear to be "natural" to the task (eg, list
transformations). But granted, what is "best" and "natural" depends a
lot on your background and approach to Lisp.
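A DOLIST with an early exit, for comparison (my example, standard Common Lisp):

```lisp
(dolist (x '(1 3 4 5))         ; DOLIST wraps its body in a NIL block,
  (when (evenp x) (return x))) ; so RETURN exits early  => 4
```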
------------------------------
Date: 30 May 86 17:05:44 GMT
From: decvax!cca!lmi-angel!rpk@ucbvax.berkeley.edu (Bob Krajewski)
Subject: Re: Common LISP style standards.
In article <> michaelm@bcsaic.UUCP (michael maxwell) writes:
>In article <3787@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
>>Sequence functions and mapping functions are generally preferable to
>>handwritten loops, since the Lisp wizards will probably have spent
>>a lot of time making them both efficient and correct (watch out though;
>>quality varies from implementation to implementation).
This is very true. It will be interesting to see how Lisp compiler
technology meets the challenge...
>A common situation we find ourselves in is the following. ...
There are two Common Lispy ways of doing this. The first is to use the
function (SOME predicate sequence &rest more-sequences), which returns the
first non-NIL result of applying the predicate to each set of
elements of the sequences (as with map). Since this is a generic sequence
function that can take either vectors or lists, you'll probably want to
write something like
(some #'(lambda (x)
          (when (wonderful-p (sibling x)) (father x)))
      (the list a-list))
A good compiler would do two things here: first, notice that the only
sequence is a list, so the ``stepping'' function for the sequence type
(CDR, with CAR for element selection) is known in advance; and second,
since that is so, open-code the loop, generating the DO-like thing that
you would have otherwise written by hand.
Another way is to use CATCH and THROW. When the THROW is lexically visible
from the CATCH, very good code can be generated in certain cases. As for
whether it's icky or not: at least the CATCH delimits the scope within
which the ``goto'' is valid.
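A small sketch of that pattern (my example, not from the original message), with the THROW lexically visible from the CATCH:

```lisp
(catch 'found
  (mapc #'(lambda (x)
            (when (evenp x) (throw 'found x)))
        '(1 3 4 5))
  nil)  ; => 4; falls through to NIL if nothing matches
```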
--
Robert P. Krajewski
Internet/MIT: RPK@MC.LCS.MIT.EDU
UUCP: ...{cca,harvard,mit-eddie}!lmi-angel!rpk
------------------------------
Date: 1 Jun 86 17:03:30 GMT
From: allegra!princeton!caip!topaz!harvard!bu-cs!bzs@ucbvax.berkeley.edu
(Barry Shein)
Subject: Re: Common LISP style standards.
[re: Franz Lisp]
>Now you can jump out of a do loop with "(return <value>)", but you
>can't jump out of a mapc (mapcar etc.) with "return." So we wind up using
>"do" a lot of places where it would otherwise be natural to use "mapcar". I
>suppose I could use "catch" and "throw", but that looks so much like "goto"
>that I feel sinful if I use that solution...
>Mike Maxwell
>Boeing Artificial Intelligence Center
Howsabout:
(defun foo (x)
  (prog nil
    (mapc '(lambda (y)
             (cond ((null y) (return 'DONE))
                   (t (print y))))
          x)))
try for example (foo '(a b nil c d))
-Barry Shein, Boston University
------------------------------
Date: 2 Jun 86 17:10:26 GMT
From: hplabs!hplabsc!dsmith@ucbvax.berkeley.edu (David Smith)
Subject: Re: Common LISP style standards.
> In article <3787@utah-cs.UUCP> shebs@utah-cs.UUCP (Stanley Shebs) writes:
> >Sequence functions and mapping functions are generally preferable to
> >handwritten loops, ...
> I'm in a little different boat, since we're using Franz rather than Common
> Lisp, so perhaps the issues are a bit different ...
> Mike Maxwell
CMU incorporated functions of CMUlisp into Franz, and these are apparently
shipped with Franz: at least, on my computer, they are in
/usr/src/ucb/lisp/lisplib/cmufncs.l. One of these functions is the
function some.
(some 'mylist 'func1 'func2)
returns the first tail of mylist for which func1 of its car returns a
non-nil value. Otherwise nil is returned. Successive tails of mylist
are obtained by repeated application of func2 (usually cdr, or nil,
which implies cdr). A nice cover macro for this is "exists".
Example:
(exists i '(2 5 3 8 4 1) (> i 6))
returns (8 4 1).
David Smith
HP Labs
------------------------------
End of AIList Digest
********************
∂03-Jun-86 0325 LAWS@SRI-AI.ARPA AIList Digest V4 #136
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 03:25:13 PDT
Date: Mon 2 Jun 1986 23:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #136
To: AIList@SRI-AI
AIList Digest Tuesday, 3 Jun 1986 Volume 4 : Issue 136
Today's Topics:
Conferences - DOD Decision Aiding (Man-Machine Interfaces) &
SLP '86 Program and Tutorial Abstracts
----------------------------------------------------------------------
Date: 30 May 86 10:03:00 EDT
From: "MATHER, MICHAEL" <mather@ari-hq1.ARPA>
Reply-to: "MATHER, MICHAEL" <mather@ari-hq1.ARPA>
Subject: Conference - DOD DECISION AIDING CONF, CALL FOR PAPERS ON MMI
FOURTH ANNUAL WORKSHOP ON COMMAND AND CONTROL DECISION AIDING
NOVEMBER 4-6, 1986
US AIR FORCE MUSEUM AUDITORIUM
WRIGHT-PATTERSON AFB
DAYTON, OH
As is stated in the name of the conference, the workshop will address
decision aiding in military command and control systems. U.S. citizenship and
at least a Secret clearance are required for attendance. Sessions will be
presented in the following areas:
I. Requirements
II. Technology
III. Man-Machine Interface
IV. Test and Evaluation
V. Training Systems
VI. Applications
There will also be a Round Table discussion at the end of the conference.
I am the Chair for the session on Man-Machine Interface. Anyone
working on a Man-Machine Interface project related to command and control,
intelligence, decision aiding, etc. and interested in presenting a paper at
the conference is urged to contact me as soon as possible. I must make a
decision on papers to be presented by 15 Aug 86.
CPT Mike Mather
US Army Research Institute
ATTN: PERI-SF
5001 Eisenhower Ave.
Alexandria, VA 22333-5600
Phone: (202) 274-5477/5482
(AVN) 284-5477/5482
DDN: MATHER@ARI-HQ1
------------------------------
Date: Wed, 28 May 86 15:47:12 MDT
From: keller@utah-cs.ARPA (Bob Keller)
Subject: Conference - SLP '86 Program and Tutorial Abstracts
[Note: this is not the same as the 3rd Int. Conf. on Logic Programming,
London, July 14-18, that was announced in V4 #113. -- KIL]
SCHEDULE
SLP '86
Third IEEE Symposium on
LOGIC PROGRAMMING
September 21-25, 1986
Westin Hotel Utah
Salt Lake City, Utah
Conference Chairperson
Gary Lindstrom, University of Utah
Program Chairperson Local Arrangements Chairperson
Robert M. Keller, University of Utah Thomas C. Henderson, University of Utah
Tutorials Chairperson Exhibits Chairperson
George Luger, University of New Mexico Ross Overbeek, Argonne National Lab.
Program Committee
Francois Bancilhon, MCC William Kornfeld, Quintus Systems
John Conery, University of Oregon Gary Lindstrom, University of Utah
Al Despain, U.C. Berkeley George Luger, University of New Mexico
Herve Gallaire, ECRC, Munich Rikio Onai, ICOT/NTT, Tokyo
Seif Haridi, SICS, Stockholm Ross Overbeek, Argonne National Lab.
Lynette Hirschman, SDC Mark Stickel, SRI International
Peter Kogge, IBM, Owego Sten Ake Tarnlund, Uppsala University
SUNDAY, September 21
19:00 - 22:00 Symposium and tutorial registration
MONDAY, September 22
08:00 - 09:00 Symposium and tutorial registration
09:00 - 17:30 TUTORIALS (concurrent) Please see attached abstracts.
George Luger Introduction to AI Programming in Prolog
University of New Mexico
David Scott Warren Building Prolog Interpreters
SUNY, Stony Brook
Neil Ostlund Theory of Parallelism, with Applications to
Romas Aleliunas Logic Programming
University of Waterloo
12:00 - 17:30 Exhibit set up time
18:00 - 22:00 Symposium registration
20:00 - 22:00 Reception
TUESDAY, September 23
08:00 - 12:30 Symposium registration
09:00 Exhibits open
09:00 - 09:30 Welcome and announcements
09:30 - 10:30 INVITED SPEAKER: W. W. Bledsoe
Some Thoughts on Proof Discovery
11:00 - 12:30 SESSION 1: Applications
The Logic of Tensed Statements in English -
an Application of Logic Programming
Peter Ohrstrom, University of Aalborg
Nils Klarlund, University of Aarhus
Incremental Flavor-Mixing of Meta-Interpreters for
Expert System Construction
Leon Sterling and Randall D. Beer
Case Western Reserve University
The Phoning Philosopher's Problem or
Logic Programming for Telecommunications Applications
J.L. Armstrong, N.A. Elshiewy, and R. Virding
Ericsson Telecom
14:00 - 15:30 SESSION 2: Secondary Storage
EDUCE - A Marriage of Convenience:
Prolog and a Relational DBMS
Jorge Bocca, ECRC, Munich
Paging Strategy for Prolog Based Dynamic Virtual Memory
Mark Ross, Royal Melbourne Institute of Technology
K. Ramamohanarao, University of Melbourne
A Logical Treatment of Secondary Storage
Anthony J. Kusalik, University of Saskatchewan
Ian T. Foster, Imperial College, London
16:00 - 17:30 SESSION 3: Compilation
Compiling Control
Maurice Bruynooghe, Danny De Schreye, Bruno Krekels
Katholieke Universiteit Leuven
Automatic Mode Inference for Prolog Programs
Saumya K. Debray, David S. Warren
SUNY at Stony Brook
IDEAL: an Ideal DEductive Applicative Language
Pier Giorgio Bosco, Elio Giovannetti
C.S.E.L.T., Torino
17:30 - 19:30 Reception
20:30 - 22:30 Panel (Wm. Kornfeld, moderator)
Logic Programming for Systems Programming
WEDNESDAY, September 24
09:00 - 10:00 INVITED SPEAKER: Sten Ake Tarnlund
Logic Programming - A Logical View
10:30 - 12:00 SESSION 4: Theory
A Theory of Modules for Logic Programming
Dale Miller
University of Pennsylvania
Building-In Classical Equality into Prolog
P. Hoddinott, E.W. Elcock
The University of Western Ontario
Negation as Failure Using Tight Derivations for General Logic Programs
Allen Van Gelder
Stanford University
13:30 - 15:00 SESSION 5: Control
Characterisation of Terminating Logic Programs
Thomas Vasak, The University of New South Wales
John Potter, New South Wales Institute of Technology
An Execution Model for Committed-Choice
Non-Deterministic Languages
Jim Crammond
Heriot-Watt University
Timestamped Term Representation in Implementing Prolog
Heikki Mannila, Esko Ukkonen
University of Helsinki
15:30 - 22:00 Excursion
THURSDAY, September 25
09:00 - 10:30 SESSION 6: Unification
Refutation Methods for Horn Clauses with Equality
Based on E-Unification
Jean H. Gallier and Stan Raatz
University of Pennsylvania
An Algorithm for Unification in Equational Theories
Alberto Martelli, Gianfranco Rossi
Universita' di Torino
An Implementation of Narrowing: the RITE Way
Alan Josephson and Nachum Dershowitz
University of Illinois at Urbana-Champaign
11:00 - 12:30 SESSION 7: Parallelism
Selecting the Backtrack Literal in the
AND Process of the AND/OR Process Model
Nam S. Woo and Kwang-Moo Choe
AT & T Bell Laboratories
Distributed Semi-Intelligent Backtracking for a
Stack-based AND-parallel Prolog
Peter Borgwardt, Tektronix Labs
Doris Rea, University of Minnesota
The Sync Model for Parallel Execution of Logic Programming
Pey-yun Peggy Li and Alain J. Martin
California Institute of Technology
14:00 - 15:30 SESSION 8: Performance
Redundancy in Function-Free Recursive Rules
Jeff Naughton
Stanford University
Performance Evaluation of a Storage Model for
OR-Parallel Execution
Andrzej Ciepelewski and Bogumil Hausman
Swedish Institute of Computer Science (SICS)
MALI: A Memory with a Real-Time Garbage Collector
for Implementing Logic Programming Languages
Yves Bekkers, Bernard Canet, Olivier Ridoux, Lucien Ungaro
IRISA/INRIA Rennes
16:00 - 17:30 SESSION 9: Warren Abstract Machine
A High Performance LOW RISC Machine
for Logic Programming
J.W. Mills
Arizona State University
Register Allocation in a Prolog Machine
Saumya K. Debray
SUNY at Stony Brook
Garbage Cut for Garbage Collection of Iterative Programs
Jonas Barklund and Hakan Millroth
Uppsala University
EXHIBITS:
An exhibit area including displays by publishers, equipment manufacturers, and
software houses will accompany the Symposium. The list of exhibitors includes:
Arity, Addison-Wesley, Elsevier, Expert Systems, Logicware, Overbeek
Enterprises, Prolog Systems, Quintus, and Symbolics. For more information,
please contact:
Dr. Ross A. Overbeek
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Ave.
Argonne, IL 60439
312/972-7856
ACCOMMODATIONS:
The Westin Hotel Utah is a gracious turn-of-the-century hotel with Mobil 4-Star
and AAA 5-Star ratings. The Temple Square Hotel, located one city block away,
offers basic comforts for budget-conscious attendees.
MEALS AND SOCIAL EVENTS:
Symposium registrants (excluding students and retired members) will receive
tickets for lunches on September 23, 24, and 25, receptions on September 22 and
23, and an excursion on the afternoon of September 24. The excursion will
comprise a steam train trip through scenic Provo Canyon and a barbecue at
Deer Valley Resort, Park City, Utah.
Tutorial registrants will receive lunch tickets for September 22.
TRAVEL:
The Official Carrier for SLP '86 is United Airlines, and the Official Travel
Agent is Morris Travel (361 West Lawndale Drive, Salt Lake City, Utah 84115,
phone 1-800-621-3535). Special airfares are available to SLP '86 attendees.
Contact Morris Travel for details.
A courtesy limousine is available from Salt Lake International Airport to both
symposium hotels, running every half hour from 6:30 to 23:00. The taxi fare is
approximately $10.
CLIMATE:
Salt Lake City generally has warm weather in September, although evenings may
be cool. Some rain is normal this time of year.
SLP '86 Symposium and Tutorial Registration:
Advance symposium and tutorial registration is available until September 1,
1986. No refunds will be made after that date. Send a check or money order (no
currency will be accepted) payable to "Third IEEE Symposium on Logic
Programming" to:
Third IEEE Symposium on Logic Programming
IEEE Computer Society
1730 Massachusetts Avenue, N.W.
Washington, D.C. 20036-1903
Symposium Registration: Advance On-Site
IEEE Computer Society members $185 $215
Non-members $230 $270
Full-time student members $ 50 $ 50
Full-time student non-members $ 65 $ 65
Retired members $ 50 $ 50
Tutorial Registration: ("Luger", "Warren", or "Ostlund")
Advance On-Site
IEEE Computer Society members $140 $170
Non-members $175 $215
SLP '86 Hotel Reservation:
Mail or Call: phone 801-531-1000, telex 389434
Westin Hotel Utah
Main and South Temple Streets
Salt Lake City, UT 84111
A deposit of one night's room or credit card guarantee is required for arrivals
after 6pm.
Room Rates (circle your choice):
Westin Hotel Utah Temple Square Hotel
single room $60 $30
double room $70 $36
Reservations must be made mentioning SLP '86 by August 31, 1986 to guarantee
these special rates.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
SLP '86 TUTORIAL ABSTRACTS
IMPLEMENTATION OF PROLOG INTERPRETERS AND COMPILERS
DAVID SCOTT WARREN
SUNY AT STONY BROOK
Prolog is by far the most widely used of the various logic programming languages
that have been proposed, largely because of the existence of very efficient
implementations. This tutorial will show in detail how this efficiency is
achieved.
The first half of this tutorial will concentrate on Prolog compilation. The
approach is first to define a Prolog Virtual Machine (PVM), which can be
implemented in software, microcode, hardware, or by translation to the language
of an existing machine. We will describe in detail the PVM defined by D.H.D.
Warren (SRI Technical Note 309) and discuss how its data objects can be
represented efficiently. We will also cover issues of compilation of Prolog
source programs into efficient PVM programs.
ARTIFICIAL INTELLIGENCE AND PROLOG:
AN INTRODUCTION TO THEORETICAL
ISSUES IN AI WITH PROLOG EXAMPLES
GEORGE F. LUGER
UNIVERSITY OF NEW MEXICO
This tutorial is intended to introduce the important concepts of both
Artificial Intelligence and Logic Programming. To accomplish this task, the
theoretical issues involved in AI problem solving are presented and discussed.
These issues are exemplified with programs written in Prolog that implement the
core ideas. Finally, the design of a Prolog interpreter as a Resolution
Refutation system is presented.
The main ideas from AI problem solving that are presented include: 1) an
introduction to AI as representation and search; 2) an introduction to the
Predicate Calculus as the main representation formalism for Artificial
Intelligence; 3) simple examples of Predicate Calculus representations,
including a relational data base; 4) unification and its role in both the
Predicate Calculus and Prolog; 5) recursion, the control mechanism for
searching trees and graphs; 6) the design of search strategies, especially
depth-first, breadth-first, and best-first or "heuristic" techniques; and
7) the Production System and its use both for organizing search in a Prolog
data base and as the basic data structure for "rule-based" Expert Systems.
The above topics are presented with simple Prolog program implementations,
including a Production System code for demonstrating search strategies. The
final topic presented is an analysis of the Prolog interpreter and an analysis
of this approach to the more general issue of logic programming. Resolution is
considered as an inference strategy and its use in a refutation system for
"answer extraction" is presented. More general issues in AI problem solving,
such as the relation of "logic" to "functional" programming are also discussed.
PARALLELISM IN LOGIC PROGRAMMING
NEIL OSTLUND
ROMAS ALELIUNAS
UNIVERSITY OF WATERLOO
The fields of parallel processing and logic programming have independently
attracted great interest among computing professionals recently, and there is
currently considerable activity at the interface, i.e. in applying the concepts
of parallel computing to logic programming and, more specifically yet, to
Prolog. The application of parallelism to Logic Programming takes two basic
but related directions. The first involves leaving the semantics of sequential
programming, say ordinary Prolog, as intact as possible, and uses parallelism,
hidden from the programmer, to improve execution speed. This has traditionally
been a difficult problem requiring very intelligent compilers. It may be an
easier problem with logic programming since parallelism is not artificially
made sequential, as with many applications expressed in procedural languages.
The second direction involves adding new parallel programming primitives to
Logic Programming to allow the programmer to explicitly express the parallelism
in an application.
This tutorial will assume a basic knowledge of Logic Programming, but will
describe current research in parallel computer architectures, and will survey
many of the new parallel machines, including shared-memory architectures (RP3,
for example) and non-shared-memory architectures (hypercube machines, for
example). The tutorial will then describe many of the current proposals for
parallelism in Logic Programming, including those that allow the programmer to
express the parallelism and those that hide the parallelism from the
programmer. Included will be such proposals as Concurrent Prolog, Parlog,
Guarded Horn Clauses (GHC), and Delta-Prolog. An attempt will be made to
partially evaluate many of these proposals for parallelism in Logic
Programming, from both a pragmatic architectural viewpoint and a
semantic viewpoint.
------------------------------
End of AIList Digest
********************
∂03-Jun-86 0543 LAWS@SRI-AI.ARPA AIList Digest V4 #137
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 05:43:27 PDT
Date: Mon 2 Jun 1986 23:39-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #137
To: AIList@SRI-AI
AIList Digest Tuesday, 3 Jun 1986 Volume 4 : Issue 137
Today's Topics:
Review - Expert Systems Strategies,
Psychology - Simulating Insect Behavior,
Bindings & AI Tools - Thinking Machine Inc. & Connection Machines
----------------------------------------------------------------------
Date: Wed 28 May 86 11:43:43-CDT
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Expert Systems Strategies
At a recent AI conference, copies of the April 86 issue of the monthly
newsletter "Expert Systems Strategies" were being distributed.
Almost 14 of the 16 pages dealt with defining "mid-sized tools" and
comparing three of them -- M.1, NExpert and Personal Consultant. The
overall comparison was informative and I think would provide valuable
information to potential buyers of these or any expert system tools.
However, there were a number of shortcomings that made me wonder
whether the newsletter came close to justifying its $20+ price per
issue ($247 per year).
First, the comparison was at best on a par with those found in PC
World, Byte, MacUser, etc. But those magazines give you 75-200 pages
of information for $3-4. Of course, they have 75-200 pages of
advertising to help keep their prices down. But the ads are useful
too, and the absence of advertising in "ES Strategies" has not bought
it any apparent degree of independence. The authors of the comparison
(Brian Sawyer and Paul Harmon) seem to be very careful not to step
very hard on anyone's toes. They point out shortcomings, but in an
overly nice fashion. They end up recommending all three products to
various markets. (I would be hard-pressed to recommend one, perhaps
two, of them to anyone.)
Another complaint with "ES Strategies" is the number of errors. The
worst of these concerns a small knowledge base, called Beta, that was
used to test the features of the various systems. Beta is fully
defined, and for each system, a partial representation is shown. Each
representation has at least two, and as many as four, errors. Most
errors simply give variables the wrong values, while some misname
variables or actually misrepresent the knowledge. They also make some
confusing representation choices. E.g., there are two variables, alpha and
beta-1, which can take on the values HIGH and LOW. In one tool, they
introduce a variable alpha ranging over HIGH and LOW, and a Boolean
beta-1-high. Finally, there is some evidence that the authors did not
even test the products hands-on, especially NExpert and Personal
Consultant. The figures in the review that show the representations
are not actual screens or direct printouts from the systems.
Moreover, the two figures that clearly are copies of actual screens
come from the vendor literature and reviews in other magazines, rather
than from their own extended examples with Beta. In addition, there
is no hard performance or benchmark data.
Of course, there were two pages of "ES Strategies" besides the
mid-sized tool discussion. These were devoted to news items and a
calendar of ES events. It was rather standard fare, readily available
in InfoWorld, Datamation, etc. in the same timeframe. The news was
grouped together in one place, but was by no means exhaustive in its
coverage.
In summary, based on this issue, I might spend $10-20 for a year's
subscription, but certainly not for a single issue. Like many of the
AI tools themselves, it seems overpriced by at least an order of
magnitude. However, in case you're interested in finding out for
yourself, "Expert Systems Strategies" is published by
Cutter Information Corp.
1100 Massachusetts Avenue
Arlington, MA 02174-9990
Phone: (617) 648-8700
Telex: 650 100 9891 MCI UW
Dallas Webster
CMP.BARC@R20.UTexas.Edu
{ihnp4 | seismo | ctvax}!ut-sally!batman!dallas
------------------------------
Date: 27 May 86 18:06:54 GMT
From: ihnp4!ihlpg!portegys@ucbvax.berkeley.edu (Tom Portegys)
Subject: Insect rituals
Observing the amazing behavior of a wasp constructing its
nest, I was brought to wonder at the procedures which bring
about this performance. It occurred to me that
if complete algorithms were to be constructed for the
complexity of the wasp's environment, such as lighting,
nest location, and material, then the algorithms
would also have to be incredibly complex.
If, however, rituals were used - small pieces of invariant
stimulus-response behavior which serve to signify the state of the
world, then I would guess that a simple set of algorithms may be
made to operate in a complex environment.
I remember not too long ago hearing that someone had indeed found
something like this to be true for a certain insect - that if
it was disrupted in its nest building at some point, then it would
go back to the beginning and start over.
This also reminds me of Simon's Ant, which looks like it is
employing a very complicated procedure to cross a pebble covered
beach, but most of the complexity is in the environment, not the
ant.
A ritual would be a highly structured stream of stimulus-response
pairs. It must have an initiating stimulus (or stimuli). If, in the course
of performing the ritual, an incomplete condition is observed,
for example, a circular nest area is not complete, then it would
trigger another behavior to rectify the situation, for example,
go get more nesting material.
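This picture of a ritual, an ordered stream of stimulus-response pairs with repair behaviors for incomplete conditions, can be sketched as a small program. A minimal Python illustration (the ritual steps, repair table, and function names are invented for the example, not taken from any insect study):

```python
# A ritual as an ordered list of (expected_stimulus, response) pairs.
# If the expected stimulus for a step is absent from the world, a repair
# behavior runs (e.g. "fetch more mud") and the step is retried.

def run_ritual(steps, world, repairs, max_tries=10):
    """steps: list of (stimulus, response); world: set of currently true
    stimuli; repairs: stimulus -> action assumed to establish it."""
    actions = []
    for stimulus, response in steps:
        tries = 0
        while stimulus not in world and tries < max_tries:
            actions.append(repairs[stimulus])
            world.add(stimulus)  # assume the repair behavior succeeds
            tries += 1
        actions.append(response)
    return actions

nest_ritual = [("site-found", "lay-foundation"),
               ("mud-available", "build-wall"),
               ("wall-complete", "cap-nest")]
repairs = {"site-found": "search-for-site",
           "mud-available": "fetch-mud",
           "wall-complete": "add-more-mud"}

print(run_ritual(nest_ritual, {"site-found", "wall-complete"}, repairs))
# -> ['lay-foundation', 'fetch-mud', 'build-wall', 'cap-nest']
```

The point of the sketch is that the control program stays tiny; the apparent complexity of the behavior comes from which stimuli the environment happens to supply.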
In addition to nest building behavior, homing
abilities could be explained by specific pattern matching
schemes. For example, in order to find its nest, a bee
may rely on a specific pattern of ground figures (which
may be transformed internally to account for the position
of the sun). Upon not finding the specific pattern, it
would commence a seeking behavior (perhaps a spiral search)
until the pattern matches.
The often elaborate courting and signaling rituals of insects also
tend to support the idea that a specific sequence
of stimulus-response pairs is at work.
I also think that in simpler life forms, the senses of
smell and hearing are overlooked in importance by the
vision-dominated creatures that we are. With smell and hearing,
a simple set of mechanisms can be utilized to provide effective
seeking performance for food, home, mates, etc., since the
organism simply moves in the direction of the stronger odor or
sound.
Tom Portegys, ..ihnp4!ihlpg!portegys
AT&T Bell Labs
------------------------------
Date: 29 May 86 01:51:00 GMT
From: cad!nike!ll-xn!mit-amt!bc@ucbvax.berkeley.edu (William H Coderre)
Subject: References from my thesis
A while back I sent a note asking for references in regard
to animal behavior simulation using rule systems.
Well, my thesis is done, and here's the references I got.
(I know that some of them are not very complete. If I had
more info, I woulda put it in...)
If anyone desperately needs any of the below, or wants a copy of
my thesis, feel free to drop me a note. I'll try to help out....bc
←←←←←←←←
Agre, Phil, Routines, MIT AI Laboratory Memo 828, May 1985.
Alkon, Daniel L., Learning in a Marine Snail, Scientific American,
June 1983, pp 70 - 84.
Amari, Tom, and Druin, Allison, The Role of Graphics in Expert Systems,
MIT Visible Language Workshop Memo {available through author of
paper}, May 1986.
Batali, John, Computational Introspection, MIT AI Lab Memo number 701,
February 1983.
Braitenberg, Valentino, Vehicles: Experiments in Synthetic
Psychology, MIT Press, 1984.
Camhi, Jeffrey M., The Escape System of the Cockroach, Scientific
American, December 1982, pp 158 - 172.
Davis, James R., Pesce, computer program modelling fish behavior,
September, 1983.
Davis, Randall, Meta-Rules: Reasoning about Control, MIT AI Lab Memo
number 576, March 1980.
Greilich, Horst, Vehicles, software package for Apple Macintosh, MIT
Press, 1986.
Hofstadter, Douglas R., The Copycat Project: An Experiment in
Nondeterminism and Creative Analogies, MIT AI Lab Memo number
755, January 1984.
Jacobs, Walter, How a Bug's Mind Works {paper in unidentified book;
author has listed affiliation with American University, Washington
DC}.
Kay, Alan C., Trial Vivarium Curriculum, Trial User Interface, Trial
Vivarium Graphics and Animation, Trial Vivarium Moist Models,
1985 {four papers on aspects of the Vivarium project, available by
contacting author}.
Kay, Alan C., Computer Software, chapter of Computer Software,
Scientific American, 1984.
Kehler, Thomas P., and Clemenson, Gregory D., KEE, The Knowledge
Engineering Environment for Industry, IntelliGenetics, Inc., 1983
{paper available from IntelliGenetics, 124 University Ave, Palo Alto,
CA}.
Lenat, Douglas B., Beings: Knowledge as interacting experts, Proceedings
of the Fourth IJCAI, pp 126 - 133, 1975.
Lenat, Douglas B., and Harris, Gregory, Designing a Rule System that
Searches for Scientific Discoveries, CMU Department of Computer
Science.
Lenat, Douglas B., and Brown, John Seeley, Why AM and EURISKO
Appear to Work, Artificial Intelligence, 1984, pp 269 - 294.
Lorenz, Konrad, King Solomon's Ring, Thomas Y. Crowell Company,
1952.
MacLaren, Lee S., A production system architecture based on biological
examples, PhD thesis, U. Washington, Seattle, 1978 {available as
University Microfilms order number 79-17604}.
Minsky, Marvin, The Society of Mind, Simon and Schuster, 1986 or
1987 {this author greatly thanks Professor Minsky for a
pre-publication copy of his forthcoming book}.
Robot Odyssey I, The Learning Company {computer game widely
available}.
Stefik, Mark, et al., Knowledge Programming in Loops: Report on an
Experimental Course, AI Magazine, Fall 1983.
Various Authors, The Brain, Scientific American, 1979.
------------------------------
Date: 29 May 86 07:04:18 GMT
From: cad!nike!think!bruce@ucbvax.berkeley.edu (Bruce J. Nemnich)
Subject: Re: Help on Thinking Mach. Inc.
Hi. Yes, Thinking Machines is indeed on the net, though we don't have
many people here who read news. Let me try to address your questions.
I'm adding net.ai to the distribution since they are probably
interested in the Connection Machine System, too.
> * How is the CM programmed?
> ...
> The Connection Machine Lisp manual is, evidently, unavailable to those
> of us who do not have a million dollars to by a CM.
The CMLisp manual isn't available because the language is still in the
design and implementation phases; i.e., it's not stable enough yet to
give out manuals. It certainly has nothing to do with money. It will
be a lot like the language in Danny Hillis's book (which was written
before we had working hardware). I hear there will be a paper on it
by Danny and Guy Steele at an upcoming Lisp conference; sorry I don't
know the details.
Many people have many different ideas about how to program the CM, and
there has already been quite an evolution of lisp-based languages here
in the last couple of years. The language we are distributing with
the first machines is *Lisp ("star-lisp"), which consists of a large
collection of functions and macros for defining and manipulating
parallel datatypes which live in the CM from lisp.
There is also a language being implemented called C* ("C-star"), which
is an extension to C which handles CM datatypes and control flow. The
extensions, which have the syntactic flavor of C++, provide
sophisticated ways of defining layouts of data on the CM, provide
control flow, etc.
There's no one answer to "How does one program the CM?", but most
applications to which the CM (or any fine-grained SIMD architecture)
is particularly well-suited have some kind of inherent data-level
parallelism; i.e., similar computations on a large number of data
points. Typically each datum is assigned a processing element.
Given that, there are two common classes of problems: graph problems
and grid problems. For example, the CM's "connections" give the
ability for one datum to point to other(s), so arbitrary data graphs
can be constructed. Such a graph could be searched for a given datum
in constant time: every processor just compares its value to the
desired value (regular SIMD, no use of pointers). Distance on the
graph between two data can be found in time proportional to the
distance by chasing pointers (fanning out) in parallel from one until
one of the paths reaches the other. Simple logic simulation can be
done by setting up such a graph with the outputs of one element
pointing to an input of another. Each element would compute output
values based on its inputs and pass them along to the next level.
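The constant-time broadcast-and-compare step described above can be mimicked sequentially. A hedged Python sketch (a simulation only; on a real fine-grained SIMD machine every processor performs its comparison in the same cycle, and the function names here are illustrative, not Thinking Machines' API):

```python
# Sequential simulation of a SIMD broadcast-and-compare: each element of
# the list stands for one processor's datum; all "processors" test their
# value against the broadcast target.

def simd_find(processor_values, target):
    # flags[i] is True where processor i matched the broadcast value
    return [v == target for v in processor_values]

values = [7, 3, 9, 3, 1]
flags = simd_find(values, 3)
print(flags)       # [False, True, False, True, False]
print(any(flags))  # a global-OR line would report whether any processor matched
```

The pointer-chasing distance computation works the same way: instead of comparing, each selected processor forwards a mark along its stored pointer each cycle until the marks meet.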
The grid problems are those which also rely on communication between
elements but don't require general pointers; they only require local
communication on a cube or grid. The Connection Machine has
facilities for local communication on the cube or grid which are
faster than the more general pointer-like mechanism.
> Imagine a message is traveling through the CM and it encounters a hot
> spot. The message is rerouted. Just after it is rerouted, the
> congestion clears and the next message to that destination goes
> through the direct binary n-cube path. The two messages may now be
> out of order, since the second message could arrive before the first.
Messages are addressed to a given processor and memory address within
the processor. Usually an operation is something like "everyone who
is currently selected send the N bits in your memory beginning at S to
the processor whose address is stored in your memory at M, and put
them in his memory at D." If there are collisions in destinations
(two or more are both trying to send to the same processor/address),
they are combined in some way (current choices are: IOR, AND, XOR,
ADD, MIN, MAX, OVERWRITE (overwrite means just one is delivered)).
There is no order to messages within a delivery cycle. One could sum
a field over all the processors by having everyone send the field to
processor 0 and specifying to combine collisions by adding them
together (there are more efficient ways of doing this, though).
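The send-with-combine operation just described can be sketched sequentially. A minimal Python simulation (the function name and message format are invented for illustration; the real machine delivers and combines in hardware within one delivery cycle):

```python
# Simulation of a CM-style "send with combine": messages addressed to the
# same processor/address are merged with a combining operator (the text
# lists IOR, AND, XOR, ADD, MIN, MAX, and OVERWRITE as the choices).

def send_with_combine(messages, combine):
    """messages: list of (destination, value); returns dest -> combined value."""
    mailbox = {}
    for dest, value in messages:
        mailbox[dest] = combine(mailbox[dest], value) if dest in mailbox else value
    return mailbox

# Summing a field over all processors by sending everything to processor 0
# and combining collisions with ADD:
msgs = [(0, 5), (0, 2), (0, 7)]
print(send_with_combine(msgs, lambda a, b: a + b))  # {0: 14}
```

Swapping in `max` or `min` for the combiner models the MAX and MIN delivery modes the same way.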
> * How does the CM handle processor failure?
> How are messages rerouted to avoid the failed PE?
You're right, fault tolerance is very important. We have thought about
it, though there's a lot we want to implement which isn't yet in the
hardware. The current hardware/software basically assumes things are
working. Reasonably quick diagnostics can be run to verify with high
confidence that nothing's broken. Almost all failures are on CM
matrix boards, of which there are 128 identical copies in a 64k-proc
CM. They're easy to swap.
But since we think a 64k CM is small, serious fault-tolerance is
necessary. The "router" is the general message-routing mechanism, one
for every 16 processors, currently arranged in a hypercube. There is
currently hardware support for turning off paths (a cube edge) between
routers. If a wire between two routers was broken, that path could be
turned off in software. If a whole router was broken (or perhaps the
processors which belong to it), all paths to it could be turned off.
If a message would normally go over that path, it would instead go
another direction (preferably, but not necessarily, another direction
the message wanted to go).
However, fault tolerance on the local grid and cube communication is
more of a problem, since applications using them typically rely on the
regular topology. A glitch in a 2-d grid, for instance, could be
ignored by effectively bridging out that row and column of the grid.
We don't have any support for that, though.
> Thinking Machines has been, at least with me, very secretive about the
> CM.
> Once a machine is released for sale, information must be released to
> sell it.
Absolutely. On behalf of whomever you talked to, sorry. We don't
mean to be secretive. We have just been through the period of getting
ready to announce the machine, scrambling to gather and write the
kind of information you want, and trying to put in place mechanisms to
distribute it. We WANT people to know and think lots about Connection
Machines. Hope I've been of help.
Just in case I'm supposed to say this, "Connection Machine" is a
registered trademark of Thinking Machines Corporation. :-)
--
--Bruce Nemnich, Thinking Machines Corporation, Cambridge, MA
--bruce@think.com, ihnp4!think!bruce; +1 617 876 1111
------------------------------
End of AIList Digest
********************
∂04-Jun-86 0034 LAWS@SRI-AI.ARPA AIList Digest V4 #138
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 00:33:56 PDT
Date: Tue 3 Jun 1986 22:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #138
To: AIList@SRI-AI
AIList Digest Wednesday, 4 Jun 1986 Volume 4 : Issue 138
Today's Topics:
Queries - Lisp for Silicon Graphics Machines &
Conditional Independence in Possibility & Curve Fitting Software,
Techniques - Lazy Evaluation,
Psychology - Inside Out,
Description - UMich Cognitive Science and Machine Intelligence Lab
----------------------------------------------------------------------
Date: Tue, 03 Jun 86 10:36:40 -0400
From: ritter@dewey.udel.EDU
Subject: Lisp for Silicon Graphics machines
Our research group (in Chemical Engineering) at the University
of Delaware has just purchased a
Silicon Graphics 3030 workstation, a new version of the 2400 turbo.
We are interested in obtaining a version of LISP to use with the machine.
Does anyone know what version/company is the best to use?
We plan to build an expert system to predict phase behavior and
are using the Silicon Graphics to display the desired graphics output.
So far, the only company we know of is Franz Inc. in CA, which sells
a version of Franz Lisp.
(We are a little inclined towards a version of Common Lisp, since it is
somewhat more universal.)
Any comments or suggestions would be welcomed.
Thank you,
Joe Ritter
ritter@dewey.udel.edu
------------------------------
Date: 3 Jun 86 02:59:23 GMT
From: sdcsvax!caip!seismo!kaist!cskaist!dhkim2@ucbvax.berkeley.edu
(Doohyun Kim)
Subject: Conditional independence in possibility
Hi!
I'm DOOHYUN KIM at KAIST(Korea Advanced Institute of Science and Technology).
This is the first time I have posted to these newsgroups.
I am now searching for some papers about "conditional independence of
possibility"; these papers are very important for my Master's thesis.
Does anyone have the following papers?
Do you have any ideas about how to get these papers?
[1] E. Hisdal, "Conditional possibilities - Independence and
non-interactivity," Fuzzy Sets and Systems, vol. 1, pp 283-297, 1978.
[2] E. Hisdal, "A fuzzy 'if then else' relation with guaranteed
correct inference," in Applied Systems and Cybernetics, G.E. Lasker,
Ed. New York: Pergamon, pp. 2906-2911; also in Fuzzy Set and Possibility
Theory: Recent Developments, R. R. Yager, Ed. New York: Pergamon, 1982,
pp 204-210.
[3] H. T. Nguyen, "On conditional possibility distributions,"
Fuzzy Sets and Systems, vol. 1, pp 299-309, 1978.
Please send a copy to me, if you have.
electronic mail path : dhkim2%cskaist%kaist.csnet@CSNET-REPLY
mail address : DOOHYUN KIM
Dept. of Computer Science,
P.O. BOX 150, CHEONGRYANG,
SEOUL, KOREA 150
------------------------------
Date: Mon 2 Jun 86 12:07:28-PDT
From: Charlie Koo <KOO@su-sushi.arpa>
Subject: Query: curve fitting
I'm interested in getting some information about available software (for IBM
PC) for doing curve fitting. More specifically, given the digitized image
of a circle or part of a circle (2-D), how could we decide:
. whether it is a circle
. what the center and radius of the circle should be?
Thanks.
Charlie
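[One standard technique for this kind of problem, offered as a hedged sketch rather than a pointer to any particular PC package, is the algebraic least-squares circle fit: solve for D, E, F in x^2 + y^2 + D*x + E*y + F = 0 by linear least squares, recover the center (-D/2, -E/2) and radius, and use the residuals to judge whether the points really form a circle. All names below are illustrative:]

```python
# Algebraic least-squares circle fit: fit x^2 + y^2 + D*x + E*y + F = 0
# to the digitized points, then recover center and radius.
import math

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_circle(points):
    # Accumulate the 3x3 normal equations for the unknowns (D, E, F)
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        z = -(x * x + y * y)
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * z
    D, E, F = solve3(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    r = math.sqrt(cx * cx + cy * cy - F)
    return (cx, cy), r

# Points on an arc of the circle with center (1, -3) and radius 2:
pts = [(1 + 2 * math.cos(t), -3 + 2 * math.sin(t))
       for t in [0.1 * k for k in range(20)]]
(cx, cy), r = fit_circle(pts)
print(round(cx, 6), round(cy, 6), round(r, 6))
```

Checking how far each point's distance from the fitted center deviates from r answers the "is it a circle?" part of the question: small, unstructured residuals suggest a circle, large or systematic ones do not.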
------------------------------
Date: Tue, 3 Jun 86 10:41 EDT
From: Stephen G. Rowley <SGR@SCRC-STONY-BROOK.ARPA>
Subject: Lisp & Lazy Evaluation in AIList Digest V4 #135
Date: Mon, 2 Jun 86 09:27 N
From: DESMEDT%HNYKUN52.BITNET@WISCVM.WISC.EDU
In AIList Digest V4 #134, Mike Maxwell reluctantly prefers the efficiency
of a hand-coded "do" construction in Lisp, although mapping a function on
a list would be more elegant. Indeed, mapping sometimes causes many
unnecessary computations. Consider the following example:
(defun member (element list)
(apply 'or (mapcar #'(lambda (list-element)
(eql element list-element))
list)))
I can't help but point out that, if you're using Common Lisp, this
function is strange on several accounts:
[1] It shadows MEMBER, which can't be a good idea.
[2] It tries to return the first thing in the list eql to element,
whereas the real MEMBER returns a tail of the list or NIL.
[3] It attempts to apply OR, which is a special form and hence cannot be
applied. In this case, you'd use SOME instead.
Obviously, though, I'm just quibbling with your example. Let's move on:
Your statements about the general wastefulness of mapping functions are
true, if you restrict yourself to MAPCAR and friends. However, if you
write your own mapping functions, they can be quite elegant.
Here's an example. I was writing a discrimination net for a pattern
database. Given a pattern, it would search a database for things that
might unify with it, and do something to all of them. (See, for
example, Charniak, Riesbeck, & McDermott's "Artificial Intelligence
Programming", chapters 11 & 14.) For example, a program might want to
print everything that unified with the pattern (foo a ?x), where ?x is a
variable.
The first implementation cried out for lazy evaluation; I didn't want to
compute a list of all the patterns because of consing effects. The
top-level search function returned a stream object (simulation of
laziness) which could be prodded to produce the next answer:
(loop with stream = (search-for-pattern '(foo a ?x))
for next = (next-element stream)
while next
doing (print next))
The second implementation got smarter and made the callers of the search
function package up their intentions in a closure. The search function
would then apply that closure to patterns that it found. The result is
something very mapping-like:
(search-for-pattern '(foo a ?x) #'print)
The second implementation also turned out to be faster and consed less,
although it did use up some more stack space than the first.
Moral: Appropriate use of function closures can often (although not
always) satisfy your needs for lazy evaluation.
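The same contrast can be sketched outside Lisp. A rough Python analogue of the two implementations (illustrative names; a generator stands in for the lazy stream, and a passed-in closure plays the role of the packaged-up intentions):

```python
# Two ways to consume matches from a search without building the full list:
# (1) a lazy stream (generator), prodded for the next answer;
# (2) handing the searcher a closure to apply to each match it finds.

def search_lazy(patterns, predicate):
    # generator: yields one match at a time, computing nothing in advance
    for p in patterns:
        if predicate(p):
            yield p

def search_with_callback(patterns, predicate, callback):
    # callback style: the searcher applies the caller's closure directly
    for p in patterns:
        if predicate(p):
            callback(p)

db = [("foo", "a", 1), ("bar", "b", 2), ("foo", "a", 3)]
is_foo_a = lambda p: p[:2] == ("foo", "a")

stream = search_lazy(db, is_foo_a)
print(next(stream))  # only the first match is computed

found = []
search_with_callback(db, is_foo_a, found.append)
print(found)         # every match, delivered through the closure
```

As in the Lisp case, the callback version avoids the stream bookkeeping at the cost of giving up the caller's control over when the next answer is produced.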
------------------------------
Date: Sat, 31 May 86 15:38:33 bst
From: gcj%qmc-ori.uucp@Cs.Ucl.AC.UK
Subject: Inside Out.
This posting is a tangential response to Pat Hayes' posting
in AIList Vol 4 # 125. It is obvious to every child that two
things cannot exist in the same place at once. But a child
does not know what is the other side of the cradle. A child
(and therefore the adult) can never fully expand its spatial
reasoning beyond what the eye can see. Hence, we enter a new
realm; fantasy. For example:-
It is even possible to believe that if I walk into this room,
I will leave reality and enter into a fantasy, eg a film, OR
I wake up one morning and am afraid to open the door in case
I do not recognise the landscape outside.
We carry childhood stories and myths with us to the grave; we
remember the lessons we learnt not only in books and from our
schooling, but also the fairy stories, eg "Alice Through the
Looking Glass" and "The Lion, the Witch and the Wardrobe".
This is more about the distinction between fantasy and reality
than to do with spatial intuition. In the Mind's I, there is
a discussion on whether or not a simulation inside a computer
of a hurricane is any different from the machine's perception
of the real event. To me there would be a world of difference!
In "The Teachings of Don Juan: A Yaqui Way of Knowledge" by
Carlos Castaneda, the author describes, at some point in the
book, his transformation into a bird. His foreword begins with
the sentence, "This book is both ethnography and allegory."
But my reading is that he wants you to *believe* his story.
"Choose your own paradigm of reality." -- The Joka.
Gordon Joly
ARPA: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Sat, 31 May 86 15:59:20 bst
From: gcj%qmc-ori.uucp@Cs.Ucl.AC.UK
Subject: Inside Out - Postscript.
The best model we have for the universe is Einstein's theory
of general relativity, which models the cosmos as a 4-dimensional
pseudo-Riemannian spacetime. The geometry of such a model is far
from intuitive.
Gordon Joly
ARPA: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Mon 2 Jun 86 10:24:35-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Inside Out
Some thoughts on Pat Hayes' question:
My own intuition is that either 1) the Tardis is simply bigger
inside, as you suggest, or 2) the doorway is a portal to another
dimension or reality that could contain anything at all. One way
to test the intuition is to ask "What would happen if I cut a
hole through the wall?" The answer would not completely distinguish
the two cases, since it is quite possible that the entire wall
(inside and out) is a portal that maps between two realities;
cutting a hole would create a new door with exactly the same properties
as the original, which would tell us nothing. My own guess, though,
is that cutting a hole from outside the Tardis would reveal some
kind of "machinery" or peculiar spatial structure (such as a gel,
crystal, or other "matrix"), while cutting a hole from the inside
would let you out into Gallifrey, Dr. Who's natural environment.
Such portals have been frequent in science fiction, and the exact
properties that we infer for each depend on the author's presentation.
Some conceptions do lead to greater difficulties than others. In
Robots Have No Tails, Lewis Padgett (a pseudonym) wrote about a box that
was larger inside [partly] because it mapped into the future and the
universe was shrinking. This led to difficulties at the interface:
things put inside would shrink, but only after a few seconds, and it
is not clear what would happen to a single object such as your hand
that extended across the portal for that length of time. I find
Pat's "shrinking" hypothesis untenable for this reason. Similar
problems arise at the boundary if the doorway is a transporter.
Another such box is the chest that appears in one episode of the
Dungeons and Dragons cartoon on TV. Move it to a particular place,
open it, and you are likely to find a stairway to an alternate
reality. (This is rather like the holes in time used in the Time
Bandits movie.) The D&D chest has the property that the spatial
mapping between realities is fixed and that the portal itself
moves between them. I assume that the box cannot be moved while
open, which helps cover the main conceptual difficulty: why realities
only connect at certain points, and what happens if the box straddles
two such points.
As for something simply being bigger inside, this doesn't bother me.
As we move, we somehow update our internal maps of our surroundings.
As I turn my head, I somehow rotate my mapping of where everything is
relative to my focal direction. It seems unlikely that I actually
store and update a position vector for every book on my bookshelves;
instead I must be storing relative positions of items in the room and
the relative orientation of myself and the room. One could argue that
our natural tendency to build walls, and perhaps even our tendency to
build rectangular rooms, arises from the mental savings in building
these maps in hierarchically partitioned modules with related coordinate
frames. If we store spatial relations in such a manner, it is easy
to see how the spatial relationships inside a box need not be strongly
linked to those outside the box. Just as we can move the box and its
contents as a whole, we can expand it as a whole (on the inside only!)
without affecting our mapping of the rest of the world.
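The hierarchical-map idea above can be made concrete. The following is a minimal sketch (hypothetical Python, not anything from the posting): each object stores only its offset within a parent frame, so relocating a container changes the world position of everything inside it without touching any of the stored coordinates, just as the paragraph describes.

```python
# Hypothetical sketch of hierarchically partitioned spatial maps: positions
# are stored relative to a parent frame, and world coordinates are derived
# by walking up the frame hierarchy.

class Frame:
    def __init__(self, offset=(0, 0), parent=None):
        self.offset = offset      # position relative to the parent frame
        self.parent = parent

    def world_position(self):
        x, y = self.offset
        if self.parent is not None:
            px, py = self.parent.world_position()
            return (px + x, py + y)
        return (x, y)

room = Frame((0, 0))
box = Frame((2, 3), parent=room)    # the box sits in the room
book = Frame((1, 2), parent=box)    # the book is stored relative to the box

print(book.world_position())  # (3, 5)

box.offset = (5, 1)  # move the box; the book's stored offset is untouched
print(book.world_position())  # (6, 3)
```

Note that moving the box updates the book's world position "for free"; by the same token, nothing in the hierarchy prevents a frame's interior coordinates from spanning a larger region than the frame occupies in its parent, which is exactly the bigger-inside-than-outside decoupling suggested above.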
As for two things not occupying the same space, that's not really true.
It all depends on what you mean by "thing". A forest and a tree occupy
overlapping space. Properties such as color and texture certainly
coexist. Fish and streams seem to interoccupy, and groping around
in streams and holes may be a task for which our spatial decoupling
evolved. I wouldn't even be surprised if fish were perceptually
bigger on the inside than on the outside, since they disgorge a lot of
"stuff" when you open them up that is perceived at a different level
of detail than is the smooth exterior of a fish.
-- Ken Laws
------------------------------
Date: Sun, 1 Jun 86 20:06:44 EDT
From: Gary←M.←Olson%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: Description - UMich Cognitive Science and Machine Intelligence Lab
THE COGNITIVE SCIENCE AND MACHINE INTELLIGENCE LABORATORY
The University of Michigan
Ann Arbor, Michigan
The Cognitive Science and Machine Intelligence Laboratory (CSMIL) is an
interdisciplinary organization, spanning the fields of artificial
intelligence, cognitive science, and human-computer interaction. It is
sponsored by three colleges at the University of Michigan: the Graduate
School of Business Administration, the College of Engineering, and the
College of Literature, Science, and the Arts (LSA). Its mission is to
facilitate faculty research and graduate training, with a special focus on
cross-college collaborations.
CSMIL faculty are interested in a variety of specific topics in cognition,
such as vision, attention, learning, reasoning, and problem-solving,
whether they are in humans or machines. Some are also interested in
designing and evaluating the interface between humans and computer systems,
or in developing computer tools to augment and extend human cognition.
CSMIL faculty have a broad range of experience in methods relevant to these
problems, including such areas as the design of special computer
architectures, software design and evaluation, artificial intelligence
programming, and the analysis of human cognition.
CSMIL has a range of specific activities. It sponsors various
seminars, colloquia, conferences, and workshops on the U of M campus in
order to facilitate interdisciplinary intellectual exchange. Many of
these are open to the general technical community in the area as well as to
U of M faculty and students. Periodically, CSMIL will sponsor a single set
of focused intellectual activities designed to stimulate progress in one
particular research frontier, devoting an entire semester or academic year
to intellectual exploration of an important topic. CSMIL coordinates
financial support for faculty projects, and also assists in developing
shared research facilities. CSMIL also takes an active role in
disseminating results from the research of U of M faculty through several
publication series tailored to either specific technical audiences or a
more general readership. Finally, CSMIL has a Corporate Affiliates
Program through which U of M faculty and their peers in corporations can
interact on a regular basis.
For further information about any of these activities, contact:
Gary M. Olson, Director
Cognitive Science and Machine Intelligence Laboratory
The University of Michigan
904 Monroe Street
Ann Arbor, Michigan 48109
313-747-4948
Net address: Gary←Olson%UMich-MTS.Mailnet@MIT-Multics.Arpa
------------------------------
End of AIList Digest
********************
∂04-Jun-86 0313 LAWS@SRI-AI.ARPA AIList Digest V4 #139
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 03:13:21 PDT
Date: Tue 3 Jun 1986 22:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #139
To: AIList@SRI-AI
AIList Digest Wednesday, 4 Jun 1986 Volume 4 : Issue 139
Today's Topics:
Literature - New Category Codes & Technical Reports #1
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: new category codes
AI15 Truth Maintenance and Non-Monotonic logic
AI16 General AI, Unclassifiable AI, Theory of AI, Philosophy of AI
(things that need one of these classifications but don't fit
in any particular one)
AA26 manufacturing
AA27 space
O06 useful algorithms, e.g. string matching, computational geometry
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: some definitions
D BOOK28 IEEE International Conference on Robotics and Automation\
%D April 7-10 1986\
%C San Francisco, CA
D MAG19 Soviet Engineering Research\
%V 5\
%N 4\
%D APR 1985
D MAG20 Data Processing\
%V 28\
%N 1\
%D JAN - FEB 1986
D MAG21 Information Systems\
%V 11\
%N 1\
%D 1986
D BOOK29 STACS 86, Third Annual Symposium on Theoretical Aspects of Computer Science\
%E B. Monien\
%E G. Vidal-Naquet\
%S Lecture Notes in Computer Science\
%V 210\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1986\
%X $20.50 soft bound ISBN 3-540-16078-7
D MAG22 IBM Journal of Research and Development\
%V 30\
%N 1\
%D JAN 1986
D MAG23 Computer Design\
%V 25\
%N 4\
%D FEB 15 1986
D BOOK30 Rewriting Techniques and Applications\
%E J. P. Jouannaud\
%S Lecture Notes in Computer Science\
%V 202\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1986\
%X 440 pages 23 chapters $22.80 ISBN 3-540-15976-2
D MAG24 Optical Engineering\
%V 25\
%N 3\
%D MAR 1986
D BOOK31 Mathematical Methods for Investigating the\
Natural Resources of the Earth from Space\
%I Nauka\
%C Moscow\
%D 1984
D BOOK32 Mathematization of Scientific Knowledge: Paths\
and Trends\
%I Kazan Gos. Univ\
%C Kazan\
%D 1984
D BOOK33 International Symposium on Programming\
%S Lecture Notes in Computer Science\
%V 202\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1986
D BOOK34 Industrial Applications of Fuzzy Control\
%E M. Sugeno\
%I North Holland\
%D 1985
D BOOK35 Logics of Programs\
%S Lecture Notes in Computer Science\
%V 193\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG24 Mechanism and Machine Theory\
%V 20\
%N 6\
%D 1985
D MAG25 Pattern Recognition\
%V 18\
%N 6\
%D 1985
D MAG26 Information and Control\
%V 63\
%N 1-2\
%D OCT-NOV 1984
D MAG27 The Journal of the Operations Research Society\
%V 37\
%N 1\
%D JAN 1986
D MAG28 Johns Hopkins APL Technical Digest\
%V 7\
%N 1\
%D JAN-MAR 1986
D MAG29 Journal of Robotic Systems\
%V 3\
%N 1\
%D SPRING 1986
D BOOK36 Logics and Models of Concurrent Systems (La Colle-sur-Loup 1984)\
%S NATO Adv. Sci. Inst. Ser. F: Comput. Systems Sci\
%V 13\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK37 Combinatorial Algorithms on Words (Maratea, 1984)\
%S NATO Adv. Sci. Inst. Ser. F: Comput. Systems Sci.\
%V 12\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK38 Theoretical Aspects of Reasoning About Knowledge\
%E Joseph Y. Halpern\
%I Morgan Kaufmann Publishers, Inc.\
%C Palo Alto, CA\
%D 1986\
%X ISBN 0-934613-0404 $18.95
D BOOK39 Eurocal 85, Volume 1\
%S Lecture Notes in Computer Science\
%V 203\
%E B. Buchberger\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK40 Foundations of Software Technology and Theoretical Computer Science\
%S Lecture Notes in Computer Science\
%V 206\
%E S. N. Maheshwari\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG30 Werkstattstechnik WT Zeitschrift fur Industrielle Fertigung\
%V 76\
%N 1\
%D JAN 1986
D MAG31 International Journal of Robotics Research\
%V 4\
%N 4\
%D Winter 1986
D BOOK41 Artificial Intelligence: Toward Practical Application\
%S GDI Technology Assessment and Management\
%V 1\
%E T. Bernold\
%E G. Albers\
%I North Holland Publishers Co\
%C Amsterdam\
%D 1985
D MAG32 Manufacturing Engineering\
%V 96\
%N 4\
%D APR 1986
D MAG33 Intech\
%V 33\
%N 4\
%D 1986
D MAG34 Robotersysteme\
%V 2\
%N 1\
%D 1986
D MAG35 Robotica\
%V 3\
%N Part 4\
%D OCT-DEC 1985
D MAG36 Journal of Dynamic Systems, Measurement and Control\
%V 107\
%N 4\
%D DEC 1985
D MAG37 Robotersysteme\
%V 1\
%N 4\
%D 1985
D MAG38 International Journal of Man-Machine Studies\
%V 23\
%N 3\
%D SEP 1985
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #1
%A John W. Lloyd
%A Rodney W. Topor
%T Making Prolog More Expressive
%R Technical Report 84/8
%I Department of Computer Science, University of Melbourne
%D June 1984
%P 22
%K first order logic, programming in logic, deductive databases,
query language AI10 AA14 AA09
%O also in Journal of Logic Programming, vol.4, 1984
%X This paper introduces extended programs and extended goals for logic
programming. A clause in an extended program can have an arbitrary first
order formula as its body. Similarly, an extended goal can have an arbitrary
first order formula as its body. The main results of the paper are the
soundness of the negation as failure rule and SLDNF-resolution for extended
programs and goals. We show how the increased expressibility of extended
programs and goals can be easily implemented in any PROLOG system which has
a sound implementation of the negation as failure rule. We also show how
these ideas can be used to implement first order logic as a query language
in a deductive database system. An application to integrity constraints in
deductive database systems is also given.
%A Jacek Gibert
%T J-Machine User's Manual
%R Technical Report 84/10
%I Department of Computer Science, University of Melbourne
%D November 1984
%P 42
%K functional programming, graph reduction machines, combinators
H03
%X
This manual describes an experimental software implementation of a
combinatory reduction machine called the J-Machine. The J-Machine is
mainly oriented towards symbol manipulation and it aims at exposing
flows of data, and a high degree of parallelism in ordinary functional
programs. It executes directly the J'' reduction language which is
based upon a variant of the full combinatory theory. The J'' language
has an associated algebra of functions which allows a user to prove
properties of functional programs with the assistance of the J-Machine.
With the advent of hardware concepts like data driven reduction
architectures, VLSI implementation of the J-Machine appears to be an
attractive proposition. But at the present, the J-Machine is an
interactive interpreter written in the C programming language under
the Unix operating system.
%A Lee Naish
%T Prolog Control Rules
%R Technical Report 84/13
%I Department of Computer Science, University of Melbourne
%D September 1984
%P 12
%K AI10 computational rules
%O a shortened version appeared in Proceedings of the International
Joint Conference on Artificial Intelligence, Los Angeles, 1985
%A Joxan Jaffar
%A Jean-Louis Lassez
%A Michael J. Maher
%T A Logic Programming Language Scheme
%R Technical Report 84/15
%I Department of Computer Science, University of Melbourne
%D November 1984
%P 22
%K theory, semantics AI10 T02
%O also appeared in "Logic Programming: Relations, Functions and Equations",
D.DeGroot and G.Lindstrom (eds.), Prentice-Hall, 1985
%X Numerous extended versions of PROLOG are now emerging. In order to
provide greater versatility and expressive power, some versions allow
functional programming features, others allow infinite data structures.
However, there is concern that such languages may have little connection
left with logic. In some instances, various logical frameworks have been
proposed to solve this problem. Nevertheless, the crucial point has not
been addressed: the preservation of the unique semantic properties of
logic programs. The significance of our effort here is twofold:
(1) There is a natural logic programming language scheme wherein these
properties hold. (2) Formal foundations for extended versions of
traditional PROLOG can be obtained as instances of this scheme. They
automatically enjoy its properties.
%A T. Y. Chen
%A Jean-Louis Lassez
%A Graeme S. Port
%T Maximal Unifiable Subsets and Minimal Non-unifiable Subsets
%R Technical Report 84/16
%I Department of Computer Science, University of Melbourne
%D November 1984
%P 20
%K unification, backtracking, resolution AI11
%A John W. Lloyd
%A Rodney W. Topor
%T A Basis for Deductive Database Systems
%R Technical Report 85/1
%I Department of Computer Science, University of Melbourne
%D February 1985 (revised April 1985)
%P 22
%K logic programming, first order logic, soundness, integrity constraints
AA14 AA09 T02
%X
This paper provides a theoretical basis for deductive database systems.
A deductive database consists of closed typed first order logic formulas
of the form A<-W, where A is an atom and W is a typed first order formula.
A typed first order formula can be used as a query and a closed typed first
order formula can be used as an integrity constraint. Functions are allowed
to appear in formulas. Such a deductive database system can be implemented
using a PROLOG system. The main results are the soundness of the query
evaluation process, the soundness of the implementation of integrity
constraints, and a simplification theorem for implementing integrity
constraints. A short list of open problems is also presented.
%A Richard A. Helm
%A Catherine Lassez
%A Kimball G. Marriott
%T Prolog for Expert Systems: An Evaluation
%R Technical Report 85/3
%I Department of Computer Science, University of Melbourne
%D June 1985
%P 20
%K TEAS, MARLOWE, knowledge representation AI01 T02
%O also in "Proceedings of Expert Systems in Government", Virginia, 1985
%A John W. Lloyd
%A Rodney W. Topor
%T A Basis for Deductive Database Systems II
%R Technical Report 85/6
%I Department of Computer Science, University of Melbourne
%D February 1985 (revised April 1985)
%P 17
%K AI10 first order logic, soundness, integrity constraints,
query evaluation AA09 AA14
%X
This paper is the third in a series providing a theoretical basis for
deductive database systems. A deductive database consists of closed typed
first order logic formulas of the form A<-W, where A is an atom and W is a
typed first order formula. A typed first order formula can be used as a
query and a closed typed first order formula can be used as an integrity
constraint. Functions are allowed to appear in formulas. Such a deductive
database system can be implemented using a PROLOG system. The main results
of this paper are concerned with the non-floundering and completeness of
query evaluation. We also introduce an alternative query evaluation process
and show that corresponding versions of the earlier results can be obtained.
Finally, we summarize the results of the three papers and discuss the
attractive properties of the deductive database system approach based on
first order logic.
%A Lee Naish
%T The MU-Prolog 3.2 Reference Manual
%R Technical Report 85/11
%I Department of Computer Science, University of Melbourne
%D October 1985
%P 17
%K AI10 T02
%X
MU-PROLOG is (almost) upward compatible with DEC-10 PROLOG, C-PROLOG
and (PDP-11) UNIX PROLOG. The syntax and built-in predicates are therefore
very similar. A small number of DEC-10 predicates are not available and
some have slightly different effects. There are also some MU-PROLOG
predicates which are not defined in DEC-10 PROLOG. However most DEC-10
programs should run with few, if any, alterations.
However, MU-PROLOG is not intended to be a UNIX PROLOG look-alike.
MU-PROLOG programs should be written in a more declarative style.
The "non-logical predicates" such as cut (!), \\=, not and var are
rarely needed and should be avoided. Instead, the soundly implemented
not (~), not equals (~=) and if-then-else should be used and wait
declarations should be added where they can increase efficiency.
This is a reference manual only, not a guide to writing PROLOG programs.
%A Lee Naish
%T Negation and Control in PROLOG
%R Technical Report 85/12
%I Department of Computer Science, University of Melbourne
%D September 1985
%P 108
%K T02 resolution, PhD thesis
%X
We investigate ways of bringing PROLOG closer to the ideals of logic
programming, by improving its facilities for negation and control.
The forms of negation available in conventional PROLOG systems are
implemented unsoundly, and can lead to incorrect solutions. We discuss
several ways in which negation as failure can be implemented soundly.
The main forms of negation considered are "not", "not-equals",
"if-then-else" and all solutions predicates. The specification and
implementation of all solutions predicates is examined in detail.
Allowing quantifiers in negated calls is an extension which is easily
implemented and we stress its desirability, for all forms of negation.
We propose other enhancements to current implementations, to prevent
the computation aborting or looping infinitely, and also outline
a new technique for implementing negation by program transformation.
Finally, we suggest what forms of negation should be implemented in
future PROLOG systems.
%A Lee Naish
%T Negation and Quantifiers in NU-Prolog
%R Technical Report 85/13
%I Department of Computer Science, University of Melbourne
%D October 1985
%P 12
%K T02 control
%X We briefly discuss the shortcomings of negation
in conventional Prolog systems. The design and implementation of the
negation constructs in NU-Prolog are then presented. The major difference
is the presence of explicit quantifiers. However, several other
innovations are used to extract the maximum flexibility from current
implementation techniques. These result in improved treatment of
"if", existential quantifiers, inequality and non-logical primitives.
We also discuss how the negation primitives of NU-Prolog can be
added to conventional systems, and how they can improve the
implementation of higher level constructs.
%A Michael J. Maher
%T Semantics of Logic Programs
%R Technical Report 85/14
%I Department of Computer Science, University of Melbourne
%D September 1985
%P 77
%K logic programming theory, fixed points, fixedpoints, PhD thesis
AI10
%X
This thesis deals with the semantics of definite clause logic programs
in the presence of an equality theory.
Definite clauses are the formal foundation of the PROLOG
programming language.
Definitions of functions and abstract data types use equality.
Many have suggested the incorporation of these features
into a logic programming language
and already there are many of these languages.
This thesis provides a formal foundation for such languages.
The treatment consistently factors out the equality
theory to obtain the effect of a scheme:
any equality theory which satisfies some appropriate conditions
can be used as part of the programming language.
%A Philip W. Dart
%A Justin A. Zobel
%T Conceptual Schemas Applied to Deductive Databases
%R Technical Report 85/16
%I Department of Computer Science, University of Melbourne
%D November 1985
%P 29
%K prolog, query language, graphical interface, conceptual schema,
deductive database AI10
%X Much of the information required in the formulation of a query
is inherent in the database structure.
First order logic is a powerful query language, but does not exploit
this structure or provide an accessible interface for naive users.
A new conceptual schema formalism, based directly on logic, provides
the necessary description of the database structure.
Its graphical representation is
the basis for a simple, concise graphical query language with
the expressive power of first order logic.
%A Kotagiri Ramamohanarao
%A John A. Shepherd
%T A Superimposed Codeword Indexing Scheme for Very Large Prolog Databases
%R Technical Report 85/17
%I Department of Computer Science, University of Melbourne
%D November 1985
%P 20
%K partial match retrieval, Prolog, hashing, descriptors, optimization
T02 AA09
%X This paper describes a database indexing scheme,
based on the method of superimposed codewords,
which is suitable for dealing with very large databases of Prolog clauses.
Superimposed codeword schemes provide a very efficient method of retrieving
records from large databases in only a small number of disk accesses.
This system supports the storage and retrieval of general Prolog terms,
including functors and variables,
and it is possible to store Prolog rules in the database.
%A James A. Thom
%A Kotagiri Ramamohanarao
%A Lee Naish
%T A Superjoin Algorithm for Deductive Databases
%R Technical Report 86/1
%I Department of Computer Science, University of Melbourne
%D February 1986
%P 10
%K partial match retrieval, prolog, hashing, joins, optimization, database
relational, deductive AI10 AA09
%X
This paper describes a join algorithm suitable for deductive and
also relational databases which are accessed by computers
with large main memories.
Using multi-key hashing and appropriate buffering, joins can be performed
on very large relations more efficiently than with existing methods.
Furthermore, this algorithm fits naturally into a Prolog top-down computation
and can be made very flexible by incorporating additional Prolog features.
%A Lee Naish
%T Don't Care Nondeterminism in Logic Programming
%R Technical Report 86/?
%I Department of Computer Science, University of Melbourne
%D February 1986
%P 10
%K indeterminism, incompleteness, cut, commit, trust, parallel, proving
AI10
%X
Prolog and its variants are based on SLD resolution, which uses don't know
nondeterminism to explore the search space. Don't care nondeterminism, or
indeterminism, can be introduced by operations such as
commit in Concurrent Prolog, cut in sequential Prolog
and incomplete system predicates. This prevents the whole SLD tree
from being examined. The effect on completeness of programs is of
major importance.
This paper presents a theoretical model of Guarded Clauses, which
subsumes the main features of sequential and concurrent Prologs.
Next, we investigate proving properties of Guarded Clause programs
with restricted input-output modes. We present a methodology for proving
that the indeterminism does not cause finite failure, given certain
input conditions.
%A John W. Lloyd
%T Declarative Error Diagnosis
%R Technical Report 86/?
%I Department of Computer Science, University of Melbourne
%D February 1986
%P 20
%K algorithmic debugging, logic programming AI10 T02 O02 AI01
%X
This paper presents an error diagnoser which finds errors in extended logic
programs and also logic programs which use advanced
control facilities.
The diagnoser is "declarative", in the sense that the programmer
need only know the intended interpretation of an incorrect program
to use the diagnoser. In particular, the programmer needs no
understanding whatever of the underlying computational behaviour
of the PROLOG system which runs the program.
It is argued that declarative error diagnosers will be indispensable
components of advanced logic programming systems, which are currently
under development.
%A J. Heering
%T Partial Evaluation and W-Completeness of Algebraic Specifications
%D 1985
%R Report CS-R8501
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K AA08
%X f 3,90 14 pages
%A J. C. M. Baeten
%A J. A. Bergstra
%A J. W. Klop
%T Conditional axioms and α/β-calculus
in process algebra
%D 1985
%R Report CS-R8502
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K AA08
%X f 3,90 26 pages
%A J. C. M. Baeten
%A J. A. Bergstra
%A J. W. Klop
%T Syntax and Defining Equations for an Interrupt Mechanism in
Process Algebra
%D 1985
%R Report CS-R8503
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K AA08
%X f 7,60 45 p
%A J. W. de Bakker
%T Transition Systems, Infinitary Languages and the Semantics of
Uniform Concurrency
%D 1985
%R Report CS-R8506
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K AA08
%X f 3,90 11 pages
%A J. Heering
%A P. Klint
%T The Efficiency of the Equation Interpreter Compared with the UNH
PROLOG Interpreter
%D 1985
%R Report CS-R8509
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K T02 AI10
%X f 3,90 13 pages
%A M. L. Kersten
%A H. Weigand
%A F. Dignum
%T A Conceptual Modeling Expert System
%D 1985
%R Report CS-R8518
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K AI01
%X f 3,90 14 pages
%A N. W. P. van Diepen
%A W. P. de Roever
%T Program Derivation Through Transformations: the Evolution of
List-Copying Algorithms
%D 1985
%R Report CS-R8520
%I Centre for Mathematics and Computer Science
%C Amsterdam, The Netherlands
%K AA08
%X f 8,80 60 pages
%A Bertrand Meyer
%T The Software Knowledge Base
%R TRCS85-04
%I University of California, Santa Barbara
%K AA08
%A Bernard Nadel
%T The General Consistent Labeling (or Constraint Satisfaction) Problem
%R CRL-TR-2-86
%I University of Michigan Computing Research Laboratory
%K AI03
%A William G. Golson
%T A Complete Proof System for an Acceptance Refusal Model of CSP
%R TR85-19
%I Rice University Department of Computer Science
%K AA08
%A Raghu Ramakrishnan
%A Avi Silberschatz
%T Annotations for Distributed Programming in Logic
%R TR-85-15
%D SEP 1985
%I University of Texas at Austin Department of Computer Sciences
%K H03 T02 AI10
%A E. Allen Emerson
%A Chin-Laung Lei
%T Branching Time Logic Strikes Back
%R TR-85-21
%D OCT 1985
%I University of Texas at Austin Department of Computer Sciences
%K temporal logic, finite automata, infinite strings AA08 AI11
%A Christian Lengauer
%A Chua-Huang Huang
%T A Mechanically Certified Theorem about Optimal Concurrency of
Sorting Networks
%R TR-85-23
%D OCT 1985
%I University of Texas at Austin Department of Computer Sciences
%K H03 AI11 AA08
%A A. Udaya Shankar
%A Simon S. Lam
%T Time-Dependent Distributed Systems: Proving Safety, Liveness and
Real-Time Properties
%R TR-85-24
%D OCT 1985
%I University of Texas at Austin Department of Computer Sciences
%K H03 AI11 AA08
%X includes information on verification of communication protocols including
HDLC and a transport-layer protocol of window size N
%A E. Allen Emerson
%A A. Prasad Sistla
%T Deciding Full Branching Time Logic
%R TR-85-28
%D NOV 1985
%I University of Texas at Austin Department of Computer Sciences
%K AI10a
%A E. Allen Emerson
%A Joseph Y. Halpern
%T Decision Procedures and Expressiveness in the Temporal Logic of Branching
Time
%R TR-85-29
%D NOV 1985
%I University of Texas at Austin Department of Computer Sciences
%K AI10a
%A E. M. Clarke
%A E. A. Emerson
%A A. P. Sistla
%T Automatic Verification of Finite State Concurrent Systems Using
Temporal Logic Specifications
%R TR-85-31
%D NOV 1985
%I University of Texas at Austin Department of Computer Sciences
%K AI10a H03 AA08
%A Benjamin Kuipers
%T The Map-Learning Critter
%R TR-85-33
%D DEC 1985
%I University of Texas at Austin Department of Computer Sciences
%K AI07 AI04
%X "The Critter is an artificial creature which learns, not only the
structure of its (simulated) environment, but also the interpretation
of the actions and senses that give it access to that environment."
%A Newton S. Lee
%A John W. Roach
%T Guess/1: A General Purpose Expert Systems Shell
%R 85-3
%I Virginia Tech Computer Science Department
%K T03
%A John W. Roach
%A Glenn Fowler
%T Virginia Tech Prolog/Lisp: A Dual Interpreter Implementation
%R 85-18
%I Virginia Tech Computer Science Department
%K T01 T02
%A J. Patrick Bixler
%A Layne T. Watson
%A J. Patrick Sanford
%T Spline-Based Recognition of Straight Lines and Curves in Engineering
Line Drawings
%R 85-29
%I Virginia Tech Computer Science Department
%K AI06 AA05
%A Richard E. Nance
%A Robert L. Moose
%A Robert V. Foutz
%T A Statistical Technique for Comparing Strategies: An Example from
Computer Network Design
%R 85-26
%I Virginia Tech Computer Science Department
%K O04 AA08 AI09 anova
%A T. C. Hu
%A M. T. Shing
%T The Alpha-Beta Routing
%R TRCS85-08
%I University of California, Santa Barbara
%K AA05
------------------------------
End of AIList Digest
********************
∂04-Jun-86 0548 LAWS@SRI-AI.ARPA AIList Digest V4 #140
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 05:47:58 PDT
Date: Tue 3 Jun 1986 23:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #140
To: AIList@SRI-AI
AIList Digest Wednesday, 4 Jun 1986 Volume 4 : Issue 140
Today's Topics:
Literature - Technical Reports #2
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Technical Reports #2
%A M. A. Fulk
%T A Study of Inductive Inference Machines
%R 85-10
%D August 1985
%I SUNY Buffalo Computer Science
%K AI04
%X Inductive inference machines (IIMs) model learning and
scientific theory formation.
We investigate
IIMs that attempt to synthesize (in the limit) a program for
a function as they receive data (in the form of input-output pairs)
about that function.
We show that a postdictively consistent IIM can be
effectively replaced with a postdictively complete IIM
that succeeds on all of the functions that the original did.
We also investigate IIMs that attempt to synthesize (again in the limit)
a program that enumerates an r.e. set as they receive data
consisting of the elements of that set.
Finally, we propose new criteria
for success in inductive inference.
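The identification-by-enumeration idea behind such machines can be sketched in a few lines of Python. This is a hypothetical illustration of my own (the finite linear hypothesis class, the bounds, and the name `iim` are assumptions, not from the report): the machine conjectures the first hypothesis consistent with all input-output pairs seen so far, which is exactly postdictive consistency.

```python
# Sketch of "identification by enumeration": an inductive inference
# machine (IIM) that, given input-output pairs of an unknown function,
# conjectures the first program in its class (here: coefficients of a
# linear function) consistent with all data seen so far.  A conjecture
# agreeing with all past data is "postdictively consistent".

def iim(pairs):
    """Return the first hypothesis (a, b) for f(x) = a*x + b that is
    consistent with every observed (x, f(x)) pair, or None."""
    for a in range(10):
        for b in range(10):
            if all(a * x + b == y for x, y in pairs):
                return (a, b)
    return None  # no hypothesis in this (finite) class fits

# As more data arrives, the conjectures converge "in the limit".
data = [(0, 3), (1, 5), (2, 7)]   # target: f(x) = 2x + 3
print(iim(data[:1]))  # (0, 3): first consistent guess on little data
print(iim(data))      # (2, 3): the data now pins the function down
```

Real IIMs range over all programs rather than a toy finite class; the enumeration then proceeds over an effective listing of programs.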
%R 85-11
%A J. S. Royer
%T A connotational theory of program structure
%D September 1985
%I SUNY Buffalo Computer Science
%K AA08
%R 85-12
%T Local symmetry computation for shape description
%A G. W. Lee
%A S. N. Srihari
%D September 1985
%I SUNY Buffalo Computer Science
%K AI06
%R 85-13
%T ROCS: A system for reading off-line cursive script
%A R. M. Božinović
%A S. N. Srihari
%D September 1985
%I SUNY Buffalo Computer Science
%K AI06
%R UMCS-85-8-1
%A David E. Rydeheard
%A Rod M. Burstall
%T The Unification of Terms: A Category-Theoretic Algorithm
%I The University of Manchester, Department of Computer Science
%K AI11
%X no charge
As an illustration of the role of abstract mathematics in
program design, an algorithm for the unification of terms is derived
from constructions of colimits in category theory.
%A Trevor P. Hopkins
%R UMCS-85-9-2
%T Image Transfer by Packet-switched Network
%I The University of Manchester, Department of Computer Science
%K AI06
%X no charge
The advantages and disadvantages of
using packet-switching technology for the transfer of image information
in real time are considered. An experimental implementation
of parts of a system based on a high-speed
Local Area Network is described; these include a
simple screen output device and a real-time camera input device. The
generation of images using a number of microprocessors is also
described. A number of applications for such a system are
investigated and the extension of this approach to implement an
Integrated Information Presentation system is considered.
%A Howard Barringer
%R UMCS-85-9-3
%T Up and Down the Temporal Way
%I The University of Manchester, Department of Computer Science
%K AA08 elevator
%X no charge
A formal specification of a multiple lift system is constructed.
The example illustrates and justifies one of many possible
system specification styles based on temporal techniques.
%A Ru-qian Lu
%T Expert Union: United Service of Distributed Expert Systems
%R 85-3
%I University of Minnesota-Duluth
%C Duluth, Minnesota
%D June, 1985
%K AI01 H03
%X A scheme for connecting expert systems in a network called an
``expert union'' is described. Consultation scheduling algorithms used to
select the appropriate expert(s) to solve problems are proposed, as
are strategies for resolving contradictions.
%R No 27
%T The Complexity of a Translation
of λ-calculus to
Categorical Combinators
%A R D Lins
%D April 1985
%I University of Kent at Canterbury
%A Sheldon Klein
%T The Invention of Computationally Plausible Knowledge Systems
in the Upper Paleolithic
%D December 1985
%R TR 628
%I Computer Sciences Department, University of Wisconsin
%C Madison, WI
%K AI08
%X Abstract: The problem of computing human behavior by rules can become
intractable with large scale knowledge systems if the human brain, like a
computer, is a finite state automaton. The problem of making such
computations at a pace fast enough for ordinary social interaction can be
solved if appropriate constraints apply to the structure of those rules.
There is evidence that systems of such constraints were invented in the
Upper Paleolithic, and were of sufficient power to guarantee that the time
necessary for computation of behavior would increase only linearly with
increases in the size and heterogeneity of world knowledge systems.
Fundamentally, there was just one type of computational invention, capable
of unifying the full range of human sensory domains, and consisting of an
analogical reasoning method in combination with a global classification
scheme. The invention may have been responsible for the elaboration of
language and culture structures in a process of co-evolution. The encoding
of the analogical mechanism in iconic visual imagery and myth structures
may have given rise to the phenomenon of Shamanism. The theory is testable,
and one of its implications is that the structuralism of Levi-Strauss has
an empirical foundation.
%A G.T. NGUYEN
%A J. OLIVARES
%D JAN 1985
%R IMAG RR TIGRE 26
%C Grenoble, France
%T SYCSLOG - systeme logique d'integrite semantique
%A M. ADIBA
%A Q.N. BUI
%A J. PALAZZO DE OLIVEIRA
%D JAN 1985
%R IMAG RR TIGRE 23
%C Grenoble, France
%T Notion de temps dans les bases de donnees generalisees
%A A. DANDACHE
%D APR 1985
%R IMAG RR 516
%C Grenoble, France
%T Etude de structures regulieres PLA - ROM dans la partie controle de
microprocesseurs
%A S. GRAF
%A J. SIFAKIS
%D FEB 1985
%R IMAG RR 526
%C Grenoble, France
%T From synchronization tree logic to acceptance model logic
%A H. BALACHEFF
%D MAY 1985
%R IMAG RR 528
%C Grenoble, France
%T Processus de preuves et situations de validation
%A Michel COSNARD
%A Yves ROBERT
%A Denis TRYSTRAM
%D JUL 1985
%R IMAG RR 552
%C Grenoble, France
%T Resolution parallele de systemes lineaires denses par diagonalisation
%K AI11
%A Yves ROBERT
%A Denis TRYSTRAM
%D JUL 1985
%R IMAG RR 553
%C Grenoble, France
%T Un reseau systolique orthogonal pour le probleme du chemin algebrique
%A J.R. BARRA
%A M. BECKER
%A D. BELAID
%A F. CHATELIN
%A C. MAZEL
%D JUN 1985
%R IMAG RR 542
%C Grenoble, France
%T Realisation d'un logiciel d'analyses factorielles avec systeme
d'assistance intelligente a l'utilisateur
%A Jean FONLUPT
%A Denis NADDEF
%D SEP 1985
%R IMAG RR 557
%C Grenoble, France
%T The traveling salesman problem in graphs with some excluded minors
%A Yves DEMAZEAU
%D APR 1985
%R IMAG RR 502
%C Grenoble, France
%T La programmation des jeux: programmation classique et intelligence
artificielle
%K AA17
%A Hicham AL NACHAWATI
%D 1985
%I These Universite, GRENOBLE
%K SEGMENTATION
%K PROCESSUS ARBORESCENT
%K ANALYSE VARIANCE
%K CLASSIFICATION AUTOMATIQUE
%T Processus de classification sequentiels non arborescents pour l'aide au
diagnostic
%W IMAG Mediatheque
%A Xin an PAN
%D 1985
%I These Universite, GRENOBLE
%K MINIMA
%K ALGORITHME ITERATIF
%K RESEAU AUTOMATE
%K AUTOMATE
%K RECONNAISSANCE CARACTERE
%K DISTANCE
%T Experimentation d'automates a seuil pour la reconnaissance de caracteres
%W IMAG Mediatheque
%A Laurent BERGHER
%D 1985
%I These doct. ing., GRENOBLE
%K ANALYSE
%K POTENTIEL
%K VLSI
%K MICROPROCESSEUR
%K ANALYSE IMAGE
%K TEST
%K CAPTEUR IMAGE
%T Analyse de defaillances de circuits VLSI par microscopie electronique a
balayage
%W IMAG Mediatheque
%A Philippe VIGNARD
%D 1985
%I These doct. ing., GRENOBLE
%K REPRESENTATION CONNAISSANCE
%K EXPLOITATION
%K FILTRAGE
%K DISTRIBUTION
%K SEMANTIQUE
%K ANALOGIE
%K TYPOLOGIE
%T Un mecanisme d'exploitation a base de filtrage flou pour une representation
des connaissances centree objets
%W IMAG Mediatheque
%A Prabhaker Mateti
%T Correctness Proof of an Indenting Program
%R Technical Report 80/2
%I Department of Computer Science, University of Melbourne
%D September 1980
%P 59
%K verification, correctness proof, pretty printing, pascal
%O also in Software - Practice and Experience
%K AA08
%X
The correctness of an indenting program for Pascal is proven at an
intermediate level of rigour. The specifications of the program are given
in the companion paper [TR 80/1]. The program is approximately 330 lines
long and consists of four modules: io, lex, stack, and indent. We prove
first that the individual procedures contained in these modules meet their
specifications as given by the entry and exit assertions. A global proof
of the main routine then establishes that the interaction between modules
is such that the main routine meets the specification of the entire program.
We argue that correctness proofs at the level of rigour used here serve
very well to transfer one's understanding of a program to others. We believe
that proofs at this level should become commonplace before more formal
proofs can take over to reduce traditional testing to an inconsequential
place.
%A Joxan Jaffar
%T Presburger Arithmetic with Array Segments
%R Technical Report 81/1
%I Department of Computer Science, University of Melbourne
%D January 1981
%P 8
%K verification, assertion language, decision procedure
AA08
%A John W. Lloyd
%T An Introduction to Deductive Database Systems
%R Technical Report 81/3
%I Department of Computer Science, University of Melbourne
%D April 1981 (revised April 1983)
%P 24
%K T02 AA14 AA09
%O also in Australian Computer Journal, vol.15, 1983
%X
This paper gives a tutorial introduction to deductive database systems.
Such systems have developed largely from the combined application of the
ideas of logic programming and relational databases. The elegant theoretical
framework for deductive database systems is provided by first order logic.
Logic is used as a uniform language for data, programs, queries, views
and integrity constraints. It is stressed that it is possible to build
practical and efficient database systems using these ideas.
%A John W. Lloyd
%T Implementing Clause Indexing for Deductive Database Systems
%R Technical Report 81/4
%I Department of Computer Science, University of Melbourne
%D October 1981
%P 22
%K AA14 AA09
%X
The paper presents a file design for handling partial-match
queries which has wide application to knowledge-based artificial
intelligence systems and relational database systems. The
advantages of the design are simplicity of implementation, the
ability to cope with dynamic files and the ability to optimize
performance with respect to the average number of disk accesses
required to answer a query.
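One common file design for partial-match queries is multi-attribute hashing, in which each field contributes a fixed number of bits to the bucket address; a query that fixes only some fields then needs to scan only the buckets consistent with those bits. The following Python sketch is a hypothetical illustration of that idea (the two-bit fields and all function names are my assumptions, not taken from the report):

```python
# Multi-attribute hashing for partial-match retrieval: each field of a
# record contributes BITS_PER_FIELD bits of the bucket address.
from itertools import product

BITS_PER_FIELD = 2  # address bits contributed by each field

def field_bits(value):
    return hash(value) % (1 << BITS_PER_FIELD)

def bucket(record):
    """Bucket address of a fully specified record."""
    addr = 0
    for v in record:
        addr = (addr << BITS_PER_FIELD) | field_bits(v)
    return addr

def matching_buckets(query):
    """query: tuple with None for unspecified fields.
    Yields every bucket address consistent with the specified fields."""
    choices = [
        [field_bits(v)] if v is not None else list(range(1 << BITS_PER_FIELD))
        for v in query
    ]
    for combo in product(*choices):
        addr = 0
        for bits in combo:
            addr = (addr << BITS_PER_FIELD) | bits
        yield addr

# For a 3-field file: a fully specified query touches 1 bucket, a query
# with one unspecified field touches 4, and all-unspecified touches 64.
```

The design copes with dynamic files because records hash independently, and the expected number of buckets scanned degrades gracefully as fewer query fields are specified.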
%A Joxan Jaffar
%A Jean-Louis Lassez
%T A Decision Procedure for Theorems about Multisets
%R Technical Report 81/7
%I Department of Computer Science, University of Melbourne
%D July 1981
%P 37
%K automatic theorem proving, verification, domain dependent reasoning
AI11 AA13
%A Lee Naish
%T An Introduction to MU-Prolog
%R Technical Report 82/2
%I Department of Computer Science, University of Melbourne
%D March 1982 (Revised July 1983)
%P 16
%K T02 muprolog AI10 control negation
%X
As a logic programming language, PROLOG is deficient in two areas:
negation and control facilities. Unsoundly implemented negation
affects the correctness of programs and poor control facilities
affect the termination and efficiency. These problems are illustrated
by examples.
MU-PROLOG is then introduced. It implements negation soundly and
has more control facilities. Control information can be added
automatically. This can be used to avoid infinite loops and find
efficient algorithms from simple logic. MU-PROLOG is closer to the
ideal of logic programming.
%A Joseph Stoegerer
%T Specification Languages - A Survey
%R Technical Report 82/5
%I Department of Computer Science, University of Melbourne
%D June 1982
%P 62
%K software specification, requirements languages, software development tools,
integrated software development support systems, non-procedural languages,
automated analysis tools AA08
%A Joxan Jaffar
%A Jean-Louis Lassez
%A John W. Lloyd
%T Completeness of the Negation-as-failure Rule
%R Technical Report 83/1
%I Department of Computer Science, University of Melbourne
%D January 1983
%P 20
%O also in Proceedings of the Eighth International Joint Conference
on Artificial Intelligence, Karlsruhe, Germany, 1983
%K AI10 finite failure, completion of a program
%X
Let P be a Horn clause logic program and comp(P) be its completion in the
sense of Clark. Clark gave a justification for the negation as failure rule
by showing that if a ground atom A is in the finite failure set of P, then
~A is a logical consequence of comp(P), that is, the negation as failure
rule is sound. We prove here that the converse also holds, that is, the
negation as failure rule is complete.
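As a toy illustration of the objects involved (my own example, not the paper's proof): the Python sketch below computes the least model of a ground, function-free Horn program bottom-up and answers negative queries by non-membership. For loop-free ground programs such as this one, that agrees with what the negation-as-failure rule computes.

```python
# Bottom-up least-model computation for a ground Horn program.
# Program syntax (my own encoding): a list of (head, [body atoms]).

def least_model(program):
    """Iterate the immediate-consequence operator to its fixpoint."""
    model = set()
    while True:
        new = {h for h, body in program if all(b in model for b in body)}
        if new <= model:
            return model
        model |= new

def naf(atom, program):
    """Negation for a ground atom, read off the least model."""
    return atom not in least_model(program)

prog = [
    ("parent(t,b)", []),
    ("parent(b,d)", []),
    ("anc(t,b)", ["parent(t,b)"]),
    ("anc(b,d)", ["parent(b,d)"]),
    ("anc(t,d)", ["parent(t,b)", "anc(b,d)"]),
]
# naf("anc(d,t)", prog) holds; naf("anc(t,d)", prog) does not.
```

Note the caveat: for a program with loops (e.g. p :- p), the atom p is outside the least model yet has an infinite SLD tree, so it is not finitely failed; the paper's soundness and completeness results are stated with respect to comp(P), not this naive reading.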
%A Jean-Louis Lassez
%A Michael J. Maher
%T Closures and Fairness in the Semantics of Logic Programming
%R Technical Report 83/3
%I Department of Computer Science, University of Melbourne
%D March 1983
%P 17
%K semantics, chaotic iteration, SLD resolution, finite failure, T02
%O also in Theoretical Computer Science, vol.29, 1984
%A Jean-Louis Lassez
%A Michael J. Maher
%T Optimal Fixedpoints of Logic Programming
%R Technical Report 83/4
%I Department of Computer Science, University of Melbourne
%D March 1983
%P 15
%K theory, semantics AA08 AI10
%O also in Theoretical Computer Science, vol.30, 1985
%X
From a declarative programming point of view, Manna and Shamir's
optimal fixedpoint semantics is more appealing than the least
fixedpoint semantics. However in standard formalisms of recursive
programming the optimal fixedpoint is not computable while the least
fixedpoint is. In the context of logic programming we show that the
optimal fixedpoint is equal to the least fixedpoint and is computable.
Furthermore the optimal fixedpoint semantics is consistent with Van Emden
and Kowalski's semantics of logic programs.
%A Lee Naish
%T Automatic Generation of Control for Logic Programming
%R Technical Report 83/6
%I Department of Computer Science, University of Melbourne
%D July 1983 (Revised September 1984)
%P 24
%K T02 O02 muprolog, control facilities, coroutines, automatic programming
%O also as ``Automating Control for Logic Programs'' in Journal of Logic
Programming, vol.5, 1985
%X
A model for the coroutined execution of PROLOG programs is presented
and two control primitives are described. Heuristics for the control
of database and recursive procedures are given, which lead to algorithms
for generating control information. These algorithms can be incorporated
into a pre-processor for logic programs. It is argued that automatic
generation should be an important consideration when designing control
primitives and is a significant step towards simplifying the task of
programming.
%A Lee Naish
%A James A. Thom
%T The MU-Prolog Deductive Database
%R Technical Report 83/10
%I Department of Computer Science, University of Melbourne
%D November 1983
%P 16
%K muprolog, partial match retrieval, unix T02 AA09 AA14
%X
This paper describes the implementation and an application of a
deductive database being developed at the University of Melbourne.
The system is implemented by adding a partial match retrieval system
to the MU-PROLOG interpreter.
%A David A. Wolfram
%A Jean-Louis Lassez
%A Michael J. Maher
%T A Unified Treatment of Resolution Strategies for Logic Programs
%R Technical Report 83/12
%I Department of Computer Science, University of Melbourne
%D December 1983
%P 25
%K soundness, completeness, unification, negation as failure AI10
%O also in Proceedings of the Second International Logic Programming Conference,
Uppsala, Sweden, 1984
%A Lee Naish
%T Heterogeneous SLD Resolution
%R Technical Report 84/1
%I Department of Computer Science, University of Melbourne
%D January 1984
%P 11
%K T02 AI10 resolution, control facilities, intelligent backtracking
%O also in Journal of Logic Programming, vol.4, 1984
%X Due to a significant oversight in the definition of computation rules,
the current theory of SLD resolution is not general enough
to model the behaviour of some PROLOG implementations with advanced
control facilities.
In this paper, Heterogeneous SLD resolution is defined.
It is an extension of SLD resolution which increases the ``don't care''
non-determinism of computation
rules and can decrease the size of the search space.
Soundness and completeness, for success and finite failure, are
proved using similar results from SLD resolution.
Though Heterogeneous SLD resolution was originally devised to model current
systems, it can be exploited more fully than it is now.
As an example, an interesting new computation rule is described. It can be seen
as a simple form of intelligent backtracking with few overheads.
%A Koenraad Lecot
%A Isaac Balbin
%T Prolog & Logic Programming Bibliography
%R Technical Report 84/3
%I Department of Computer Science, University of Melbourne
%D May 1984
%P 55
%K classified bibliography AT21 T02 AI10
%O a considerably expanded version appeared as ``Logic Programming:
A Classified Bibliography'', Wildgrass Books, 1985
%A Lee Naish
%T All Solutions Predicates in Prolog
%R Technical Report 84/4
%I Department of Computer Science, University of Melbourne
%D June 1984
%P 15
%K logic programming, negation, coroutines T02 AI10
%O also in Proceedings of IEEE Symposium on Logic Programming, Boston, 1985
%A Michael J. Maher
%A Jean-Louis Lassez
%A Kimball G. Marriott
%T Antiunification
%R Technical Report 84/5
%I Department of Computer Science, University of Melbourne
%D to appear
%P ?
%K AI10 unification
%A Lee Naish
%A Jean-Louis Lassez
%T Most Specific Logic Programs
%R Technical Report 84/6
%I Department of Computer Science, University of Melbourne
%D to appear
%P ?
%K AI10
%A Rodney W. Topor
%A Teresa Keddis
%A Derek W. Wright
%T Deductive Database Tools
%R Technical Report 84/7
%I Department of Computer Science, University of Melbourne
%D June 1984 (revised August 1985)
%P 27
%K database management, deductive database, query language,
integrity constraint, logic programming, T02 AA14 AA09
AI10
%O also in Australian Computer Journal, vol.?, 1985
%X
A deductive database is a database in which data can be represented
both explicitly by facts and implicitly by general rules. The use of
typed first order logic as a definition and manipulation language for
such deductive databases is advocated and illustrated by examples.
Such a language has a well-understood theory and provides a uniform
notation for data, queries, integrity constraints, views and programs.
We present algorithms for implementing domains, for using atoms with
named attributes, for evaluating queries, and for checking static and
transition integrity constraints. The implementation is by translation
into Prolog and can be performed using a standard Prolog system. The
paper assumes some familiarity with relational databases, logic and Prolog.
%R CSL T.R. 85-281
%T Prolog Memory-Referencing Behavior
%A Evan Tick
%D September 1985
%K T02
%I Computer Systems Laboratory, Stanford University
%X This report describes Prolog data and instruction memory-referencing
characteristics. Prolog exhibits unconventional referencing behavior because of
backtracking: the saving and subsequent restoration of a program state.
Backtracking introduces memory bandwidth requirements above those of procedural
languages. The significance of this and other characteristics was measured by
emulating a Prolog architecture running three benchmark programs and simulating
various memory models. The results indicate that so-called determinate
programs require substantial memory bandwidth because of a limited form of
backtracking (shallow). However, this referencing behavior exhibits spatial
locality enabling small memory buffers to reduce the bandwidth requirement. A
modification to the Prolog architecture having the advantage of further
increasing locality is described.
------------------------------
End of AIList Digest
********************
∂04-Jun-86 2330 LAWS@SRI-AI.ARPA AIList Digest V4 #141
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 23:29:56 PDT
Date: Wed 4 Jun 1986 20:51-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #141
To: AIList@SRI-AI
AIList Digest Thursday, 5 Jun 1986 Volume 4 : Issue 141
Today's Topics:
Seminars - Synchronizing Plans among Intelligent Agents (SRI) &
Model-Based Reasoning with Causal Ordering (CMU) &
Tree Adjoining Grammars (UPenn) &
Connectionist Expert Systems (GTE) &
Knowledge-Based Design & Qualitative Process Theory (SU) &
FP Rewrite Rules & Parallel Unification (IBM-SJ),
Seminar Series - CSMIL (UMich),
Conference - Call for Papers for IJCAI-87
----------------------------------------------------------------------
Date: Wed 28 May 86 14:31:47-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Synchronizing Plans among Intelligent Agents (SRI)
SYNCHRONIZING PLANS AMONG INTELLIGENT AGENTS
VIA COMMUNICATION
Charlie Koo (KOO@SUSHI)
Stanford University
11:00 AM, MONDAY, June 2
SRI International, Building E, Room EJ228 (new conference room)
In a society where a group of agents cooperate to achieve certain
goals, the group members perform their tasks based on certain plans.
Some tasks may interact with tasks done by other agents. One way to
coordinate the tasks is to let a master planner generate a plan and
distribute tasks to individual agents accordingly. However, there are
two difficulties. Firstly, the master planner needs to know all the
expertise that each agent has. The amount of knowledge sharply
increases with the number of specialties. Secondly, the
master-planning process will be computationally more expensive than if
each agent plans for itself, since the planning space for the former
is much larger. This motivates distributed planning.
The objective of this on-going research is to formalize a model for
synchronizing and monitoring plans independently made by nonhostile
intelligent agents via communication. The proposed model also will
provide means to monitor the progress of plan execution, to prevent
delays, and to modify plans with less effort when delays happen.
In this talk, a commitment-based communication model which allows
agents to track their commitments during execution of plans will be
proposed. It includes a language, a set of communication operators
and a set of commitment tracking operators. The process of
synchronizing plans based on this communication model will also be
described.
Relevant work: Contract Net, nonlinear planners, distributed planners.
------------------------------
Date: 28 May 1986 1217-EDT
From: Yumi Iwasaki <IWASAKI@C.CS.CMU.EDU>
Subject: Seminar - Model-Based Reasoning with Causal Ordering (CMU)
I will be presenting my thesis proposal as follows:
Date : Tuesday, June 3, 1986
Time : 2 pm
Place : WeH 5409
Title : Model-Based Reasoning of Device Behavior with Causal Ordering
Causality plays an important role in human understanding of the world. While a
number of artificial intelligence systems have been built that reason with
causal knowledge, few have addressed the issue of not only representing and
using causal knowledge but also of discovering causal relations in the domain
based on an operational definition of causality. We propose to study
discovering, representing, and using causal knowledge based on the definition
of causal relations given by the theory of causal ordering. The proposed
scheme for causal reasoning has several levels of representation of knowledge,
namely the network representation of processes, equation model of components,
and causal ordering structure. The scheme links the knowledge at the level of
intuitive understanding of processes to the diagnostic level via an
intermediate, more formal model represented as a system of equations. In this
research, we will study application of the concept of causal ordering to a task
of reasoning about physical device behavior by implementing a causal reasoning
program, ACORD, in the domain of a coal power plant. We also expect to
contribute to better understanding of advantages and disadvantages of
model-based and evidential reasoning.
------------------------------
Date: Mon, 2 Jun 86 21:53 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Tree Adjoining Grammars (UPenn)
A STUDY OF TREE ADJOINING GRAMMARS
K. Vijay-Shanker
Ph.D. dissertation proposal
1:30pm June 9, 1986; Room 337, Towne Building
The goal of this research is to study a grammatical formalism called Tree
Adjoining Grammars (TAG's). The original motivation for TAG's was linguistic
and subsequent work established their linguistic relevance. Our study
consists of two parts. The first part deals with formal properties of TAG's:
for example, closure properties, and automata characterizing the classes of string
languages and tree languages generated by TAG's. In the second part of our
study, we outline how a syntax driven scheme for providing compositional
semantics of natural languages can be given with the Tree Adjoining Grammars.
Committee: J. H. Gallier
A. K. Joshi (Supervisor)
A. Kroch
R. Larson (MIT)
W. Rounds (U of Michigan, Ann Arbor)
B. L. Webber (Chairperson)
------------------------------
Date: Wed, 4 Jun 86 11:18:18 edt
From: Rich Sutton <rich%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Connectionist Expert Systems (GTE)
"Connectionist Expert Systems in a Noisy World"
by Stephen I. Gallant
This talk will describe a model for connectionist expert systems
(MACIE) and show how it is well suited to noisy and redundant
environments.
Connectionist expert systems are diagnostic expert systems
based upon a connectionist model with several interesting features:
-- They can be generated from training examples (and/or rules)
-- They perform forward chaining to make conclusions and
backward chaining to elicit additional information
-- They give IF-THEN rules to justify their inferences, even
though their knowledge base contains no such rules
-- They are arguably less prone to brittle behavior than
traditional expert systems.
In the talk it will be shown how an expert system for a noisy
and redundant problem can be constructed from: (1) a noise-free
model of an underlying process (perhaps a traditional expert
system) and (2) a model for the noise involved. System generation
is entirely automated.
Where: GTE Labs, Waltham, MA
When: June 11th, 9:30 am
Contact: Rich Sutton, rich@gte-labs.csnet, 617-466-4133 (or 466-4207)
Net address of speaker: sig@northeastern
Also: That afternoon we will have an informal research meeting of
connectionists from GTE, UMass, and Northeastern
Welcome: Visitors are welcome!
------------------------------
Date: Wed 4 Jun 86 10:58:14-PDT
From: Christine Pasley <pasley@SRI-KL>
Subject: Seminars - Knowledge-Based Design & Qualitative Process Theory (SU)
CS529 - AI In Design & Manufacturing
Instructor: Dr. J. M. Tenenbaum
Speaker: Sanjay Mittal
From: Xerox Palo Alto Research Center
Title: Pride: A Knowledge-Based Framework for Design
Guest Speaker: Kenneth Forbus
From: Qualitative Reasoning Group
University of Illinois
Title: Qualitative Process Theory: Selected Topics
Date: Wednesday, June 4, 1986
Time: 4:00 - 5:30
Place: Terman 556
Sanjay Mittal's abstract:
This talk will describe the Pride project at Xerox. The first part
of the talk will be about an expert system for the design of paper
transports inside copiers. A prototype version of the system has been in
field test for a year. It has been successfully used on real copier
projects inside Xerox - both for designing and for checking designs
produced by engineers. From an applications point of view we have been
motivated by the following observations: knowledge is often distributed
among different experts; the process of generating designs is
unnecessarily separated from their analysis, leading to long design
cycles; and design is an evolutionary process, i.e., a process of
exploration.
The second part of the talk will describe the framework in Pride for
representing design knowledge and using it to support the design
process. In this framework, the process of designing an artifact is
viewed as knowledge guided search in a multi-dimensional space of
possible designs. The dimensions of such a space are the design
parameters of the artifact. In this view, knowledge is used not only to
search the space but also to define the space. Domain knowledge is
organized in terms of design plans, which are organized around goals.
Conceptually, goals decompose a problem into sub-problems and are the
units for structuring knowledge. Design goals have design methods
associated with them, which specify alternate ways to make decisions
about the design parameters of the goal. The third major element of a
plan are constraints on the design parameters. The framework provides a
problem solver for executing these plans. The problem solver extends
dependency-directed backtracking with an advice mechanism and a context
mechanism for simultaneously maintaining multiple designs.
Kenneth Forbus' abstract:
Much of our commonsense knowledge of the physical world appears to be
organized around a notion of physical processes. Qualitative Process
theory provides a formal language for describing such processes,
including a qualitative representation of differential equations and
the conditions under which they apply. This talk will briefly review
Qualitative Process theory and discuss two topics of current research:
Interpreting measurements taken across time, and a new implementation,
based on an assumption-based truth maintenance system, that provides
roughly two orders of magnitude performance improvement.
------------------------------
Date: Wed, 04 Jun 86 17:46:49 PDT
From: Almaden Research Center Calendar <calendar@IBM.com>
Subject: Seminars - FP Rewrite Rules & Parallel Unification (IBM-SJ)
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099
RESEARCH CALENDAR
June 9 - 13, 1986
GOOD REWRITE STRATEGIES FOR FP
E. Wimmers, IBM Almaden Research Center
Computer Science Seminar Wednesday, June 11 2:30 P.M. Room: B2-307
In order to implement a language based on rewrite rules, it does not
suffice to know that there are enough rules in the language; we also
need to have a good strategy for determining the order in which to
apply them. But what is good? Corresponding to each notion of having
enough rules, there is a corresponding notion of a good rewrite
strategy. We examine and characterize these notions of goodness, and
give examples of a number of natural good strategies. Although we
have confined ourselves to FP here, we believe that our techniques
(some of which are nontrivial extensions of techniques first used in
the context of lambda-calculus) will apply well beyond the realm of FP
rewriting systems.
Host: J. Backus
...
ON THE PARALLEL COMPLEXITY OF UNIFICATION OF TERMS AND RELATED PROBLEMS
C. Dwork, IBM Almaden Research Center
Comp. Sci. Colloquium Thursday, June 12 3:00 P.M. Room: Rear Audit.
Unification of terms is a well known problem with applications to a
variety of symbolic computation problems. Two terms s and t,
involving function symbols and variables, are unifiable if there is a
substitution for the variables under which s and t become
syntactically identical. For example, f(x,x) and f(g(y),g(g(c))) are
unified by substituting g(c) for y and g(g(c)) for x. As parallel
architectures become technologically feasible, researchers in logic
programming have sought parallel unification algorithms running at
speeds subpolynomial in the length of the input. Unfortunately, the
existence of such an algorithm has been shown to be "popularly
unlikely," in that it would violate commonly held beliefs about the
structure of the class P of problems solvable in polynomial time. Two
special cases of unification are term matching and equivalence
testing, in which one or both of the terms contain no variables,
respectively. In contrast to the case for general unification, term
matching and testing for equivalence can both be solved
deterministically in time O((log n)**2) for inputs of size n, using
M(n**2) processors, where M(k) is the number of sequential operations
needed to multiply k-by-k matrices (roughly k**2.5). The processor
bound can be improved to M(n) if randomization is allowed. This is
joint work with Paris Kanellakis and Larry Stockmeyer.
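[The toy example in the abstract can be reproduced with an ordinary
sequential unification routine. The following Python sketch is purely
illustrative and is not from the talk; the term encoding (variables as
strings, compound terms as tuples) is an assumption of the sketch, and it
says nothing about the parallel complexity results being presented. -- Ed.]

```python
def walk(term, subst):
    """Chase variable bindings until a non-variable or unbound variable."""
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    """Occurs check: does var appear inside term under subst?"""
    term = walk(term, subst)
    if term == var:
        return True
    return isinstance(term, tuple) and any(occurs(var, a, subst) for a in term[1:])

def unify(s, t, subst=None):
    """Return a substitution (dict) unifying terms s and t, or None.
    Variables are strings; compound terms are tuples (symbol, arg, ...);
    constants are 1-tuples like ('c',)."""
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                      # s is an unbound variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):                      # t is an unbound variable
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):        # function symbol clash
        return None
    for a, b in zip(s[1:], t[1:]):              # unify arguments pairwise
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# The abstract's example: f(x,x) and f(g(y), g(g(c)))
subst = unify(('f', 'x', 'x'),
              ('f', ('g', 'y'), ('g', ('g', ('c',)))))
# subst binds y -> g(c) and x -> g(y); applying the y binding
# gives x = g(g(c)), as in the abstract.
```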
Host: R. Strong
...
------------------------------
Date: Sun, 1 Jun 86 20:20:38 EDT
From: Gary_M._Olson%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: Seminar Series - CSMIL (UMich)
The Cognitive Science and Machine Intelligence Laboratory
(CSMIL) at the University of Michigan has been conducting a
major lecture series this spring, consisting of the
following speakers:
March 31 -- John Anderson, Carnegie-Mellon
April 21 -- Shimon Ullman, M.I.T.
May 5 -- Allen Newell, Carnegie-Mellon
May 12 -- Bobby Inman, M.C.C.
May 19 -- Roger Schank, Yale
June 24 -- Randy Davis, M.I.T.
Anyone interested in further information should contact:
Gary Olson, Director
Cognitive Science and Machine Intelligence Laboratory
University of Michigan
904 Monroe Street
Ann Arbor, Michigan 48109
313-747-4948
net address: Gary_Olson%UMich-MTS.Mailnet@MIT-Multics.Arpa
------------------------------
Date: Wed, 4 Jun 86 23:09:06 edt
From: walker@mouton.bellcore.com (Don Walker)
Subject: Conference - Call for Papers for IJCAI-87
CALL FOR PAPERS: IJCAI-87
Tenth International Joint Conference on Artificial Intelligence
August 23-28, 1987
Milan, Italy
The IJCAI conferences are the main forums for the presentation of artificial
intelligence research to an international audience. The goal of IJCAI-87 is to
promote scientific interchange, within and between all subfields of AI, among
researchers from all over the world. The conference is sponsored by the
International Joint Conferences on Artificial Intelligence, Inc. (IJCAII).
In response to the growing interest in engineering issues within the AI
community, IJCAI-87's Technical Program will have two distinct tracks: science
and engineering. The science papers, presented Sunday through Wednesday
(August 23-26), will stress the computational principles underlying cognition
and perception in man and machine. The engineering papers, presented Tuesday
through Friday (August 25-28), will highlight pragmatic issues that arise in
applying these computational principles. Tutorials will be presented on Sunday
and Monday in parallel with the first two days of the science paper
presentations. Meetings or workshops focussed on specific research issues
might most appropriately be held on Thursday or Friday.
TOPICS OF INTEREST
Authors are invited to submit papers to either the science or engineering
tracks within one of the following topic areas:
- Architectures and Languages (including logic programming, user
interface technology)
- Reasoning (including theorem proving, planning, explaining)
- Knowledge Acquisition and Learning (including knowledge-base
maintenance)
- Knowledge Representation (including task domain analysis)
- Cognitive Modeling
- Natural Language Understanding
- Perception and Signal Understanding (including speech, vision, data
interpretation)
- Robotics
REQUIREMENTS FOR SUBMISSION:
Authors are requested to prepare full papers, no more than 7 proceedings' pages
(approximately 5600 words), or short papers, no more than 3 proceedings' pages
(approximately 2400 words). The full-paper classification is intended for
well-developed ideas, with significant demonstration of validity, while the
short-paper classification is intended for descriptions of research in
progress. Authors must ensure that their papers describe original
contributions to or novel applications of AI, regardless of length
classification, and that the research is properly compared and contrasted with
relevant literature.
DETAILS OF SUBMISSION:
Authors should submit six (6) copies of their papers (hard copy only -- we
cannot accept on-line files) to the Program Chair no later than Monday, January
5, 1987. The following information must be included on the title page:
- Author's name, address, telephone number and computer mail address
(if applicable)
- Paper type (full-paper or short-paper), topic area, track (science or
engineering), and a few keywords for further classification within
the topic area
- An abstract of 100-200 words
The timetable is as follows:
- Submission deadline: 5 January 1987 (papers received after January 5
will be returned unopened)
- Notification of acceptance or rejection: 17 March 1987
- Camera ready copy due: 10 April 1987
The language of the conference is English; all papers submitted should be
written in English.
REVIEW CRITERIA:
Each paper will be reviewed by at least two experts. Acceptance will be based
on the overall merit and significance of the reported research, as well as on
the quality of the presentation. A paper may be reviewed by experts
responsible for an area or track other than the one to which it was submitted
if, in the opinion of a program committee member, it can thereby be more fairly
reviewed.
Papers submitted to the science track should make an original and significant
contribution to knowledge in the field of artificial intelligence.
Papers submitted to the engineering track should focus on pragmatic issues that
arise in reducing AI principles and techniques to practice. Such papers could
identify the critical features of some successful application system's approach
to reasoning or knowledge acquisition or natural language understanding. Of
particular interest are papers that demonstrate insightful analysis of a task
domain motivating the selection of a computational and representational
approach.
CONTACT POINTS:
Submissions and inquiries about the program should be sent to the Program
Chair:
John McDermott
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
USA
1-412-268-2599
McDermott@cmu-cs-a.arpa
Inquiries about registration, tutorials, exhibits, and other local arrangements
should be sent to the Local Arrangements Chair:
Marco Somalvico
Dipartimento di Elettronica
Politecnico di Milano
Piazza Leonardo Da Vinci N.32
I-20133 Milano
ITALY
39-2-236-7241
somalvic!prlb2@seismo
Other inquiries should be directed to the General Chair:
Alan Bundy
Department of Artificial Intelligence
University of Edinburgh
80 South Bridge
Edinburgh EH1 1HN
UK
44-31-225-7774 ext 242
Bundy%edinburgh.ac.uk@ucl-cs.arpa
------------------------------
End of AIList Digest
********************
∂05-Jun-86 0157 LAWS@SRI-AI.ARPA AIList Digest V4 #142
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 5 Jun 86 01:57:37 PDT
Date: Wed 4 Jun 1986 20:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #142
To: AIList@SRI-AI
AIList Digest Thursday, 5 Jun 1986 Volume 4 : Issue 142
Today's Topics:
Queries - Getting Started with OPS-5 & Programming Paradigms &
Prolog on IBM/PCs,
Techniques - Common LISP Style,
Physics - Space-Time Structure,
Humor - Brain Theory,
Philosophy - Metaphilosophy Journal on Computer Ethics
----------------------------------------------------------------------
Date: 3 Jun 86 18:18:38 GMT
From: cad!nike!lll-crg!micropro!ptsfa!jeg@ucbvax.berkeley.edu (John Girard)
Subject: Getting Started with OPS-5
We would like to start experimenting with OPS-5, and have heard
that there is a public domain version available, presumably from
C-M-U.
Any help we can get on the following would be appreciated:
Where is the best source for the P-D OPS-5?
Would we be wasting our time to try the P-D version?
If so, which one can we try on an evaluation basis?
What else is needed to make it work?
If LISP is required, can we operate on a limited subset
such as XLISP? If not, what LISP would be easiest to
integrate on an evaluation basis?
Also, any recommendations on readable and usable guides to
OPS-5 will be appreciated!
Thanks!
John Girard
Pacific Bell
(415)823-1961 [USA]
{dual,ihnp4,qantel,decwrl,bellcore}ptsfa!jeg
------------------------------
Date: 3 Jun 86 12:42:33 GMT
From: ulysses!unc!mcnc!duke!jds@ucbvax.berkeley.edu (Joseph D. Sloan)
Subject: Request for Programming Paradigms
A particular interest of mine is programming paradigms.
For example, how does object-oriented programming differ
from logic programming from functional programming from
procedural programming from ... I recently came across
a paper with a brief discussion of access-oriented
programming, which is a new paradigm to me. Unfortunately,
I didn't get very much out of the description, as the article
was a comparison of paradigms that assumed familiarity
with each. Can anyone supply me with pointers
to readable introductions to access-oriented programming?
How about articles or books on programming paradigms
in general? Reply by mail and I will summarize results
if there is enough interest. (The article I referred to
was "If Prolog is the Answer, What is the Question? or
What it Takes to Support AI Programming Paradigms" by
Daniel G. Bobrow in IEEE Trans. on Software Engineering, Nov. 1985.
I recommend the article.)
Joe Sloan,
Box 3090
Duke University Medical Center
Durham, NC 27710
(919) 684-3754
duke!jds,
------------------------------
Date: 3 Jun 86 14:46:53 GMT
From: sdcsvax!caip!seismo!columbia!lexington.columbia.edu!polish@ucbvax.berkeley.edu (Nathaniel Polish)
Subject: Prolog on IBM/PCs
I am looking for a version of Prolog (or other expert system building
tool) for the IBM/PC environment. I am looking for comments on the
real usefulness of these tools.
Thanks
Nat Polish@columbia-20
------------------------------
Date: 3 Jun 86 19:41:30 GMT
From: hplabs!oliveb!glacier!kestrel!king@ucbvax.berkeley.edu (Dick King)
Subject: Re: Common LISP style standards.
From: michaelm@bcsaic.UUCP (michael maxwell)
We have a long list,
and we wish to apply some test to each member of the list. However, at some
point in the list, if the test returns a certain value, there is no need to
look further ...
I'm way behind in this group, so I apologize in advance if you have
seen this solution or a better one before.
You might try
(prog ()  ; RETURN exits PROG's implicit NIL block with the first hit
(mapcar #'(lambda (y) (when (you-like y) (return (result-for y))))
x))
I tried it, and it works. It doesn't seem dirty to me, and it should
be efficient. Even if the return point of a prog is such that it
forces the lexical closure to be non-vacuous, this shouldn't be a
problem when compiled.
--
Mike Maxwell
Boeing Artificial Intelligence Center
...uw-beaver!uw-june!bcsaic!michaelm
-dick
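[For comparison, the same search-with-early-exit pattern in Python; an
illustrative sketch only, where you_like and result_for stand in for the
hypothetical predicates in the posted Lisp code. -- Ed.]

```python
def first_result(items, you_like, result_for):
    """Return result_for(y) for the first y satisfying you_like, else None.
    The generator stops at the first hit, so the rest of the list is never
    scanned -- the early exit the thread is after."""
    return next((result_for(y) for y in items if you_like(y)), None)

print(first_result([1, 8, 3], lambda y: y > 5, lambda y: y * 10))  # prints 80
```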
------------------------------
Date: Wed, 4 Jun 86 09:34:08 pdt
From: Marc Majka <majka%ubc.csnet@CSNET-RELAY.ARPA>
Subject: Inside Out
> ...Einstein's theory of general relativity, which models the cosmos
> as a 4 dimensional pseudo-Riemannian spacetime. ...
*pseudo*-Riemannian? I think you mean semi-Riemannian, and that applies
to the metric, not the spacetime.
---
Marc Majka
------------------------------
Date: 4 Jun 86 06:29 EDT
From: WAnderson.wbst@Xerox.COM
Subject: Humor - Brain Theory
Conscious and subconscious mind:
In your brain are two files. One is read-write. The other is
write-only with global side effects.
(Attributed to a computer science student at Rochester Institute of
Technology.)
Bill Anderson
------------------------------
Date: Tue, 27 May 86 13:36:39 edt
From: rti-sel!dg_rtp!rtp41!dg_rama!bruces%mcnc.csnet@CSNET-RELAY.ARPA
Subject: Computer Ethics
[Forwarded from the Risks Digest by Laws@SRI-AI.]
The following is a copy of a review I wrote for a recent newsletter of the
Boston chapter of Computer Professionals for Social Responsibility (CPSR).
Readers of RISKS may be interested, as well.
METAPHILOSOPHY is a British journal published three times yearly which is
dedicated to considerations about particular schools, fields, and methods of
philosophy. The October 1985 issue, Computers & Ethics (Volume No. 16, Issue
No. 4), is recommended reading [...].
This issue's articles attempt to define and delimit the scope of Computer
Ethics, and examine several emerging and current concerns within the field.
One current concern is responsibility for computer-based errors. In his
article on the subject, John W. Snapper asks: "...whether it is advisable to
...write the law so that a machine is held legally liable for harm." The author
invokes Aristotle's "Nicomachean Ethics" (!) in an analysis of how computers
make decisions, and what is meant by "decision" in this context.
On the same subject, William Bechtel goes one step further, considering the
possibility that computers could one day bear not only legal, but moral
responsibility for decision-making: "When we have computer systems that ...can
be embedded in an environment and adapt their responses to that environment,
then it would seem that we have captured all those features of human beings
that we take into account when we hold them responsible."
Deborah G. Johnson discusses another concern: ownership of computer programs.
In "Should Computer Programs Be Owned?," Ms. Johnson criticizes utilitarian
arguments for ownership, as well as arguments based upon Locke's labor theory
of property. The proper limits to extant legal protections, including
copyrights, patents, and trade secrecy laws, are called into question.
Other emerging concerns include the need to educate the public on the dangers
and abuses of computers, and the role of computers in education. To this end,
Philip A. Pecorino and Walter Maner present a proposal for a college level
course in Computer Ethics, and Marvin J. Croy addresses the ethics of
computer-assisted instruction.
Dan Lloyd, in his provocative but highly speculative article, "Frankenstein's
Children," envisions a world where cognitive simulation AI succeeds in
producing machine consciousness, resulting in a possible ethical clash of the
rights of artificial minds with human values.
The introductory article, James H. Moor's "What is Computer Ethics," is an
ambitious attempt to define Computer Ethics, and to explain its importance.
According to Moor, the development and proliferation of computers can rightly
be termed "revolutionary": "The revolutionary feature of computers is their
logical malleability. Logical malleability assures the enormous application of
computer technology." Moor goes on to assert that the Computer Revolution, like
the Industrial Revolution, will transform "many of our human activities and
social institutions," and will "leave us with policy and conceptual vacuums
about how to use computer technology."
An important danger inherent in computers is what Moor calls "the invisibility
factor." In his own words: "One may be quite knowledgeable about the inputs
and outputs of a computer and only dimly aware of the internal processing."
These hidden internal operations can be intentionally employed for unethical
purposes, which Moor calls "invisible abuse," or can contain "invisible
programming values": value judgments of the programmer that reside, insidious
and unseen, in the program.
Finally, in the appendix, "Artificial Intelligence, Biology, and Intentional
States," editor Terrell Ward Bynum argues against the concept that "intentional
states" (i.e. belief, desire, expectation) are causally dependent upon
biochemistry, and thus cannot exist within a machine.
If you're at all like me, you probably find that reading philosophy can be
"tough going," and METAPHILOSOPHY is no exception. References to unfamiliar
works and the use of unfamiliar terms occasionally necessitated my reading
passages several times before extracting any meaning from them. The topics,
however, are quite relevant and their treatment is, for the most part,
lively and interesting. With its well-written introductory article, diverse
survey of current concerns, and fairly extensive bibliography, this issue of
METAPHILOSOPHY is an excellent first source for those new to the field of
Computer Ethics.
[METAPHILOSOPHY, c/o Expediters of the Printed Word Ltd., 515 Madison Avenue,
Suite 1217, New York, NY 10022]
Bruce A. Sesnovich mcnc!rti-sel!dg_rtp!sesnovich
Data General Corp. rti-sel!dg_rtp!sesnovich%mcnc@csnet-relay.arpa
Westboro, MA "Problems worthy of attack
prove their worth by hitting back"
------------------------------
End of AIList Digest
********************
∂06-Jun-86 1321 LAWS@SRI-AI.ARPA AIList Digest V4 #143
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 6 Jun 86 13:21:21 PDT
Date: Fri 6 Jun 1986 09:39-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #143
To: AIList@SRI-AI
AIList Digest Friday, 6 Jun 1986 Volume 4 : Issue 143
Today's Topics:
Queries - Medical Expert Systems & MRS & IJCAI Awards Nominations,
Opinion - Sexism and Repression,
Seminar - Deductive Synthesis of Sorting Programs (SRI),
Conference - 1986 SGAICO Conference on 2nd-Generation Expert Systems
----------------------------------------------------------------------
Date: 5 Jun 86 02:23:00 GMT
From: hplabs!hplabsb!marvit@ucbvax.berkeley.edu (Peter Marvit)
Subject: Medical Expert System availability? (+ Mac query)
I ask for a friend who is finishing med school and is looking to "dabble"
and/or explore the possibilities...
What is a good source of information about medically oriented Expert
systems/diagnostics aids?
Are PUFF, MYCIN, EMYCIN, and the rest of the famous ones public domain?
Are they distributed by commercial folks, if at all?
Do you think that such Expert Systems [gawd, I hate the term] will be
available in the future on Macintosh- or PC-level delivery systems?
Finally, I've seen the list of expert system tools for PC. Any personal
or anecdotal experiences with them (especially from those who are more
domain oriented than AI hackers)? Would someone like to start a similar
list of Macintosh tools?
ADVthanksANCE,
Peter Marvit ARPA: marvit@hplabs.arpa (no new style, yet)
HP LABS UUCP: ...!hplabs!arpa
------------------------------
Date: 4 Jun 86 03:24:24 GMT
From: pur-ee!mendozag@ucbvax (Grado)
Subject: MRS search
I have missed the several discussions about MRS in net.ai, especially
related to obtaining sources and documentation.
Can anyone provide me with pointers as to how to obtain the
sources along with appropriate references ?
I would appreciate any help.
Victor M Grado
Box 62,
School of Electrical Engineering,
Purdue University
West Lafayette, IN 47907
(317) 494-3494
ARPA: mendozag@ee.purdue.edu
[I will send copies of the previous discussion (AIList V. 4, Nos.
11, 15, 17, 21, 32). -- KIL]
------------------------------
Date: 4 Jun 86 17:40:12 GMT
From: cad!nike!sri-spam!klee@ucbvax.berkeley.edu (Ken Lee)
Subject: Re: MRS search
Contact Professor Mike Genesereth, Computer Science Dept., Stanford University,
Stanford, CA 94305 (genesereth@su-sushi.arpa) for information. I think he's in
charge of the MRS team.
Also, if anyone saved the MRS discussion, could you send me a copy, please?
I'm new to the net and did not get it. I just finished Prof. Genesereth's
expert systems course, using MRS, and I'm interested in other people's opinions
and comments on MRS.
Thanks much.
Ken Lee
arpanet: klee@sri-spam
uucp: ucbvax!klee@sri-spam.arpa
------------------------------
Date: 5 Jun 86 20:03:08 GMT
From: CS.UCL.AC.UK!bundy%aiva.edinburgh.ac.uk@ucbvax.berkeley.edu
(Alan Bundy)
Subject: IJCAI Awards Nominations
CALL FOR NOMINATIONS FOR IJCAI AWARDS
The IJCAI Award for Research Excellence
The IJCAI Award for Research Excellence is given at each
International Joint Conference on Artificial Intelligence,
to a scientist who has carried out a program of research of
consistently high quality yielding several substantial
results. If the research program has been carried out
collaboratively, the award may be made jointly to the research
team. The first recipient of this award was John McCarthy
in 1985.
The Award carries with it a certificate and the sum of
$1,000 plus travel and living expenses for the IJCAI. The
researcher(s) will be invited to deliver an address on the
nature and significance of the results achieved and write a
paper for the conference proceedings. Primarily, however,
the award carries the honour of having one's work selected
by one's peers as an exemplar of sustained research in the
maturing science of Artificial Intelligence.
We hereby call for nominations for The IJCAI Award for
Research Excellence to be made at IJCAI-87 in Milan. The
accompanying note on Selection Procedures for IJCAI Awards
provides the relevant details.
The Computers and Thought Award
The Computers and Thought Lecture is given at each
International Joint Conference on Artificial Intelligence by
an outstanding young scientist in the field of artificial
intelligence. The Award carries with it a certificate and
the sum of $1,000 plus travel and subsistence expenses for
the IJCAI. The Lecture is given one evening during the
Conference, and the public is invited to attend. The Lecturer is
invited to publish the Lecture in the conference proceedings.
The Lectureship was established with royalties received
from the book Computers and Thought, edited by Feigenbaum
and Feldman; it is currently supported by income from IJCAI
funds.
Past recipients of this honour have been Terry Winograd
(1971), Patrick Winston (1973), Chuck Rieger (1975), Douglas
Lenat (1977), David Marr (1979), Gerald Sussman (1981), Tom
Mitchell (1983) and Hector Levesque (1985).
Nominations are invited for The Computers and Thought
Award to be made at IJCAI-87 in Milan. The note on Selection
Procedures for IJCAI Awards covers the nomination procedures
to be followed.
Selection Procedures for IJCAI Awards
Nominations for The Computers and Thought Award and The
IJCAI Award for Research Excellence are invited from all in
the Artificial Intelligence international community. The
procedures are the same for both awards.
There should be a nominator and a seconder, at least
one of whom should not have been in the same institution as
the nominee. The nominee must agree to be nominated. There
are no other restrictions on nominees, nominators or
seconders. The nominators should prepare a short submission of
less than 2,000 words for the voters, outlining the nominee's
qualifications with respect to the criteria for the
particular award.
The award selection committee is the union of the
Program, Conference and Advisory Committees of the upcoming
IJCAI and the Board of Trustees of IJCAII, with nominees
excluded. Nominations should be submitted before December
1st, 1986 to the Conference Chair for IJCAI-87:
Dr Alan Bundy,
IJCAI-87 Conference Chair,
Department of Artificial Intelligence,
University of Edinburgh,
80 South Bridge,
Edinburgh, EH1 1HN,
Scotland. tel 44-31-225-7774 ext 242
ArpaNet: bundy@rutgers.arpa
JANet: bundy@uk.ac.edinburgh
------------------------------
Date: Mon 2 Jun 86 23:03:05-PDT
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: Re: Backward AI
May I take this opportunity to be a "righty" and attempt to reprogram the
software of Tom Tedrick by expressing my dismay with his sexist description
of women as inexplicably trying to "poison" the efficiency of their husbands
CPU's. One should be careful about attributing to whole classes of people
the traits of one's personal friends. Also, the milk request was interpreted
as being a passive-aggressive act. This represents repressed anger which
should be brought to the surface.
------------------------------
Date: Thu 5 Jun 86 15:25:33-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Deductive Synthesis of Sorting Programs (SRI)
DEDUCTIVE SYNTHESIS OF SORTING PROGRAMS
Jon Traugott (JCT@SAIL)
Stanford University
11:00 AM, MONDAY, June 9
SRI International, Building E, Room EJ228 (new conference room)
Using the deductive synthesis framework developed by Manna and
Waldinger we have derived a wide variety of recursive sorting
programs. These derivations represent the first application of the
deductive framework to the derivation of nontrivial algorithms. While
the programs given were derived manually, we ultimately hope that a
computer implementation of the system (of which none currently exists)
will find similar programs automatically. Our derivations are intended
to suggest this possibility; the proofs are short in relation to
program complexity (on the order of 20 steps per procedure) and
individual derivation steps are uncontrived. We also present a new
rule for the generation of auxiliary procedures, a common "eureka"
step in program construction.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: 5 JUN 86 18:10-N
From: SCHNEIDER%CGEUGE51.BITNET@WISCVM.WISC.EDU
Subject: Conference - 1986 SGAICO Conference on 2nd-Generation Expert Systems
SGAICO (Swiss Group for Artificial Intelligence and Cognitive Science)
1986 SGAICO CONFERENCE ON SECOND GENERATION EXPERT SYSTEMS
(including SGAICO General Assembly)
Luc Steels, Walter Van de Velde
University of Brussels
with exhibition
Holiday-Inn, Zurich-Regensdorf
Wednesday, October 22 1986
PURPOSE
SGAICO is organizing a conference on 2nd generation
expert systems. The purpose of the conference is to
inform participants in depth about this important new
trend in expert system technology. First generation
expert systems - which are normally based on associative
if-then rules - are subject to severe limitations. These
include limited general reasoning power, weak explanation and
natural-language capabilities, ungraceful degradation, and
little capacity to learn. One of the reasons for these limitations
is a lack of causal knowledge about the problem
domain. Expert systems of the second generation include
models of the underlying causal dependencies that are
used in non-trivial reasoning processes.
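[As a point of reference, a first-generation "associative if-then rule"
engine can be as simple as the forward chainer sketched below in Python.
The rules and facts are invented for illustration and are not from the
conference material. -- Ed.]

```python
# Each rule: (set of condition facts, fact to conclude).  Invented example.
RULES = [
    ({"battery dead"}, "no power"),
    ({"no power"}, "lights off"),
    ({"no power", "key turned"}, "engine will not crank"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions all hold, repeating until no new
    fact can be derived, and return the enlarged fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Such an engine only associates symptoms with conclusions; it has no model
of *why* a dead battery extinguishes the lights, which is precisely the
causal knowledge that second-generation systems are meant to add.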
Professor Luc Steels and his group are among the leading
researchers on Second Generation Expert Systems. Luc
Steels will give the general keynote speech. His collaborator
van de Velde will discuss a second generation
expert system in depth. A demonstration will be presented
on a Lisp machine.
PARTICIPANTS
This conference is intended for computer-scientists,
engineers, managers and all those wishing to keep up
with the latest developments in this fast-moving field.
We expect participants to have some minimal previous
knowledge of expert systems.
SGAICO GENERAL ASSEMBLY
The SGAICO general assembly (11:00-12:30) includes a
discussion of long-term development of AI in Switzerland.
We are pleased to link our meeting to a Gottlieb
Duttweiler Institute (GDI) event:
USER INTERFACES - GATEWAY OR BOTTLENECK
New Trends of Access to Information and Knowledge
October 20-21, 1986
This 4th International Symposium for Advanced Information
Technology presents new solutions to the problem of
interaction between users and computers in management
and industry. Key topics are: End User Systems, Management
Support Systems, Natural Language Understanding, Decision
Support Systems, Intelligent Query Systems, Process
Control, CAD, Maintenance/Diagnosis, Simulation.
PROGRAM
Speakers Luc Steels,
Walter Van de Velde,
Artificial Intelligence Laboratory,
Vrije Universiteit, Brussels
9:00 - 10:30 Keynote talk by Luc Steels:
Emerging trends in expert systems
11:00 - 12:30 SGAICO General Assembly: Long-term
development of AI in Switzerland.
Moderator: Guenter Albers
14:00 - 15:30 Walter Van de Velde: Learning and
deep reasoning in second generation
expert systems (part I)
16:00 - 17:30 Walter Van de Velde (part II)
17:45 - 19:00 Demonstrations on Lisp-Machines
19:00 Informal "get-together"
exhibition including major Artificial Intelligence
software and hardware.
ORGANIZATION
Program Committee Guenter Albers, University of Geneva
Rolf Pfeifer, University of Zurich
Michael Rosner, University of Geneva
Daniel Schneider, University of Geneva (Chairman)
Patrick Shann, University of Geneva
Fees SI members or members of a SVI/FSI organisation 110.- Sfr
non members 180.- Sfr
student rates 30.- Sfr
Special rates Limited funding is available for persons with financial
needs who wish to apply for student rates.
Registration Please fill out the registration card and mail it by
September 30, 1986. You will be billed upon receiving
confirmation.
Telephone of the SI/SGAICO secretariat:
(..41) 1 481 73 90 (Frau Nicolet)
Lunch Please indicate on the registration form if you plan to
have lunch at the Holiday Inn.
Accommodation For hotel registration, please contact the
Tourist Office, Bahnhofplatz 15, 8023 Zurich,
Tel.: (..41) 1 211 40 00,
or the Holiday Inn, Regensdorf, Tel.: (..41) 1 840 25 20.
For further information about the program contact a program committee member
or email to Daniel Schneider (sender).
For information on the GDI congress, please contact:
Gottlieb Duttweiler Institut
Frau Kunz-Wechler
CH-8803 Ruschlikon
Tel.: (01) 461 37 16
To register contact:
SI/SGAICO
Postfach 570
CH-8027 Zurich
Switzerland
from: Daniel K.Schneider
Departement de Science Politique, Universite de Geneve
1211 GENEVE 4 (Switzerland), Tel. (..41) 22 20 93 33 ext. 2357
to VMS/BITNET: to UNIX/EAN:
BITNET: SCHNEIDER@CGEUGE51 shneider%cui.unige.chunet@CERNVAX
ARPA: SCHNEIDER%CGEUGE51.BITNET@WISCVM shneider%cui.unige.chunet@ubc.csnet
uucp: mcvax!cernvax!cui!shneider
X.400/ean: shneider@cui.unige.chunet
------------------------------
End of AIList Digest
********************
∂10-Jun-86 0025 LAWS@SRI-AI.ARPA AIList Digest V4 #144
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Jun 86 00:24:50 PDT
Date: Mon 9 Jun 1986 21:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #144
To: AIList@SRI-AI
AIList Digest Tuesday, 10 Jun 1986 Volume 4 : Issue 144
Today's Topics:
Literature - Report Sources & Bibliography #1
----------------------------------------------------------------------
Date: Wed, 20 Apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Report Sources
Mrs. A. McCauley
Department of Computer Science
University of Manchester
Oxford Road
Manchester, M13 9PL
England
Payment should be made by sterling cheque made payable to the
University of Manchester
Technical Reports Librarian
Computer Science Department
University of Wisconsin
1210 W. Dayton St.
Madison, WI 53706
F. Renzetti
IMAG
B.P. 68
38402 St Martin d'Heres
France
Technical Reports Secretary
Department of Computer Science
University of Melbourne
Parkville, Victoria, 3052
AUSTRALIA
(Donations of $5.00 per tech report requested)
Naomi Schulman
Publications
Computer Systems Laboratory
Stanford University
Stanford, CA 94305
Centre for Mathematics and Computer Science
Postbus 4079
1009 AB Amsterdam
The Netherlands
(Foreign payments are subject to a surcharge to cover bank, postal
and handling charges)
Ms. K. M. Garcia
Technical Librarian
Department of Computer Science
University of California, Santa Barbara
Santa Barbara, CA 93106
Computing Research Laboratory
University of Michigan
Room 1079, East Engineering Building
Ann Arbor, Michigan 48109
L. A. Stratmann
Department of Computer Science
Rice University
P. O. Box 1892
Houston, Texas 77251
Department of Computer Sciences
Technical Report Center
The University of Texas at Austin
Austin, Texas
CS.TECH@UTEXAS-20
Virginia Polytechnic Institute and State University
Department of Computer Science
562 McBryde Hall
Blacksburg, VA 24061
------------------------------
Date: Wed, 20 Apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography #1
%A William R. Arnold
%A John S. Bowie
%T Artificial Intelligence: A Personal Commonsense Journey
%I Prentice Hall
%D 1986
%K AT15
%X $24.95, ISBN 0-13-148877-1, 219 pages
%A Luc Steels
%A John A. Campbell
%T Progress in Artificial Intelligence
%D 1985
%K AT15
%A Rodney A. Brooks
%T Programming in Common Lisp
%I John Wiley and Sons
%D 1985
%K AT15 T01
%X ISBN 0-471-81888-7
%A Ajit Narayanan
%A Noel E. Sharkey
%T An Introduction to Lisp
%I Chichester: Ellis Horwood
%D 1985
%K AT15 T01
%X ISBN 0-470-20244-0 paperback
%A Alain Bonnet
%T Artificial Intelligence: Promise and Performance
%I Prentice-Hall
%D 1985
%K AT15
%X 221 pages ISBN 0-13-048869-0
%A Wendy B. Rauch-Hindin
%T Artificial Intelligence in Business, Science and Industry, Volume I:
Fundamentals
%I Prentice-Hall
%D 1985
%X 331 pages ISBN 0-13-048893-3 $34.95
%A Wendy B. Rauch-Hindin
%T Artificial Intelligence in Business, Science and Industry, Volume II:
Applications
%I Prentice-Hall
%D 1986
%X 348 pages ISBN 0-13-048901-3 $34.95
%A Tim Johnson
%T Natural Language Computing: The Commercial Applications
%I Ovum Limited
%C London
%K AI02 AT04
%A B. K. Boguraev
%A K. S. Jones
%T A Framework for Inference in Natural Language Front Ends to Databases
%I University of Cambridge Computer Laboratory
%R Report No. 64
%D 1985
%K AI02 AA09
%A F. J. Damerau
%T An Interactive Customization Program for a Natural Language Database
Query System
%I IBM Research Division
%R Report No. 10411
%D 1984
%K AI02 AA09
%A F. J. Damerau
%T Problems and Some Solutions in Customization of Natural Language Data
Base Front Ends
%I IBM Research Division
%D 1984
%R Report No. 10872
%K AI02 AA09
%A H. Enomoto
%T TELL: a Natural Language Based Software Development System
%I Institute for New Generation Computer Technology
%D 1984
%R Report No. 67
%K AI02
%A R. E. Frederking
%T Syntax and Semantics in Natural Language Parsers
%I Carnegie-Mellon University,
Department of Computer Science
%R Report No. 85-133
%D 1985
%K AI02
%A P. S. Jacobs
%T PHRED: A Generator for Natural Language Interfaces
%I University of California, Berkeley Computer Science Division
%R Report No. 85-198
%D 1985
%K AI02
%A D. E. Johnson
%T Design of a Robust, Portable Natural Language Interface Grammar
%I IBM Research Division
%R Report No. 10867
%D 1984
%K AI02
%A J. K. Kalita
%T Generating Summary Responses to Natural Language Database
%I University of Saskatchewan
%R Report No. 84-9
%D 1984
%K AI02 AA09
%A E. Mays
%T A Modal Temporal Logic for Reasoning About Changing Database with
Applications to Natural Language Question Answering
%I University of Pennsylvania, Moore School of Electrical Engineering.
Department of Computer Science
%D 1985
%R Report No. 85-01
%K AI02 AI10 AA09
%A B. Neumann
%T Natural Language Descriptions of Time-Varying Scenes
%I Universitaet Hamburg. Fachbereich Informatik
%R Report No. 105
%D 1984
%K AI02 AI06
%A E. Orlowska
%T The Montague Formalization of Natural Language
%I Polish Academy of Sciences, Institute of Computer Sciences
%R Report No. 548
%D 1984
%K AI02
%A S. R. Petrick
%T Natural Language Database Query Systems
%I IBM Research Division
%R Report No. 10508
%D 1984
%K AI02
%A P. Saint-Dizier
%T An Approach to Natural Language Semantics in Logic Programming
%I Institut National de Recherche en Informatique et en Automatique
%R Report No. 389
%K AI02 AI10
%A L. F. Rau
%T The Understanding and Generation of Ellipses in a Natural Language
System
%I University of California Berkeley. Computer Science Division
%D 1984
%R Report No. 85-227
%K AI02
%T Dynamic Computer Simulation of Multiple Closed-Chain Robotic
Mechanisms
%A Se-Young Oh
%A David E. Orin
%B BOOK28
%K AI07
%T On Dynamic Models of Robot Force Control
%A Steven D. Eppinger
%A Warren P. Seering
%B BOOK28
%K AI07
%T Arm Signature Identification
%A Henry W. Stone
%A Arthur C. Sanderson
%A Charles P. Neuman
%B BOOK28
%K AI07
%T The Effects of Dynamic Models on Robot Control
%A M. B. Leahy Jr.
%A K. P. Valavanis
%A G. N. Saridis
%B BOOK28
%K AI07
%T Experimental Determination of the Effect of Feedforward Control on
Trajectory Tracking Errors
%A Chae H. An
%A Christopher G. Atkeson
%A John M. Hollerbach
%B BOOK28
%K AI07
%T Low Level Control for the Utah/MIT Dextrous Hand
%A K. B. Biggers
%A S. C. Jacobsen
%A G. E. Gerpheide
%B BOOK28
%K AI07
%T Hybrid Position/Force Control of Multi-Arm Cooperating
Robots
%A Samad Hayati
%B BOOK28
%K AI07
%T Solving a Two Dimensional Path Planning Problem Using Topographical
Knowledge of the Environment and Capability Constraints
%A R. F. Richbourg
%B BOOK28
%K AI07
%T Implementing a Force Strategy for Object Re-orientation
%A Ronald S. Fearing
%B BOOK28
%K AI07
%T On-Line Pathfinding in Multi-Robot Systems including Obstacles
%A E. Freund
%A H. Hoyer
%B BOOK28
%K AI07
%T Video Image Stereo Matching Using Phase-Locked Techniques
%A W. Thomas Miller III
%B BOOK28
%K AI07 AI06
%T An Approach to 3-D Object Identification Using Range Images
%A David B. Shu
%A C. C. Li
%A Y. N. Sun
%B BOOK28
%K AI07 AI06
%T Sensing and Describing 3-D Structure
%A Peter K. Allen
%B BOOK28
%K AI07 AI06
%T A New Decomposition for Three-Dimensional
Contours Based on Curvature and Torsion
%A N. Kehtarnavaz
%A R. J. P. de Figueiredo
%B BOOK28
%K AI07 AI06
%T Soft Configuration in Automated Insertion
%A C. B. Lofgren
%B BOOK28
%K AI07 AA26 AA04
%T Part Dispatch in Multistage Card Lines
%A Ram Akella
%B BOOK28
%K AA26
%T Throughput Maximization in Short Cycle Automated Manufacturing
%A M. H. Han
%B BOOK28
%K AA26
%T Job Scheduling Model for a Flexible Manufacturing Machine
%A C. S. Tang
%B BOOK28
%K AA26
%T Graphical Simulation and Automatic Verification of NC Machining Programs
%A U. Sungurtekin
%A H. B. Voelcker
%B BOOK28
%K AA26
%T Real-time Verification of Multi-Axis NC Machining
Programs with Raster Graphics
%A W. P. Wang
%A K. K. Wang
%B BOOK28
%K AA26
%T Real-time Error Compensation System for a Computerized
Numerical Control Turning Center
%A Alkan Donmez
%A Kang Lee
%A C. Richard Liu
%A Moshe M. Barash
%B BOOK28
%K AA26
%T Adaptive Control of Robot Manipulators - A Review
%A T. C. Hsia
%B BOOK28
%K AI07
%T Automatic Generation of the Dynamic Equations of the Robot
Manipulators using a Lisp Program
%A Albert Izaguirre
%A Richard Paul
%B BOOK28
%K AI07 T01
%A M. A. Peskin
%A A. C. Sanderson
%T Manipulation of a Sliding Object
%B BOOK28
%K AI07
%A Rajko Tomovic
%A George A. Bekey
%T Robot Control by Reflex Actions
%B BOOK28
%K AI07
%A M. Togai
%A O. Yamano
%T Learning Control and its Optimality: Analysis and its Applications to
Controlling Industrial Robots
%B BOOK28
%K AI07 AI04
%A Ataru Nakamura
%A Kang G. Shin
%A Neil D. McKay
%T Automatic Generation of Trajectory Planners for Industrial Robots
%B BOOK28
%K AI07
%A G. N. Saridis
%A K. P. Valavanis
%T Mathematical Formulation of the Organization Level of an Intelligent Machine
%B BOOK28
%K AI07
%A Morikazu Takegaki
%A Tadashi Ohi
%T An Advanced Design Support System for Intelligent Robots
%B BOOK28
%K AI07
%A Riccardo Cassinis
%T Automatic Resource Allocation in Industrial Multirobot Systems
%B BOOK28
%K AI07
%A Michael J. Swain
%A Joseph L. Mundy
%T Experiments in Using a Theorem Prover to Prove and Develop Geometrical
Theorems in Computer Vision
%B BOOK28
%K AI06 AI11 AA13 AI14
%A W. Eric L. Grimson
%T Disambiguating Sensory Interpretations Using Minimal Sets of Sensory
Data
%B BOOK28
%K AI06
%A H. S. Yang
%A A. C. Kak
%T Determination of the Identity, Position and Orientation of the Topmost Object
in a Pile
%B BOOK28
%K AI06
%A Judith F. Silverman
%A David B. Cooper
%T Unsupervised Estimation of Polynomial Approximations to Smooth Surfaces
in Images or Range Data
%B BOOK28
%K AI06
%A P. J. Englert
%A P. K. Wright
%T Applications of Artificial Intelligence in the Design of Fixtures
for Automated Manufacturing
%B BOOK28
%K AA26
%A Patrick Fitzhorn
%A Wade O. Troxell
%T A Dynamic Approach to the Robotic Design Cycle
%B BOOK28
%K AI07
%A M. Dado
%A A. H. Soni
%T A Generalized Approach for Forward and Inverse Dynamics of Elastic
Manipulators
%B BOOK28
%K AI07
%A R. Marino
%A S. Nicosia
%A A. Tornambe
%T Dynamic Modelling of Flexible Robot Manipulators
%B BOOK28
%K AI07
%A Gregory P. Starr
%T Edge Following with a PUMA 560 Manipulator Using VAL-II
%B BOOK28
%K AI07
%A M. Silva
%A L. Montano
%A P. Pardos
%T Terminal Controllers for Robots: Shooting and Optimal Control
%B BOOK28
%K AI07
%A Christopher Clark
%A Lawrence Stark
%T Cooperative Robot Control
%B BOOK28
%K AI07
%A Gerhard Hirzinger
%A J. Dietrich
%T Multisensory Robots and Sensorbased Path Generators
%B BOOK28
%K AI07 AI06
%A E. G. Harokopos
%T Optimal Learning Control of Mechanical Manipulators in Repetitive Motions
%B BOOK28
%K AI07 AI04
%A John Wen
%A Alan Desrochers
%T Sub-Time-Optimal Control Strategies for Robotic Manipulators
%B BOOK28
%K AI07
%A M. B. Leahy
%A George N. Saridis
%T The RAL Hierarchical Control System
%B BOOK28
%K AI07
%A Kang G. Shin
%A Neil D. McKay
%T Minimum Time Trajectory Planning for Industrial Robots with General
Torque Constraints
%B BOOK28
%K AI07
%A H. Kazerooni
%A P. E. K. Houpt
%A T. B. Sheridan
%T Robust Compliant Motion for Manipulators, Part I: The Fundamental Concept
of Compliant Motion; Part II: Design Methods
%B BOOK28
%K AI07
%A Mary M. Moya
%A William M. Davidson
%T Sensor Driven Fault Tolerant Control of a Maintenance Robot
%B BOOK28
%K AI07
%A Richard J. Grommes
%A Michael P. Hennessey
%A Warren J. Dick
%T Adaptive Intervehicle Positioning for Robotic Material Transfer
%B BOOK28
%K AI07
%A S. Thunborg
%T A Remote Maintenance Robot System for a Pulsed Nuclear Reactor
%B BOOK28
%K AI07
%A Nobuyoshi Yokobori
%A Pen-shu Yeh
%A Azriel Rosenfeld
%T Sub-Pixel Geometric Correction of Pictures by Calibration and
Decalibration
%B BOOK28
%K AI06
%A Ichiro Masaki
%T Modular Multi-Resolution Vision Processor
%B BOOK28
%K AI06
%A Ronald Lumia
%T Rapid Hidden Feature Elimination Using an Octree
%B BOOK28
%K AI06
%A Nien-hu Chao
%A E. N. Schiebel
%T Inspection Assistant - A Knowledge-Based System for Piece Part Inspection
%B BOOK28
%K AI06
%A Agostino P. M. Villa
%A Roberto Mosca
%A Giuseppe Murari
%T Expert Control Theory: A Key for Solving
Production Planning Control Problem in Flexible Manufacturing
%B BOOK28
%K AA26
%A R. Ippolito
%A S. Rosseto
%A M. Vallauri
%A A. P. M. Villa
%T The Emergence of Artificial Intelligence Applications in
Manufacturing
%B BOOK28
%K AA26
%A Claudio Boer
%T Expert Control System Requirements for Manufacturing Process Control
%B BOOK28
%K AA26
%A Cynthia K. Whitney
%T Building "Expert Systems" When No Experts Exist
%B BOOK28
%K AA26 AI01
%A Alan A. Desrochers
%A Christopher M. Seaman
%T A Projection Method for Simplifying Robot Manipulator Models
%B BOOK28
%K AI07
%A Brian Armstrong
%A Oussama Khatib
%A Joel Burdick
%T The Explicit Dynamic Model and Inertial Parameters of the PUMA 560 Arm
%B BOOK28
%K AI07
%A M. B. Leahy Jr.
%A L. M. Nugent
%A K. P. Valavanis
%A G. N. Saridis
%T Efficient Dynamics for a PUMA 600
%B BOOK28
%K AI07
%A R. B. Kelley
%T Vertical Integration for Robot Assembly Cells
%B BOOK28
%K AI07
%A S. A. Cameron
%A R. K. Culley
%T Determining the Minimum Translational Distance Between Two Convex
Polyhedra
%B BOOK28
%K AI07 O06
%A Walter Meyer
%T Distance Between Boxes: Applications to Collision Detection and
Clipping
%B BOOK28
%K AI07
%A R. Alami
%A H. Chochon
%T NNS, a Knowledge-Based On-Line System for an Assembly WorkCell
%B BOOK28
%K AI07
%A A. Rovetta
%A G. Frosi
%T Logical Structure for Assembly with Robot
%B BOOK28
%K AI07
%A J. R. Stenstrom
%A C. I. Connolly
%T Building Wire Frames from Multiple Range Views
%B BOOK28
%K AI07 AI06
%A X. Zhuang
%A T. S. Huang
%T From Two-View Motion Equations to Three-Dimensional Motion Parameters
and Surface Structure: A Direct and Stable Algorithm
%B BOOK28
%K AI07
%A Giulio Sandini
%A Massimo Tistarelli
%T Analysis of Object Motion and Camera Motion in Real Scenes
%B BOOK28
%K AI06
%A J. Amat
%A A. Casals
%A V. Llario
%T Improving Accuracy and Resolution of a Motion Stereo Vision System
%B BOOK28
%K AI06
%A B. Cernuschi-Frias
%A D. B. Cooper
%A P. N. Belhumeur
%T 3-D Object Position Estimation and Recognition Based on Parameterized
Surfaces and Multiple Views
%B BOOK28
%K AI06
%A G. M. Acaccia
%A R. C. Michelini
%A R. M. Molfino
%A P. A. Piaggio
%T X-SIFIP: A Knowledge-based Special Purpose Simulator for the
Development of Flexible Manufacturing Cells
%B BOOK28
%K AA26
%A Andrew Kusiak
%T FMS Scheduling: A Crucial Tool in an Expert Control Structure for
Production Planning
%B BOOK28
%K AA26 AI01
%A Jon D. Erickson
%A Aaron Cohen
%T Autonomous Robotic Aspects of the Space Station Program
%B BOOK28
%K AI07 AA27
%A W. Kohn
%A K. Healy
%T On-Line Task Interpreter for Astrobot
%B BOOK28
%K AI07 AA27
%A Scott Y. Harmon
%A Douglas W. Grange
%A Walter A. Aviler
%T Techniques for Coordinating Autonomous Robots
%B BOOK28
%K AI07
%A Mark A. Bronez
%A Margaret M. Clarke
%A Alberta Quinn
%T Requirements Development for a Free-Flying Robot -- The Robin
%B BOOK28
%K AI07 AA19
%A Jeffrey S. Schoenwald
%A Michael S. Black
%A Gregory A. Arnold
%A Timothy A. Allison
%T Improved Robot Trajectory from Acoustic Range Servo Control
%B BOOK28
%K AI07
%A Ljubomir T. Grujic
%T Tracking Analysis for Non-Stationary Non-Linear Discrete-Time Systems
%B BOOK28
%K AI07
%A Tomoaki Kubo
%A George Anwar
%A Masayoshi Tomizuka
%T Applications of Nonlinear Friction Compensation to Robot Arm Control
%B BOOK28
%K AI07
%A Daniel E. Whitney
%T Real Robots Don't Need Jigs
%B BOOK28
%K AI07 AA26
%A Margo K. Apostolos
%T Robot Choreography: An Aesthetic Application in User Acceptance of
a Robot Arm
%B BOOK28
%K AI07 AA25 O01
%A Stuart G. Stanley
%A Mansour Eslami
%T On Design of an Educational Robot
%B BOOK28
%K AI07 AT18
%A P. J. Becker
%T Sensor Information Processing in Robot Control Systems
%B BOOK28
%K AI07
%A Gerard Medioni
%A Yoshio Yasumoto
%T Corner Detection and Curve Representation Using Cubic B-Splines
%B BOOK28
%K AI06
%A Xueyin Lin
%A William G. Lee
%T SDFS: A New Strategy for the Recognition of Objects Using Range
Data
%B BOOK28
%K AI06
%A Bir Bhanu
%A John C. Ming
%T Recognition of 2-D Occluded Objects using a Cluster-Structure Paradigm
%B BOOK28
%K AI06
%A Michael Magee
%A Mitchell Nathan
%T A Theorem Proving Based Pattern Recognition System
%B BOOK28
%K AI06 AI11 AI14
%A Patricia MacConaill
%T Automation and CIMS in the Esprit Program
%B BOOK28
%K AA26
%A Ulrich Rembold
%A M. Vojnovic
%T Operational Control for Robot Systems Integration into CIM
%B BOOK28
%K AA26 AI07
%A Lyle M. Jenkins
%T Telerobotic Work System - Space Robotics Application
%B BOOK28
%K AI07 AA27
%A David L. Akin
%T Parametric Testing of Space Teleoperators through Neutral Buoyancy
Simulation
%B BOOK28
%K AI07 AA27
%A T. Sheridan
%T Human Supervisory Control of Robot Systems
%B BOOK28
%K AI07 O01
%A Jack Pennington
%T (I) Space Telerobotics: A Few More Hurdles
%B BOOK28
%K AI07 AA27
%A Fredrik Dessen
%T Coordinating Control of a Two Degrees of Freedom Universal Joint Structure
Driven by Three Servos
%B BOOK28
%K AI07
%A Chang-huan Liu
%A Yen-ming Chen
%T Multimicroprocessor-based Cartesian Space Control
%B BOOK28
%K AI07
%A Subbiah Mahalingam
%A Anand M. Sharan
%T The Optimal Balancing of the Robotic Manipulators
%B BOOK28
%K AI07
%A N. Sreenath
%A P. S. Krishnaprasad
%T DYNAMAN: A Tool for Manipulator Design and Analysis
%B BOOK28
%K AI07
%A Sanjeev R. Maddila
%T Motion Planning Algorithm for a Ladder Among Rectangular
Obstacles
%B BOOK28
%K AI07 O06
%A Michael Erdmann
%A T. Lozano-Perez
%T On Multiple Moving Objects
%B BOOK28
%K AI07
%A Michael Brady
%T Recent Advances Toward a Surface Primal Sketch
%B BOOK28
%K AI06
%A Martial Hebert
%A Takeo Kanade
%T Range Data Analysis of Outdoor Scenes
%B BOOK28
%K AI06
%A N. Ayache
%A O. D. Faugeras
%A B. Faverjon
%A F. Lustman
%T Building Visual Maps by Combining Noisy Stereo Measurements
%B BOOK28
%K AI06
%A T. Poggio
%A Michael Drumheller
%T Parallel Stereo
%B BOOK28
%K AI06 H03 Thinking Machines
%A S. Harmon
%A G. Bianchini
%A B. Pinz
%T Sensor Data Fusion Through a Distributed Blackboard
%B BOOK28
%K AI06
%A J. Crowley
%T Generalized Surface Patches: A Representation for Composite
Surface Modeling
%B BOOK28
%K AI06
%A K. J. Overton
%T Range Vision, Force, and Tactile Sensory Integration:
Issues and an Approach
%B BOOK28
%K AI06 AI07
%A H. Durrant-Whyte
%T Integration of Distributed Sensor Observation
%B BOOK28
%K AI06 AI07
%A Waj-Joon Lee
%A David E. Orin
%T The Kinematics of Legged Locomotion Over Uneven Terrain
%B BOOK28
%K AI07
%A U. Ozguner
%T Control of Quadruped Trot
%B BOOK28
%K AI07
%A Chi-Keng Tsai
%A David E. Orin
%T Using Proximity Sensing in Robot Leg Control
%B BOOK28
%K AI07
%A Jagdish Joshi
%A Alan A. Desrochers
%T Modeling and Control of a Mobile Robot Subject to
Disturbances
%B BOOK28
%K AI07
%A Hiroaki Kobayashi
%T Grasping and Manipulation of Objects by Articulated Hands
%B BOOK28
%K AI07
%A Steve Jacobsen
%A E. K. Iversen
%A D. F. Knutti
%A R. T. Johnson
%A K. B. Biggers
%T Machinery Issues in End Effector Design
%B BOOK28
%K AI07
%A Mark R. Cutkosky
%A Paul K. Wright
%T Modeling Manufacturing Grips and Correlations with the Design of Robotic
Hands
%B BOOK28
%K AI07 AA26
%A J. C. Becker
%A N. V. Thakor
%A K. G. Gruben
%T A Study of Human Hand Tendon Kinematics with Applications to Robot Hand
Design
%B BOOK28
%K AI07
%A Michael A. Erdmann
%A Matthew T. Mason
%T An Exploration of Sensorless Manipulation
%B BOOK28
%K AI07
%A Randy C. Brost
%T Automatic Grasp Planning in the Presence of Uncertainty
%B BOOK28
%K AI07 O04 AI09
%A Juan Juan
%A R. P. Paul
%T Model for Automatic Programming of Fine-Motion in Assemblies
%B BOOK28
%K AI07
%A Bruce R. Donald
%T Robot Motion Planning with Uncertainty in the Geometric Models of the
Robot Environment: A Formal Framework for Error Detection and Recovery
%B BOOK28
%K AI07 O04 AI09
%A Tsuji
%T Recent Advances Toward the Realization of a Flexible Mobile Vehicle
%B BOOK28
%K AI07 AA19
%A Allen M. Waxman
%A Jacqueline Le Moigne
%A Larry S. Davis
%A Eli Liang
%A Tharakesh Siddalingaiah
%T A Visual Navigation System
%B BOOK28
%K AI07 AI06
%A Y. Y. Huang
%A Z. L. Cao
%A E. L. Hall
%T Region Filling Operation for Mobile Robot Using Computer Graphics
%B BOOK28
%K AI07 AA19
%A Richard Wallace
%A Kichie Matsuzaki
%A Yoshimasa Goto
%A Jon Webb
%A Jill Crisman
%A Takeo Kanade
%T Progress in Robot Road Following
%B BOOK28
%K AI07 AA19
%A C. Thorpe
%A S. Shafer
%A A. Stentz
%T An Architecture for Data Fusion
%B BOOK28
%K AI06 sensors
%A M. Shimojo
%A O. Khatib
%T Intelligent Fusion of Tactile Sensor Data
%B BOOK28
%K AI07 AI06
%A D. Morley
%A S. Chiu
%A J. Martin
%T Sensor Data Fusion on a Parallel Processor
%B BOOK28
%K AI07 H03 AI06
%A E. W. Kent
%A M. Shneier
%A T. H. Hong
%T Building Representations from Fusions of Multiple Views
%B BOOK28
%K AI07 AI06
%A E. Bensana
%A M. Correge
%A G. Bel
%A D. Dubois
%T An Expert System Approach to Industrial Job Shop Scheduling
%B BOOK28
%K AI07 AA26 AA05 AI01
%A J. Erschler
%A P. Esquirol
%T Decision Aid in Job Shop Scheduling: A Knowledge Based Approach
%B BOOK28
%K AI07 AA26
%A Alexandre M. Parodi
%A John J. Nitao
%A Louis S. McTamaney
%T An Intelligent System for an Autonomous Vehicle
%B BOOK28
%K AI07 AA19
------------------------------
End of AIList Digest
********************
∂10-Jun-86 0313 LAWS@SRI-AI.ARPA AIList Digest V4 #145
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Jun 86 03:13:29 PDT
Date: Mon 9 Jun 1986 21:36-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #145
To: AIList@SRI-AI
AIList Digest Tuesday, 10 Jun 1986 Volume 4 : Issue 145
Today's Topics:
Literature - Bibliography #2
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography #2
%A C. S. G. Lee
%A P. R. Chang
%T Efficient Parallel Algorithm for Robot Inverse Dynamics Computation
%B BOOK28
%K AI07 H03
%A Shaheen Ahmad
%T Real-Time Multi-processor Based Robot Control
%B BOOK28
%K AI07
%A V. Dupourque
%A H. Guiot
%A O. Ischacian
%T Towards Multi-Processor and Multi-Robot Controllers
%B BOOK28
%K AI07 H03
%A John M. Hollerbach
%A John E. Wood
%T Finger Force Computation without the Grip Jacobian
%B BOOK28
%K AI07
%A John Jameson
%A Larry Leifer
%T Quasi-Static Analysis: A Method for Predicting Grasp Stability
%B BOOK28
%K AI07
%A Van-Duc Nguyen
%T The Synthesis of Stable Grasps in the Plane
%B BOOK28
%K AI07
%A James Barber
%A Richard A. Volz
%A Rajiv Desai
%A Ronitt Rubenfeld
%A Brian Schipper
%A Jan Wolter
%T Automatic Two-Fingered Grip Selection
%B BOOK28
%K AI07
%A Dinesh K. Pai
%A M. C. Leu
%T INEFFABELLE - An Environment for Interactive Computer
Graphic Simulations of Robotic Applications
%B BOOK28
%K AI07
%A S. A. Hutchinson
%A A. C. Kak
%T FProlog: A Language to Integrate Logic and Functional Programming
for Automated Assembly
%B BOOK28
%K AI07 T02 AI10
%A Mitchell S. Steffen
%A Timothy J. Greene
%T An Application of Hierarchical Planning and Constraint-directed Search
to Scheduling Parallel Processors
%B BOOK28
%K AI07 H03 AI09
%A John T. Feddema
%A Shaheen Ahmad
%T Determining a Static Robot Grasp for Automated Assembly
%B BOOK28
%K AI07
%A Thomas F. Knoll
%A Ramesh C. Jain
%T Recognizing Partially Visible Objects Using Feature Indexed Hypotheses
%B BOOK28
%K AI07 AI06
%A Stephen J. Gordon
%A Warren P. Seering
%T Accuracy Issues in Measuring Quantized Images of Straight Line Features
%B BOOK28
%K AI07
%A C. K. Cowan
%A R. C. Bolles
%A M. J. Hannah
%A J. A. Herson
%T Edge Chain Analysis for Object Verification
%B BOOK28
%K AI06
%A Rashpal S. Ahluwalia
%A Lynn M. Fogwell
%T A Modular Approach to Visual Servoing
%B BOOK28
%K AI07
%A Melvin Montemerlo
%T NASA's Robotics and Automation Technology Development Program
%B BOOK28
%K AI07 AA27
%A Chung Fong
%A A. K. Bejczy
%A R. Dotson
%T Distributed Microcomputer Control System for Advanced Teleoperators
%B BOOK28
%K AI07 AA27 H03
%A Bernard Espiau
%T An Integrated Experiment in Advanced Nuclear Teleoperation
%B BOOK28
%K AI07
%A Fumio Miyazaki
%A Shigaki Matsubayashi
%A Takashi Yoshimi
%A Suguru Arimoto
%T A New Control Methodology toward Advanced Teleoperation of
Master-Slave Robot Systems
%B BOOK28
%K AI07
%A K. Youcef-Toumi
%A H. Asada
%T The Design of Open Loop Manipulator Arms with Decoupled and Configuration
Invariant Inertia Tensors
%B BOOK28
%K AI07
%A B. W. Mooring
%A T. J. Pack
%T Determination and Specification of Robot Repeatability
%B BOOK28
%K AI07
%A Vincent Hayward
%T Fast Collision Detection Scheme by Recursive Decomposition of a Manipulator
Workspace
%B BOOK28
%K AI07
%A Vladimir J. Lumelsky
%T Continuous Path Planning for a Three-Dimensional Cartesian Robot Arm
%B BOOK28
%K AI07 AI09
%A Martin Herman
%T Fast, Three-Dimensional, Collision-Free Motion Planning
%B BOOK28
%K AI07 AI09
%A R. K. Culley
%A K. G. Kempf
%T A Collision Detection Algorithm Based on Velocity and Distance
Bounds
%B BOOK28
%K AI07
%A Richard E. Smith
%A Maria Gini
%T Robot Tracking and Control Issues in an Intelligent Error Recovery System
%B BOOK28
%K AI07
%A Marco Somalvico
%T The Role of White Collar Robots: Real-Time Expert Systems with Multi-Media
Sensory Systems
%B BOOK28
%K AI07 AI01 O03
%A V. Dupourque
%T Using Abstraction Mechanisms to Solve Complex Task Programming in Robotics
%B BOOK28
%K AI07
%A M. L. Hornick
%A B. Ravani
%T Data Structure and Database Design for Model Driven Robot Programming
%B BOOK28
%K AI07
%A John W. Roach
%A Jeff S. Wright
%T Spherical Dual Images: A 3D Representation Method for Solid Objects
that Combines Dual Space and Gaussian Spheres
%B BOOK28
%K AI07
%A Eric P. Krotkov
%A Jean Paul Martin
%T Range From Focus
%B BOOK28
%K AI06
%A Christopher Bania
%A James C. Lin
%T Theory and Implementation of a High Capacity 3-D Recognition System
%B BOOK28
%K AI06
%A A. Robert de Saint Vincent
%T A 3D Perception System for the Mobile Robot Hilare
%B BOOK28
%K AI07 AA19 AI06
%A Michael J. Smith
%T Sociotechnical Considerations in Robotics and Automation
%B BOOK28
%K AI07 O05
%A George Burri
%A Martin G. Helander
%T Case Studies of Human Factors/Ergonomic Design in Robotics and
Automation at IBM
%B BOOK28
%K AI07 O01
%A Olov Ostberg
%T A European Perspective on Human Factors Aspects of Robotics and
Automation
%B BOOK28
%K AI07 O01 GA03
%A Dennis Bering
%T Supervisory Interface with Expert Systems for Semi-Autonomous Walking
Robots
%B BOOK28
%K AI07 O01 AI01
%A S. V. Nageswara Rao
%A S. S. Iyengar
%A C. C. Jorgenson
%A C. R. Weisbin
%T Concurrent Algorithms for Autonomous Robot
Navigation in an Unexplored Terrain
%B BOOK28
%K AI07 AI06 AA19 H03
%A J. L. Olivier
%A F. Ozguner
%T A Navigation Algorithm for an Intelligent Vehicle with a Laser Rangefinder
%B BOOK28
%K AI07 AI06 AA19
%A Alberto Elfes
%T A Sonar-Based Mapping and Navigation System
%B BOOK28
%K AI07 AI06 AA19
%A Shahriar Negahdaripour
%T Direct Passive Navigation: Analytical Solutions for Planes and
Curved Surfaces
%B BOOK28
%K AI07 AI06
%A Kye Y. Lim
%A Mansour Eslami
%T Robust Adaptive Controller Designs for Robot Manipulator Systems
%B BOOK28
%K AI07
%A Steven Fortune
%A Gordon Wilfong
%A Chee Yap
%T Coordinated Motion of Two Robot Arms
%B BOOK28
%K AI07
%A Pierre Tournassoud
%T A Strategy for Obstacle Avoidance and its Application to Multi-Robot
Systems
%B BOOK28
%K AI07
%A Yuan F. Zheng
%A Fred R. Sias, Jr.
%T Multiple Robot Arms in Assembly
%B BOOK28
%K AI07 AA26
%A Sohail S. Houssani
%A David E. Jakopac
%T Multiple Manipulators and Robotic Workcell Coordination
%B BOOK28
%K AI07 AA26
%A Matt Barth
%A Srinivasan Parthasarathy
%A Jing Wang
%A Evelyn Hu
%A Susan Hackwood
%A Gerardo Beni
%T A Color Vision System for Microelectronics: Application to Oxide
Thickness Measurements
%B BOOK28
%K AI07 AI06
%A Ren C. Luo
%A Wen-Hsiang Tsai
%T Object Recognition Using Tactile Image Array Sensors
%B BOOK28
%K AI07 AI06
%A Kenneth J. Overton
%A Vivek V. Badami
%T Tactile Sensors for Robotic Touch
%B BOOK28
%K AI07 AI06
%A M. R. Driels
%T Pose Estimation Using Tactile Sensor Data for Assembly Operation
%B BOOK28
%K AI07
%A J. Schneiter
%A T. B. Sheridan
%T Optimal Strategy for Object Recognition by Tactile Sensing
%B BOOK28
%K AI07
%A P. Dario
%A M. Bergamasco
%A A. Fiorillo
%A R. Di Leonardo
%T Geometrical Optimization and Design Criteria for Tactile Sensing Patterns
%B BOOK28
%K AI07
%A S. A. Stansfield
%T Primitives, Features and Exploratory Procedures: Building a Robot Tactile
Perception System
%B BOOK28
%K AI07 AI06
%A R. E. Ellis
%T A Multiple-Scale Measure of Static Tactile Texture
%B BOOK28
%K AI07
%A David Siegel
%A Inaki Garabieta
%A John M. Hollerbach
%T An Integrated Tactile and Thermal Sensor
%B BOOK28
%K AI07
%A J. Vranish
%T (I) Magneto-Inductive Skin for Robots
%B BOOK28
%K AI07
%A T. Tsumura
%T Survey of Automated Guided Vehicle Use in Japanese Factories
%B BOOK28
%K AI07 GA01 AA26 AA19
%A T. Tsumura
%A M. Hashimoto
%T Positioning and Guidance of Ground Vehicle by use of Laser and
Corner Cube
%B BOOK28
%K AI07 AA19
%A K. Nishide
%A M. Hanawa
%T Automatic Position Findings of Vehicle by means of Laser
%B BOOK28
%K AI07 AA19
%A T. Takeda
%T Automated Vehicle Guidance using Video-Camera/spot Mark System
%B BOOK28
%K AI07 AA19
%A Kenneth Salisbury
%T Teleoperator Hand Design Issues
%B BOOK28
%K AI07
%A Jeffrey R. Kerr
%T Special Grasping Configurations with Dextrous Hands
%B BOOK28
%K AI07
%A Van-Duc Nguyen
%T Constructing Force-Closure Grasps
%B BOOK28
%K AI07
%A Peter W. Taylor
%T Design and Implementation of a Multi-Variable Programmable Controller
for a 9-axis General Purpose Gripper
%B BOOK28
%K AI07
%A J. Y. S. Luh
%A Y. F. Zheng
%T Compliance and Coordinated Control of Two Moving Robots
%B BOOK28
%K AI07
%A O. Khatib
%T A Unified Approach for Motion and Force Control: The Operational Space
Formulation
%B BOOK28
%K AI07
%A J. J. E. Slotine
%T Robustness and Adaptation in Compliant Motion Control
%B BOOK28
%K AI07
%A Tsuneo Yoshikawa
%T Dynamic Hybrid Position/Force Control of Robot Manipulators:
Description of Hand Constraints and Calculation of Joint Driving Force
%B BOOK28
%K AI07
%A Friedrich Pfeiffer
%A Rainer Johanni
%T A Concept for Manipulator Trajectory Planning
%B BOOK28
%K AI07
%A Bernard Faverjon
%T Object Level Programming of Industrial Robots
%B BOOK28
%K AI07
%A Bruce H. Krogh
%A Charles E. Thorpe
%T Integrated Path Planning and Dynamic Steering Control for Autonomous
Vehicles
%B BOOK28
%K AA19
%A D. Gaw
%A A. Meystel
%T Minimum Time Navigation of an Unmanned Mobile Robot in a 2 1/2 D World
with Obstacles
%B BOOK28
%K AA19 AI09
%A A. Meystel
%A A. Guez
%A G. Hillel
%T Planning of Minimum Time Motion Among Obstacles
%B BOOK28
%K AI07 AI09 AA19
%A J. Bradley Chen
%A Ronald S. Fearing
%A Brian S. Armstrong
%A Joel W. Burdick
%T NYMPH: A Multiprocessor for Manipulation Applications
%B BOOK28
%K AI07 H03
%A Christopher G. Atkeson
%A Joe McIntyre
%T Robot Trajectory Learning Through Practice
%B BOOK28
%K AI07 AI04
%A Sanjiv Singh
%A Meghanad D. Wagh
%T Robot Path Planning Using Intersecting Convex Shapes
%B BOOK28
%K AI07 AI09
%A D. M. Lyons
%T Tagged Potential Fields: An Approach to Specification of Complex Manipulator
Configurations
%B BOOK28
%K AI07
%A B. John Oommen
%A Irwin Reichstein
%T On the Problem of Translating an Elliptic Object Through a Workspace of
Elliptic Obstacles
%B BOOK28
%K AI07
%A James H. Graham
%A John H. Meegher
%A Stephen J. Derby
%T A Safety and Collision Avoidance System for Industrial Robots
%J IEEE Transactions on Industry Applications
%V 22
%N 1
%D JAN-FEB 1986
%K AI07
%A K. Piasecki
%T On the Bayes Formula for Fuzzy Probability Measures
%J Fuzzy Sets and Systems
%V 18
%N 2
%D MAR 1986
%K O04
%A I. A. Kalynev
%T A Decentralized System for Planning and Controlling the Activity
of a Team of Mobile Robots
%J Cybernetics
%V 21
%N 4
%D JUL-AUG 1984
%P 533-538
%K AI07 AI09
%A B. R. Boyce
%T Questions Natural Language Examples in Caduceus
%J OnLine
%V 10
%N 2
%D MAR 1986
%P 54-76
%K AA01 AI01 AI02 AA14
%A B. S. Thompson
%A C. K. Sung
%T The Design of Robots and Intelligent Manipulators Using Modern Composite
Materials
%J MAG24
%P 471-482
%K AI07
%A S. M. Song
%A K. J. Waldron
%A G. L. Kinzel
%T Computer-Aided Geometric Design of Legs for a Walking Vehicle
%J MAG24
%P 587-596
%K AI07
%A N. Nandhakumar
%A J. K. Aggarwal
%T The Artificial Intelligence Approach to Pattern Recognition -
A Perspective and an Overview
%J MAG25
%P 383-390
%K AI06
%A J. H. Justice
%A D. J. Hawkins
%A G. Wong
%T Multidimensional Attribute Analysis and Pattern Recognition for Seismic
Interpretation
%J MAG25
%P 391-408
%K AI06 AA03
%A P. L. Love
%A M. Simaan
%T Segmentation of a Seismic Section Using Image Processing and Artificial
Intelligence Techniques
%J MAG25
%P 409-420
%K AI06 AA03
%A K. Y. Huang
%A K. S. Fu
%T Syntactic Pattern Recognition for the Recognition of Bright Spots
%J MAG25
%P 421-428
%K AI06
%A K. Y. Huang
%A K. S. Fu
%A T. H. Sheen
%A S. W. Cheng
%T Image Processing of Seismograms: (A) Hough Transformation for the Detection
of Seismic Patterns (B) Thinning Processing in the Seismogram
%J MAG25
%P 429-440
%K AI06 AA03
%A R. F. Kubichek
%A E. A. Quincy
%T Statistical Modeling and Feature Selection for Seismic Pattern Recognition
%J MAG25
%P 441-448
%K AI06 AA03
%A R. F. Kubichek
%A E. A. Quincy
%T Identification of Seismic Stratigraphic Traps Using Statistical Pattern
Recognition
%J MAG25
%P 449-458
%K AI06 AA03
%A H. H. Liu
%T A Rule-Based System for Automatic Seismic Determination
%J MAG25
%P 459-464
%K AI06 AA03
%A J. C. Hassab
%A C. H. Chen
%T On Constructing An Expert System for Contact Localization and Tracking
%J MAG25
%P 465-474
%K AI06 AA03 underwater acoustics
%A R. C. Hughes
%A J. N. Maksym
%T Acoustic Signal Interpretation: Reasoning with Nonspecific and Uncertain
Information
%J MAG25
%P 475-484
%K AI06 AA03 O04
%A C. H. Chen
%T Recognition of Underwater Transient Patterns
%J MAG25
%P 485-490
%K AI06
%A B. Bentz
%T Automatic Programming System for Signal Processing Applications
%J MAG25
%P 491
%K AA08 AI06
%A Shigemi Nagata
%A Tohio Matsura
%A Hidachi Endo
%T Automatic Recognition System for Logic Circuit Diagrams
%J Fujitsu Scientific and Technical Journal
%V 21
%N 4
%D AUG 1985
%P 408-420
%K AI06 AA04
%A Yishai A. Feldman
%T A Decidable Propositional Dynamic Logic with Explicit Probabilities
%J MAG26
%P 11-38
%K O04 AI11
%A David Harel
%A Dexter Kozen
%T A Programming Language for the Inductive Sets and Applications
%J MAG26
%P 118
%A R. I. Phelps
%T Artificial Intelligence-An Overview of Similarities with O. R.
%J MAG27
%P 13-20
%A M. J. Russell
%A R. K. Moore
%A M. J. Tomlinson
%T Dynamic Programming and Statistical Modeling in Automatic Speech Recognition
%J MAG27
%P 21-30
%K AI05
%A Michael Tso
%T Network Flow Models in Image Processing
%J MAG27
%P 31-34
%K AI06
%A Jon Warwick
%A Bob Phelps
%T An Application of Dynamic Programming to Pattern Recognition
%J MAG27
%P 35-40
%K AI06
%A T. J. Grant
%T Lessons for O. R. from A. I.: A Scheduling Case Study
%J MAG27
%P 41-48
%K AA05
%A V. G. Sigillito
%T Artificial Intelligence Research at the APL Research Center: An Overview
%J MAG28
%P 15-18
%A B. F. Kim
%A J. Bohandy
%A V. G. Sigillito
%T A Hierarchical Computer Vision Programming
%J MAG28
%P 19-22
%K AI06
%A B. I. Blum
%A V. G. Sigillito
%T An Expert System for Designing Information Systems
%J MAG28
%P 23-30
%K AI01 AA08
%A B. W. Hamill
%A R. L. Stewart
%T Modeling the Acquisition and Representation of Knowledge for Distributed
Tactical Decision Making
%J MAG28
%P 31-38
%K AA18 H03
%A Zuo L. Cao
%A Sung J. Oh
%A Ernest L. Hall
%T Dynamic Omnidirectional Vision for Mobile Robots
%J MAG29
%P 5-18
%K AI06 AI07
%A Wei-Chung Lin
%A Joseph B. Ross
%A Michelle Ziegler
%T Semiautomatic Calibration of Robot Manipulator for Visual Inspection Task
%J MAG29
%P 19-40
%K AI06 AI07
%A K. C. Gupta
%A G. J. Carlson
%T On Certain Aspects of the Zero Reference Position Method and its Application
to an Industrial Manipulator
%J MAG29
%P 41-58
%K AI07
%A T. H. Chiu
%A A. J. Koivo
%A R. Lewczyk
%T Experiments on Manipulator Gross Motion Using Self-tuning Controller and
Visual Information
%J MAG29
%P 59-70
%K AI07 AI06
%A A. A. Goldenberg
%A A. Bazerghi
%T Contribution to Synthesis of Manipulator Control
%J MAG29
%P 71-104
%K AI07
%A Shuhei Aida
%A Mitsuhiko Hasegawa
%A Taizo Ueda
%T Technology and Corporate Culture of Industrial Robots in Japan
%J MAG29
%P 105
%K AI07 GA01 O05
%A A. Micho
%T Developments in Expert Systems by M. J. Coombs
%J Proceedings of the IEEE
%V 74
%N 3
%D MAR 1986
%P 52
%K AT07 AI01
%A J. O. Eklundh
%A L. Kjelldahl
%T Computer Graphics and Computer Vision -- Some
Unifying and Discriminating Features
%J Computers and Graphics
%V 9
%N 4
%P 339-350
%D 1985
%K AI06
%A John Sandor
%T Octree Data Structures and Perspective Imagery
%J Computers and Graphics
%V 9
%N 4
%D 1985
%K AI06
%A Joseph Y. Halpern
%A Yoram Moses
%T Toward a Theory of Knowledge and Ignorance (Preliminary Report)
%B BOOK36
%P 459-476
%K AI16
%A Asher Peres
%T Reversible Logic and Quantum Computers
%J Physical Review A
%V 32
%D 1985
%N 6
%P 3266-3276
%A G. G. Ananiashvili
%A Z. I. Mundzhishvili
%A N. N. Bichashvili
%T Word Identification in a Natural Language in Interactive Systems
%J Soobshch. Akad. Nauk. Gruzin. SSR
%V 116
%D 1984
%N 3
%P 497-500
%K AI02
%X in Russian with English and Georgian Summaries
%A Dumitru Dumitrescu
%T Hierarchical Classification with Fuzzy Sets
%R Reprint 84-5
%I Univ. Babes-Bolyai
%C Cluj-Napoca
%D 1984
%K O04 O06
%X also appeared in Seminar of Models, Structures and Information Processing,
Cluj-Napoca
%A V. V. Krasnoproshin
%A V. A. Obratsov
%T Two-Level Models of Pattern Recognition Algorithms
%J Zh. Vychisl. Mat. i. Mat. Fiz
%V 25
%D 1985
%N 10
%P 1534-1546, 1582
%K AI06
%X (in Russian)
%A A. M. Slinko
%T Some Algebraic Operations Over Classification Algorithms and Their
Application
%J Zh. Vychisl. Mat. i. Mat. Fiz.
%V 25
%D 1985
%N 10
%P 1547-1546
%K O06
%X (in Russian)
%A Ronald R. Yager
%T Aggregating Evidence Using Quantified Statements
%J Inform. Sci
%V 36
%D 1985
%N 1-2
%P 179-206
%K O04
%A A. S. Dzyuba
%T Mean Deviation of the Frequency of Incorrect Pattern Recognition from
the Probability
%J Zh. Vychisl. Mat. i. Mat. Fiz.
%V 25
%D 1985
%N 10
%P 1547-1546
%K AI06
%X (in Russian)
%A D. M. Gabbay
%T Theoretical Foundations for Nonmonotonic Reasoning in Expert Systems
%B BOOK36
%P 439-457
%K AI15 AI16
%A Brian R. Gaines
%A Mildred L. G. Shaw
%T Logic, Algebra and Databases
%S Computers and Their Applications
%V 29
%I Ellis Horwood
%C Chichester
%K AT15 AA09
%X 294 pages ISBN 0-85312-709-3
%A H. Guggenheimer
%T Optical Flow for General Transformations
%S Polytechnic Notes on Artificial Intelligence
%V 1
%I Polytechnic Institute of New York, Division of Computer Science
%C Farmingdale, NY
%D 1985
%K AI06
%X 19 pages
%A Abraham Lempel
%A Jacob Ziv
%T Compression of Two-Dimensional Images
%B BOOK37
%P 141-154
%K AI06
%A Can Isik
%A Alexander Meystel
%T Decision Making at a Level of a Hierarchical Control for Unmanned Robot
%B BOOK28
%K AI07
%A Marcin Banachiewicz
%T MSL: Robotic Sensor/Effector Programming Language
%B BOOK28
%K AI07
%A Michael K. Brown
%T On Ultrasonic Detection of Surface Features
%B BOOK28
%K AI07 AI06
%A B. A. Auld
%A A. J. Bahr
%T A Novel Multifunctional Robot Sensor
%B BOOK28
%K AI07
%A P. P. Lin
%A P. Datseris
%T Development of a Position and Force Sensor for Robotic Applications
%B BOOK28
%K AI07
%A F. W. Sinden
%A R. A. Boie
%T A Planar Capacitive Force Sensor with Six Degrees of Freedom
%B BOOK28
%K AI07
%A William I. Bullers
%T Logic Programming for Manufacturing System Specification
%B BOOK28
%K AI10 AA26
%A Rodger Cliff
%T Meta-Architectural Issues of the ALV: Developing a Paradigm for Intelligent
System Engineering
%B BOOK28
%K AI07 AA19
%A David Payton
%T A Reflexive Control Approach to Autonomous Vehicle Navigation
%B BOOK28
%K AI07 AA19
%A Daryl T. Lawton
%A Tod Levitt
%A Jay Glicksman
%T Terrain Modeling and Recognition for an Autonomous Lank Vehicle (sic)
%B BOOK28
%K AI07 AA19 AI06
%A Don Shapiro
%A Ted Linden
%A Jay Glicksman
%A Daryl Lawton
%T Object Based Planning for an Autonomous Land Vehicle
%B BOOK28
%K AI07 AA19 AI09
%A W. W. W. Cimino
%A G. R. Pennock
%T Workspace of a Six Revolute Decoupled Robot Manipulator
%B BOOK28
%K AI07
%A Bayliss McInnis
%A Chen-Kang Liu
%T Coordinate Frames, Transformations and Inverse Functions for Joint Variables
in Robotics: A Tutorial Based Upon Classical Concepts
%B BOOK28
%K AI07
%A Dieter W. Wloka
%T Simulation of Robots Using CAD-System Robsim
%B BOOK28
%K AI07
%A Chi-hau Wau
%A Herando Valenco
%T Trajectory Feasibility Study Based on Cartesian Workspace Geometry for
Robot Manipulators
%B BOOK28
%K AI07
%A J. Korein
%A R. Taylor
%A G. Maier
%A L. Durfee
%T A Configurable Environment for Motion Programming and Control
%B BOOK28
%K AI07
%A Richard Paul
%A Hang Zhang
%T A Force and Motion Server for Distributed Robot Control
%B BOOK28
%K AI07 H03
%A D. Siegel
%A S. Narasimhan
%A K. Biggers
%A G. Gerpheide
%T Implementation of Control Methodologies on the Computational
Architecture for the Utah/MIT Hand
%B BOOK28
%K AI07
%A Robert D. Gaglianello
%A Howard P. Katseff
%T A Distributed Computing Environment for Robotics
%B BOOK28
%K AI07
------------------------------
End of AIList Digest
********************
∂10-Jun-86 0547 LAWS@SRI-AI.ARPA AIList Digest V4 #146
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Jun 86 05:46:56 PDT
Date: Mon 9 Jun 1986 21:40-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #146
To: AIList@SRI-AI
AIList Digest Tuesday, 10 Jun 1986 Volume 4 : Issue 146
Today's Topics:
Literature - Bibliography #3
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography #3
%T Large-Dictionary, On-Line Recognition of Spoken Words
%I Helsinki Univ. of Technology
%D 1983
%R PB84-214246/CAO
%K AI02
%X NTIS price, PC$11.50/MF$6.50
%T LispKit Manual. Volume 1
%I Oxford University
%D 1983
%R PB84-204874/CAO
%K T01
%X NTIS price PC $17.50/MF $17.50
%T LispKit Manual. Volume 2 (Sources)
%I Oxford University
%D 1983
%K T01
%R PB84-204882/CAO
%X NTIS price PC$17.50/MF $17.50
%T Verification of Secure Systems
%I Newcastle upon Tyne Univ.
%D 1982
%R PB84-138718/CAO
%K AA08
%X NTIS price PC$13.50/MF$13.50
%T Designing Automated Systems -- Need Skill Be Lost
%I University of Manchester Institute of Science and Technology
%D AUG 1983
%R PB84-232297/CAO
%K O05
%X NTIS price PC $9.50/MF $9.50
%T Robot Manipulators: Program Control 1975- SEPT 1984
%I NTIS
%R PB 84-875384/CAO
%K AI07 AT09
%X NTIS prices PC $40.00/MF$40.00 contains over 300 references extracted
from the INSPEC database
%T Robotic Technology: An Assessment and Forecast
%I DHR, Inc.
%C Washington, DC
%D JUL 1984
%R AD-A146 672/CAO
%K AI07
%X NTIS price PC $17.50 MF $4.50
%T Robotic Safety
%I Sandia National Labs
%C Albuquerque, NM
%D MAY 1984
%R DE84-012237/CAO
%K AI07
%X NTIS prices PC $7/MF$4.50
%A Chanderjit Bajaj
%T An Efficient Parallel Solution for
Euclidean Shortest Paths in Three Dimensions
%B BOOK28
%K O06
%A P. Morasso
%A F. A. Mussa-Ivaldi
%T The Role of Physical Constraints in Natural and Artificial Manipulation
%B BOOK28
%K AI07
%A S. Dubowsky
%A M. A. Norris
%A Z. Shiller
%T Time Optimal Trajectory Planning for Robotic Manipulators with Obstacle
Avoidance: A CAD Approach
%B BOOK28
%K AI07 AI09
%A E. Dombre
%A A. Fournier
%A C. Quaro
%A P. Borrel
%T Trends in CAD/CAM Systems for Robotics
%B BOOK28
%K AI07
%A A. L. Pai
%A K. Lee
%A K. Palmer
%A D. G. Selvidge
%T Automated Visual Inspection of Aircraft Engine Combustor Assemblies
%B BOOK28
%K AI06 AA26
%A Thomas M. Kisko
%A Eginhard J. Muth
%T Multiple-Stage Assembly of Personal Computers in Robotic Workcells
with Vision Support
%B BOOK28
%K AI07 AI06 AA26
%A E. B. Silverman
%A R. K. Simmons
%A F. E. Gelhaus
%A J. Lewis
%T Surveyor: A Remotely Operated Mobile Surveillance System
%B BOOK28
%K AI07 AI06 AA19 AA04
%A Edward N. Scheibel
%A Henry R. Busby
%A Kenneth J. Waldron
%T Design of a Mechanical Proximity Sensor
%B BOOK28
%K AI07
%A Corinne C. Ruokangas
%A Michael S. Black
%T Integration of Multiple Sensors to Provide Flexible Control Strategies
%B BOOK28
%K AI07 AI06
%A Keishi Hanahara
%A Tsugito Maruyama
%A Takashi Uchiyama
%T High-Speed Hough Transform Processor and its Applications to Automatic
Inspection and Measurement
%B BOOK28
%K AI06
%A H. D. Cheng
%T VLSI Architecture for Dynamic Time-Warp Recognition of Hand-Written Symbols
%B BOOK28
%K AI06
%A E. Hu
%A S. Mangiaracina
%A M. Peters
%A A. Harkin
%A S. Hackwood
%A G. Beni
%T Inference in Intelligent Machines: Applications to a Thermal Evaporator
%B BOOK28
%K AA05 AI01
%A Zixing Cai
%A K. S. Fu
%T Robot Planning Expert Systems
%B BOOK28
%K AI07 AI01
%A Zixing Cai
%T Some Research Works on Expert Systems in AI Course at Purdue
%B BOOK28
%K AI01 AT18
%A Jean Patrick Tsang
%A Yves Lagoude
%T Representation and Manipulation of Process Plans in Generic Expert Systems
%B BOOK28
%K AI01 AA05 AI09
%A Mark Thomas
%T ALV Reasoning Systems
%B BOOK28
%K AA19
%A David Morgenthaler
%T ALV Perception System
%B BOOK28
%K AA19 AI06
%A Jim Lowrie
%A R. Douglass
%T Autonomous Road Following
%B BOOK28
%K AI07 AA19 AI06
%A T. Kanade
%T Panel Discussion: Possibilities in ALV Research
%B BOOK28
%K AA19
%A Joseph Y. Halpern
%T Reasoning About Knowledge: An Overview
%B BOOK38
%K AA16
%T Theoretical Aspects of Reasoning About Knowledge
%A Joseph Y. Halpern
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1986
%K AA16 AT15
%X ISBN 0-934613-0404 $18.95
%A Ryszard S. Michalski
%A Jaime G. Carbonell
%A Tom M. Mitchell
%T Machine Learning: An Artificial Intelligence Approach, Volume II
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1986
%K AI04 AT15
%X ISBN 0-934613-00-1 $39.95 738 pages
%A Ronald J. Brachman
%A Hector J. Levesque
%T Readings in Knowledge Representation
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1986
%K AA16 AT15
%X ISBN 0-934613-01-X $26.95 571 pages
%A Perry L. Miller
%T A Critiquing Approach to Expert Computer Advice: Attending
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1984
%K AI01 AA01 anesthesiology O01 AT15 O01
%X ISBN 0-273-08665-0 $19.95 112 pages
%A Richard Korf
%T Learning to Solve Problems by Searching for Macro-Operators
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1985
%K AI09 AI04 AT15
%X ISBN 0-273-08690-1 $22.95
%A Paul R. Cohen
%T Heuristic Reasoning About Uncertainty: An Artificial Intelligence
Approach
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1985
%K O06 AT15
%X ISBN 0-273-08667-7 $22.95
%A Andrew J. Palay
%T Searching with Probabilities
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1985
%K AT15 chess AI03 AA17 O04
%X ISBN 0-273-08664-2 $22.95 192 pages
%A Yuichi Ohta
%T Knowledge-Based Interpretation of Outdoor Natural Color Scenes
%I Morgan Kaufmann Publishers, Inc.
%C Palo Alto, CA
%D 1985
%K AT15 AI06
%X ISBN 0-273-08673-1 $19.95
%A Susanne P. Graf
%A J. Sifakis
%T From Synchronization Tree Logic to Acceptance Model Logic
%B BOOK35
%P 128-142
%K AA08
%A Sam Kamin
%T A FASE specification of FP
%B BOOK35
%P 143-152
%K AA08
%A R. Koymans
%A R. K. Shyamasundar
%A W. P. de Roever
%A R. Gerth
%A S. Arun-Kumar
%T Compositional Semantics for Real-Time Distributed
computing
%B BOOK35
%P 167-189
%K AA08
%A F. Kroger
%T On Temporal Program Verification Rules
%J RAIRO Inform. Theor
%V 19
%D 1985
%N 3
%P 261-280
%K AA08
%A J. L. Lassez
%A Michael John Maher
%T Optimal Fixed Points of Logic Programs
%J Theoretical Computer Science
%V 39
%D 1985
%N 1
%P 15-25
%K AI10
%A Daniel Leivant
%T Partial-Correctness Theories as First-Order Theories
%B BOOK35
%K AA08 AI11
%P 190-195
%A Albert R. Meyer
%A Mitchell Wand
%T Continuation Semantics in Typed Lambda-Calculi
%B BOOK35
%K AA08
%A B. Mishra
%A E. Clarke
%T Hierarchical Verification of Asynchronous Circuits Using
Temporal Logic
%B BOOK35
%K AA04
%A Eugene C. Freuder
%T A Sufficient Condition for Backtrack-Bounded Search
%J JACM
%V 32
%N 4
%D 1985
%P 755-761
%K AI03
%A Irina Bercovici
%T Unsolvable Terms in Typed Lambda Calculus with Fixed
Point Operators
%B BOOK35
%P 16-22
%K AA08
%A Val Breazu-Tannen
%A Albert R. Meyer
%T Lambda Calculus with Constrained Types
%B BOOK35
%P 23-40
%K AA08
%A Stephen D. Brookes
%T An Axiomatic treatment of a Parallel Programming
Language
%B BOOK35
%P 41-60
%K AA08
%A A. Ya. Dikovskii
%T Solution in Linear Time of Algorithmic Problems
Connected with Synthesis of Nonlooping Programs
%J Programmirovanie
%V 1985
%N 3
%P 38-49
%K AA08
%X in Russian
%A E. Allen Emerson
%T Automata, Tableaux and Temporal Logics
%B BOOK35
%P 79-88
%K AA08
%A Nissim Francez
%A Orna Grumberg
%A Shmuel Katz
%A Amir Pnueli
%T Proving Termination of Prolog Programs
%B BOOK35
%P 89-105
%K AA08 O02
%A J. Padget
%T Current Developments in Lisp
%B BOOK39
%P 45-57
%K T01
%A A. W. Biermann
%T Algorithmic Methods in Automatic Programming
%B BOOK39
%P 124-135
%K AA08
%A G. Kreisel
%T Proof Theory and the Synthesis of Programs - Potentials and Limitations
%B BOOK39
%P 136-150
%K AA08
%A T. Coquand
%A G. Huet
%T Constructions - A Higher Order Proof System for Mechanizing Mathematics
%B BOOK39
%P 151-184
%K AA13
%A C. A. R. Hoare
%T The Mathematics of Programming
%B BOOK40
%P 1-18
%K AA08
%A G. Agha
%A C. Hewitt
%T Concurrent Programming Using Actors - Exploiting Large-Scale Parallelism
%B BOOK40
%P 19-40
%K H03
%A C. Ghezzi
%A D. Mandrioli
%A A. Tecchio
%T Program Simplification via Symbolic Interpretation
%B BOOK40
%P 116-128
%K AA08
%A J. Hsiang
%A M. Srivas
%T PROLOG Based Inductive Theorem Proving
%B BOOK40
%P 129-149
%K T02 AI11
%A J. Veenstra
%A N. Ahuja
%T Deriving Object Octree from Images
%B BOOK40
%P 196-211
%K AI06
%A Z. Manna
%A R. Waldinger
%T Deduction with Relation Matching
%B BOOK40
%P 212-224
%K AI14 AI11
%A F. V. Jensen
%A K. G. Larsen
%T Recursively Defined Domains and their Induction Principles
%B BOOK40
%P 225-245
%K AA08
%A G. Venkatesh
%T A Decision Method for Temporal Logic Based on Resolution
%B BOOK40
%P 272-289
%K AI11 AI14
%A A. Chandra
%T Who Needs to Verify Programs If You Can Test Them
%B BOOK40
%P 346
%K AA08
%A V. A. Saraswat
%T Partial Correctness Semantics for CP [Down-and]
%B BOOK40
%P 347-368
%K AA08
%A E. W. Stark
%T A Proof Technique for Rely Guarantee Properties
%B BOOK40
%P 369-391
%K AA08 AI11
%A G. Winskel
%T A Complete Proof System for SCCS with Modal Assertions
%B BOOK40
%P 392-410
%K AA08
%A R. D. Schraft
%A J. Schuler
%T Robot Applications in FMS
%B Flexible Manufacturing Systems
%E H. J. Warnecke
%E R. Steinhilper
%I Springer-Verlag
%C Berlin
%D 1985
%A A. A. Goldenberg
%A A. Bazerghi
%T Synthesis of Robot Control for Assembly Processes
%J Mechanism and Machine Theory
%V 21
%N 1
%D 1986
%P 43-62
%K AI07 AA26
%A H. J. Warnecke
%A B. Frankenhauser
%T Assembly of Flexible Parts with Industrial Robots
%J MAG30
%P 8-11
%K AI07 AA26
%A P. Nicolaisen
%T Improved Worker Safety in the Programming of Industrial Robots
%J MAG30
%P 12-14
%K AI07
%A K. H. Wurst
%A M. Bauder
%T Control Structures and Information Exchange for Linked Industrial
Robots
%J MAG30
%P 15-17
%K AI07 H03
%A Jeffrey Kerr
%A Bernard Roth
%T Analysis of Multifingered Hands
%J MAG31
%P 3-17
%K AI07
%A Mark L. Hornick
%A Bahram Ravani
%T Computer Aided Off-Line Planning and Programming of Robot Motion
%J MAG31
%P 18-31
%K AI07
%A John Hopcroft
%A Gordon Wilfong
%T Motions of Objects in Contact
%J MAG31
%P 32-46
%K AI07
%A Katsutoshi Kuribayashi
%T A New Actuator of a Joint Mechanism Using TiNi Alloy Wire
%J MAG31
%P 47-58
%K AI07
%A Jorge Angeles
%T Iterative Kinematic Inversion of General Five-Axis Robot Manipulators
%J MAG31
%P 59-70
%K AI07
%A James P. Trevelyan
%A Peter D. Kovesi
%A Michael Ong
%A David Elford
%T ET: A Wrist Mechanism without Singular Positions
%J MAG31
%P 71
%K AI07
%A K. G. Kempf
%T Manufacturing and Artificial Intelligence
%B BOOK41
%P 1-20
%K AA26
%A P. Raulefs
%T Knowledge Processing Expert Systems
%B BOOK41
%P 21-32
%K AI01
%A W. Wahlster
%T Cooperative Access Systems
%B BOOK41
%P 33-46
%K AI16
%A C. W. Burckhardt
%T The Next Generation of Robots - Increased Flexibility Through the
Use of Sensors
%B BOOK41
%P 47-50
%K AI07
%A B. Neumann
%T Vision Systems - State of the Art and Prospects
%B BOOK41
%P 51-62
%K AI06
%A G. Albers
%T Expert Systems and Knowledge Engineering - Robotics and
Intelligent Interfaces - Summary of Discussions
%B BOOK41
%P 63-66
%K AI01 AI07
%A B. Rees
%T Artificial Intelligence in a Large-Scale Enterprise - the
Experience of Digital Equipment Corporation
%B BOOK41
%P 67-76
%A D. Sagalowicz
%T Expert Systems in Service Sectors - Use of Expert Systems in 6
Sample Cases
%B BOOK41
%P 77-80
%K AA06 AI01
%A H. Thompson
%T Office Automation - A Field for Applied Artificial Intelligence
%B BOOK41
%P 81-86
%K AA06
%A C. J. Jenny
%T Requirements on Expert Systems as Seen by an Insurance Company
%B BOOK41
%P 87-96
%K AI01 AA06
%A G. Eibl
%T Current Work on Expert Systems and Natural Language Processing
at Siemens
%B BOOK41
%P 97-106
%K AI01 AI02
%A W. Sieber
%T Computer Assisted Synthesis - a Project of the Chemical Industry
%B BOOK41
%P 107-110
%K AA16 AA05
%A R. L. Langley
%T A Case Study of the Dipmeter Advisor Development
%B BOOK41
%P 111-118
%K AA03 AI01
%A S. E. Savory
%T FF - A Nixdorf Expert System for Fault Finding - An Outline Description
%B BOOK41
%P 119-128
%K AI01 AA21
%A J. F. Hery
%T A Prototype Expert System in PWR Power Plant Conducting
%B BOOK41
%P 129-134
%K AA05
%A H. Marchand
%T Knowledge Engineering in CAE - First Industrial Experiences
%B BOOK41
%P 135-142
%K AA05
%A J. C. Latombe
%T Advanced Information Processing in Robotics
%B BOOK41
%P 143-160
%K AA05
%A D. C. Schwartz
%T The Lisp Machine Architecture
%B BOOK41
%P 161-168
%K H02
%A K. Wiig
%T Market Trends in Artificial Intelligence in the United
States and Japan
%B BOOK41
%P 169-184
%K GA01 GA02 AT04
%A A. W. Pearson
%T Speculations on the Future of Knowledge Engineering in Europe I,II
%B BOOK41
%P 185-188
%K GA03
%A H. W. Husch
%A E. Staudt
%T The Influence of Artificial Intelligence on Organizational Structure and
Rationalization
%B BOOK41
%P 189-200
%K O05
%A T. Bernold
%T Possibilities and Limitations of Artificial Intelligence
%B BOOK41
%P 205-208
%K AI16
%A S. A. Cerri
%T Problems of the Infrastructure - the Bottlenecks in Research and Training
%B BOOK41
%P 209-212
%K AT19
%A B. Oakley
%T Research Policy of Administrations - Great Britain (ALVEY)
%B BOOK41
%P 213-218
%K AT19 GA03
%A H. Gallaire
%A W. Bibel
%A B. Oakley
%T Cooperation Between University, Government and Industry
%B BOOK41
%P 217-220
%K AT10
%A M. Boden
%T Artificial Intelligence and Natural Man
%B BOOK41
%P 221
%K AI16 O05
%A Benjamin W. Wah
%A Guo-Jie Li
%T Tutorial: Computers for Artificial Intelligence Applications
%I IEEE Computer Society
%D MAY 1986
%K AT15
%X list price $49.00 member price $36.00 order no CZ706
ISBN 0-8186-0706-8 648 pages
%A C. S. George Lee
%A R. C. Gonzalez
%A K. S. Fu
%T Tutorial: Robotics (Second Edition)
%I IEEE Computer Society
%D APRIL 1986
%K AI07 AT15
%X Order NO. CZ658, ISBN 0-8186-0658-4 list price $70.00
member price $39.00 744 pages
%A Rama Chellappa
%A Alexander A. Sawchuk
%T Tutorial: Digital Image Processing and Analysis
Volume 2: Digital Image Analysis
%I IEEE Computer Society
%D DEC 1985
%K AI06 AT15
%X ISBN 0-8186-0666-5 Order No. CZ666 list price
$66.00 member price $36.00, 680 pages
%A Rama Chellappa
%A Alexander A. Sawchuk
%T Tutorial: Digital Image Processing and Analysis
Volume I: Digital Image Processing
%I IEEE Computer Society
%D JUN 1985
%K AI06 AT15
%X ISBN 0-8186-0665-7 order No. CZ665
list price $66.00 member price $36.00 736 pages
%A J. Gauvin
%T Robots 10 Stresses Integration
%J MAG32
%P 53-58
%K AI07
%A J. P. Ziskovsky
%T Robots - A Piece of the Automation Pie
%J MAG32
%P 14
%K AT12 AA26 AI07
%A N. S. Rajaram
%T Artificial Intelligence: Its Impact on the Process Industries
%J MAG33
%P 33-44
%K AA20 AA16
%A G. Allmendinger
%T AI: Can Performance Match the Promise?
%J MAG33
%P 45-50
%K AA16
%A R. S. Shirley
%A D. A. Fortin
%T Developing an Expert System for Process Fault Detection
and Analysis
%J MAG33
%P 51-56
%K AA05 AA20 AA21 AI01
%A A. E. Nisenfeld
%A M. A. Turk
%T Batch Reactor Control: Could an Expert Advisor Help?
%J MAG33
%P 57
%K AA05 AA20 AI01
%A G. Spur
%A G. Seliger
%A T. V. Diep
%T Sensor Based Assembly System
%J MAG34
%P 3-8
%K AI07 AA26
%X (in German)
%A U. Vongunten
%A C. W. Burckhardt
%T Sensors for Robots - Searching, Touching, Grasping
%J MAG34
%P 9-16
%K AI07 AI06
%X (in German)
%A G. Zimmer
%A B. Hosticka
%T Integration of Sensors Using VLSI Technologies
%J MAG34
%P 17-26
%K AI07
%X (in German)
%A W. Weber
%A H. Britwieser
%T Control of Servomanipulator by the Inverse Model
%J MAG34
%P 27-36
%K AI07
%X (in German)
%A U. Ahrens
%A G. Drunk
%A A. Langen
%T Sensor Interfaces of Robot Control Systems
%J MAG34
%P 37-46
%K AI07
%X (in German)
%A G. Pritschow
%A G. Gruhler
%T Sensors for Geometry and Processing of Sensor Data for Automatic Robot
Programming
%J MAG34
%P 47-54
%K AI07 AI06
%X (in German)
%A T. J. Doll
%T Non-Tactile Sensors for Robots and Planning of Sensor Application
%J MAG34
%P 55
%K AI07 AI06
%X (in German)
%A M. C. Wanner
%T Industrial Robots in Japan in 1984
%J MAG34
%P 54
%K AI07 GA01
%X (in German)
%T VAL-II, a New Robot Programming Language
%J MAG34
%P 63
%K AI07
%X (in German)
%A L. A. Wallis
%A A. Bendell
%T Human Factors and Sampling Variation in Graphical Identification and
Estimation for the Weibull Distribution
%J Reliability Engineering
%V 13
%N 3
%D 1985
%K AI08
%A C. A. J. Braganca
%A P. Sholl
%T VAL-II, A Language for Hierarchical Control of a Robot-Based Automated
Factory
%J MAG35
%P 265-272
%K AI07 AA26
%A P. T. Rayson
%T A Review of Expert Systems Principles and Their Role in Manufacturing
Systems
%J MAG35
%P 279
%K AI07 AA26 AT08
%A W. E. Red
%A Hung-Viet Truong-Cao
%T Configuration Maps for Robot Path Planning in Two Dimensions
%J MAG36
%P 292-298
%K AI07 AI09
%A O. Z. Maimon
%A S. Y. Nof
%T Coordination of Robots Sharing Assembly Tasks
%J MAG36
%P 299-307
%K AI07 AA26
%A S. N. Singh
%A A. A. Schy
%T Robust Trajectory Following Control of Robotic Systems
%J MAG36
%P 308-315
%K AI07
%A A. J. Koivo
%T Self-Tuning Manipulator Control in Cartesian Base Coordinate Systems
%J MAG36
%P 316-323
%K AI07
%A G. W. Kohler
%T Power Manipulators
%J MAG37
%P 195-202
%K AI07
%A U. Ahrens
%T Possibilities and Problems in Application of Airborne Ultrasonic
Sensors in Assembly Systems and Handling Systems
%J MAG37
%P 203-210
%K AI06 AI07 AA26
%A D. Wloka
%A K. Blug
%T Simulation of Robot Dynamics with the Method of Kane
%J MAG37
%P 211-216
%K AI07
%A M. C. Wanner
%A K. Baumeister
%A G. W. Kohler
%A H. Walze
%T Robotics in Civil Engineering
%J MAG37
%P 227-236
%K AI07 AA05
%A C. Blume
%A B. Heck
%T Analysis of Inherent Concurrency in High Level Programming Languages
for Industrial Robots
%J MAG37
%P 237-230
%K AI07 H03
%A P. Nitezki
%T Experience with SPIDER- A Portable Subroutine Library for Image Processing
%J MAG37
%P 231-233
%K AI06
%A R. Dillmann
%A M. C. Wanner
%T The Esprit Project in the Area of Robotics
%J MAG37
%P 234
%K GA03 AI07
%A Robert L. Stewart
%A Douglas R. Ousborne
%T An Experimental Expert Weapon Detection System
%J Naval Engineers Journal
%V 98
%N 3
%D MAY 1986
%P 24-34
%K AA18 AI01
%A M. Raghaven
%A S. I. Mehta
%A U. Pathie
%A K. V. Vaishampayan
%T Mechanical Design of an Industrial Robot
%J Indian Journal of Technology
%V 24
%N 3
%D MAR 1986
%P 149-152
%K AI07
%A Mark Wynott
%T Close-Up: Artificial Intelligence Provides Real-Time Control of Material
Handling Process
%J Industrial Engineering
%V 18
%N 4
%D APR 1986
%P 34-46
%K AA26 AA05 O03
%A M. C. Golumbic
%A M. Markovich
%A S. Tsur
%A U. J. Schild
%T A Knowledge Based Expert System for Student Advising
%J IEEE Transactions on Education
%V 29
%N 2
%D MAY 1986
%P 120-124
%K AA06 AI01
%A A. B. Ritter
%A W. Braun
%A A. Stein
%A W. Duran
%T Visualization of the Coronary Microcirculation Using Digital Image
Processing
%J Computers in Biology and Medicine
%V 15
%N 6
%D 1985
%P 361-375
%K AI06 AA10
%A D. Umphress
%A G. Williams
%T Identity Verification Through Keyboard Characteristics
%J MAG38
%P 263-274
%K AI06
%A R. J. Baron
%T Visual Memories and Mental Images
%J MAG38
%P 275-312
%K AI08
%A B. A. Julstrom
%A R. J. Baron
%T A Model of Mental Imagery
%J MAG38
%P 313
%K AI08
%A J. Bajon
%A M. Cattoen
%A S. D. Kim
%T A Concavity Characterization Method for Digital Objects
%J Signal Processing
%V 9
%N 3
%D OCT 1985
%K AI06
------------------------------
End of AIList Digest
********************
∂10-Jun-86 0910 LAWS@SRI-AI.ARPA AIList Digest V4 #147
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 10 Jun 86 09:10:30 PDT
Date: Mon 9 Jun 1986 21:46-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #147
To: AIList@SRI-AI
AIList Digest Tuesday, 10 Jun 1986 Volume 4 : Issue 147
Today's Topics:
Literature - Bibliography #4
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography #4
%A V. M. Kushkov
%T Improving the Reliability of Flexible Manufacturing Systems
%J MAG19
%K AA26
%A V. N. Abrarov
%T Investigation of the Limiting Characteristics of Electrostatic
Gripping Devices in Robot Technology
%J MAG19
%K AI07
%A D. R. Kritskii
%A V. Ya Naimanov
%T A Simulation Model for Assessing the Positioning Time of a Robot
%J MAG19
%K AI07
%P 45-49
%A V. G. Ostapchuk
%T The Use of Image Recognition Systems for Automatic Workpiece Gauging
%J MAG19
%K AI07
%P 40-41
%A Kit Grindley
%T Applying Expert Principles to Computer Systems Development
%J MAG20
%K AI01 AA08
%P 10-14
%A Russell Jones
%T European Expert Systems Projects for Systems Developers
%J MAG20
%K AA08 AI01 GA03
%P 15-17
%A Sol J. Greenspan
%A Alexander Borgida
%A John Mylopoulos
%T A Requirements Modeling Language and its Logic
%J MAG21
%P 9-24
%A Jose Fiadeiro
%A Amilcar Sernadas
%T The INFOLOG Linear Tense Propositional Logic of Events and
Transactions
%J MAG21
%P 61-86
%A S. E. Fahlman
%T Parallel Processing in Artificial Intelligence
%J Parallel Computing
%V 2
%N 3
%D DEC 1985
%P 283-286
%K H03
%A F. Nielson
%T Abstract Interpretation of Denotational Definitions (A Survey)
%B BOOK29
%P 1-20
%K AA08
%A E. A. Emerson
%A C. L. Lei
%T Temporal Reasoning Under Generalized Fairness Constraints (Extended
Abstract)
%B BOOK29
%K AA08
%P 21-36
%A M. A. N. Abdallah
%T Ions and Local Definitions in Logic Programming
%B BOOK29
%P 73-86
%K AI10
%A Adrian Walker
%T Knowledge Systems: Principle and Practice
%B MAG22
%P 2-13
%K AT08
%A R. L. Ennis
%A J. H. Griesmer
%A S. J. Hong
%A M. Karnaugh
%A J. K. Kastner
%A D. A. Klein
%A K. R. Milliken
%A M. I. Schor
%A H. M. Van Woerkom
%T A Continuous Real-Time Expert System for Computer Operations
%J MAG22
%P 14-28
%K AA08 O03
%A P. Hirsch
%A W. Katake
%A M. Meier
%A S. Snyder
%A R. Stillman
%T Interfaces for Knowledge-Base Builders' Control Knowledge
and Application-Specific Procedure
%J MAG22
%P 29-38
%A Franz Guenthner
%A Hubert Lehmann
%A Wolfgang Schonfeld
%T A Theory for the Representation of Knowledge
%J MAG22
%P 39-56
%A John F. Sowa
%A Eileen C. Way
%T Implementing a Semantic Interpreter Using Conceptual Graphs
%J MAG22
%P 57-69
%A Jean Fargues
%A Marie-Claude Landau
%A Anne Dugourd
%A Laurent Catach
%T Conceptual Graphs for Semantics and Knowledge Processing
%J MAG22
%P 70-79
%A Ghica van Emde Boas
%A Peter van Emde Boas
%T Storing and Evaluating Horn-Clause Rules in a Relational
Database
%J MAG22
%P 80-92
%K AA09 AI10
%A William F. Eddy
%A Gabriel P. Pei
%T Structures of Rule-Based Belief Functions
%J MAG22
%P 93-101
%K AI01
%A H. Diel
%A N. Lenz
%A H. M. Welsch
%T An Experimental Computer Architecture Supporting Expert
Systems and Logic Programming
%J MAG22
%P 102
%K AI01 AI10
%A T. Williams
%T Image Processors Allow Hardware Reconfiguration to Match
Applications
%B MAG23
%P 46-54
%K AI06
%A W. E. Suydam
%T AI Becomes the Soul of the New Machines
%J MAG23
%P 55-62
%A D. A. Gewirtz
%T Artificial Intelligence As a System Component
%J MAG23
%P 63-64
%A A. D. Jacobson
%T The Challenges Facing Expert Systems Technology
%J MAG23
%P 65-67
%A R. Moore
%T AI Must Cater to Nonexperts
%J MAG23
%P 68-76
%K O01
%A P. Haley
%A C. Williams
%T Expert System Development Requires Knowledge Engineering
%J MAG23
%P 83-90
%K AI01
%A R. D. Schraft
%A J. Schuler
%T Robot Applications in FMS
%B Flexible Manufacturing Systems: International Trends
in Manufacturing Technology
%E H. J. Warnecke
%E R. Steinhilper
%I Springer Verlag
%K AA26 AI07
%X $54.00 ISBN 0-903608-95-2
%A B. Buchberger
%T Basic Features and Development of the Critical Pair Completion Procedure
%B BOOK30
%K AI14
%P 1-45
%A H. T. Zhang
%A J. L. Remy
%T Contextual Rewriting
%B BOOK30
%K AI14
%P 46-62
%A R. V. Book
%T Thue Systems as Rewriting Systems
%B BOOK30
%K AI14
%P 63-94
%A F. Otto
%T Deciding Algebraic Properties of Monoids Presented by Finite Church-Rosser
Thue Systems
%B BOOK30
%K AI14
%P 95-106
%A S. S. Cosmadakis
%A P. C. Kanellakis
%T Two Applications of Equational Theories to Database Theory
%B BOOK30
%K AI14 AA09 AI11
%P 107-123
%A N. D. Jones
%A P. Sestoft
%A H. Sondergaard
%T An Experiment in Partial Evaluation - The Generation of a Compiler Generator
%B BOOK30
%K AA08
%P 124-140
%A P. Rety
%A C. Kirchner
%A H. Kirchner
%A P. Lescanne
%T Narrower: A New Algorithm for Unification and its Application to Logic
Programming
%B BOOK30
%K AI10
%P 141-157
%A H. Ait-Kaci
%T Solving Type Equations by Graph Rewriting
%B BOOK30
%K AI14 AA08
%P 158-179
%A N. Dershowitz
%T Termination
%B BOOK30
%K AI14
%P 180-224
%A M. Rusinowitch
%T Path of Subterms Ordering and Recursive Decomposition Ordering
Revisited
%B BOOK30
%K AI14
%P 225-240
%A L. Bachmair
%A D. A. Plaisted
%T Associative Path Orderings
%B BOOK30
%K AI14
%P 241-254
%A D. Detlefs
%A R. Forgaard
%T A Procedure for Automatically Proving the Termination of a Set of Rewrite
Rules
%B BOOK30
%K AI14 AI11
%P 255-270
%A C. Choppy
%A C. Johnen
%T Petrireve: Proving Petri Net Properties with Rewriting Systems
%B BOOK30
%K AI14 AI11 AA08
%P 271-286
%A S. Porat
%A N. Francez
%T Fairness in Term Rewriting Systems
%B BOOK30
%K AI14
%P 287-300
%A J. Hsiang
%T Two Results in Term Rewriting Theorem Proving
%B BOOK30
%K AI14 AI11
%P 301-324
%A L. Fribourg
%T Handling Function Definitions Through Innermost Superposition and
Rewriting
%B BOOK30
%K AI14 AI11 AA08
%P 325-344
%A A. Kandri-Rody
%A D. Kapur
%A P. Narendran
%T An Ideal-Theoretic Approach to Word Problems and Unification Problems over
Finitely Presented Commutative Algebras
%B BOOK30
%K AI14 AI11
%P 345-364
%A K. Yelick
%T Combining Unification Algorithms for Confined Regular Equational Theories
%B BOOK30
%K AI14 AI11
%P 365-380
%A A. Fortenbacher
%T An Algebraic Approach to Unification Under Associativity and Commutativity
%B BOOK30
%K AI14 AI11
%P 381-397
%A S. Arnborg
%A E. Tiden
%T Unification Problems with One-Sided Distributivity
%B BOOK30
%K AI14 AI11
%P 398-406
%A P. W. Purdom
%A C. A. Brown
%T Fast Many-to-One Matching Algorithms
%B BOOK30
%K AI14 AI11
%P 407-416
%A D. Benanav
%A D. Kapur
%A P. Narendran
%T Complexity of Matching Problems
%B BOOK30
%K AI14 AI11
%P 417-429
%A M. Zaionc
%T The Set of Unifiers in Typed Lambda-Calculus as Regular Expression
%B BOOK30
%K AI14 AI11 AA08
%P 430
%A Mohan M. Trivedi
%A John Gilmore
%T Guest Editorial: Applications of AI
%J MAG24
%P 331-332
%K AI16
%A David M. McKeown
%A Clifford A. McVay
%A Bruce D. Lucas
%T Stereo Verification in Aerial Image Analysis
%J MAG24
%P 333-346
%K AI06
%A W. A. Perkins
%A T. J. Laffey
%A T. A. Nguyen
%T Rule-based Interpreting of Aerial Photographs Using the Lockheed
Expert System
%J MAG24
%P 356-362
%K AI01 AI06 AA18 T03
%A Leonard P. Wesley
%T Evidential Knowledge-Based Computer Vision
%J MAG24
%P 363-379
%K AI06
%A Amar Mitiche
%A J. K. Aggarwal
%T Multiple Sensor Integration/Fusion Through Image
Processing: a Review
%J MAG24
%P 380-386
%K AI06 AT08
%A S. M. Haynes
%A Ramesh Jain
%T Event Detection and Correspondence
%J MAG24
%P 387-393
%K AI06
%A Robert N. Nelson
%A Tzay Y. Young
%T Determining Three-Dimensional Object Shape and Orientation from
a Single Perspective View
%J MAG24
%P 394-401
%K AI06
%A Arthur V. Forman
%A J. Ronald Clark
%T Robot Vision System for Depalletizing Steel Cylindrical Billets
%J MAG24
%P 402-408
%K AI06 AI07 AA26
%A Larry S. Davis
%A Todd R. Kushner
%A Jacqueline J. Le Moigne
%A Allen M. Waxman
%T Road Boundary Detection for Autonomous Vehicle Navigation
%J MAG24
%P 409-414
%K AA19 AI06 AI07
%A John F. Gilmore
%A Antonio C. Semico
%T Knowledge-Based Approach Toward Developing an Autonomous
Helicopter System
%J MAG24
%P 415-427
%K AA19
%A Julius T. Tou
%T Software Architecture of Machine Vision for Roving Robots
%J MAG24
%P 428-435
%K AI06 AI07
%A George R. Cross
%T Tools for Constructing Knowledge-Based Systems
%J MAG24
%P 436-444
%A Viswanath Subramanian
%A Gautam Biswas
%A James C. Bezdek
%T Document Retrieval Using a Fuzzy Knowledge Based System
%J MAG24
%P 445-455
%K AA14 O04
%A S. L. Hardt
%A J. Rosenberg
%T Developing an Expert Ship Message Interpreter: Theoretical and
Practical Conclusions
%J MAG24
%P 456-464
%K AI01
%A S. W. Thomas
%A R. L. Griffith
%A W. R. McDonald
%T Improvements in Avalanche-Transistor Sweep Circuitry for Electro-Optic
Streak Cameras
%J MAG24
%P 465-470
%K AI06
%A R. W. Austin
%T Spectral Dependence of the Diffuse Attenuation Coefficient of Light in
Ocean Waters
%J MAG24
%P 471-479
%K AI06
%A R. L. Cohoon
%A C. S. Wright
%A W. J. Wiley
%A Peter S. Guilfoyle
%A E. L. Ligeti
%T Acousto-Optic Convolver for Digital Pulses
%J MAG24
%P 480-489
%K AI06
%A O. Kafri
%A B. Ashkenazi
%T Line Thinning Algorithm for Nearly Straight Moire Fringes
%J MAG24
%P 495-498
%K AI06
%A John A. Saghri
%A Hsieh S. Hou
%A Andrew G. Tescher
%T Personal Computer Based Image Processing with Halftoning
%J MAG24
%P 499-504
%K AI06 H01
%A N. S. Kopeika
%A A. N. Sidman
%A Its'hak Dinstein
%A C. Tarnasha
%A R. Amir
%A Y. Biton
%T How Weather Affects Seeing Through the Atmosphere
%J MAG24
%P 505
%K AI06
%A Quan Quan Gao
%T Prolog-F System
%J Chinese Journal of Computing
%V 8
%D 1985
%N 2
%P 152-155
%K T02
%X (in Chinese)
%A V. N. Vapnik
%A T. G. Glazkova
%A V. A. Koscheev
%A A. I. Mikhal'skii
%A A. Ya Chervonenkis
%T Algorithms and Programs for Reconstructing Dependencies
%I Nauka
%D 1984
%X (in Russian)
%A Bernd Kramer
%T Stepwise Construction of Nonsequential Software Systems
Using a Net-Based Specification Language
%B Advances in Petri Nets
%V 188
%S Lecture Notes in Computer Science
%I Springer-Verlag
%C Berlin-Heidelberg-New York
%D 1985
%P 307-330
%K AA08
%A U. W. Lipeck
%T Specifying Admissibility of Dynamic Database
Behavior Using Temporal Logic
%B Information Systems: Theoretical and Formal Aspects
%P 145-157
%D 1985
%I North-Holland
%C Amsterdam-New York
%K AA08 AI10
%A Udo Pletat
%T A Graph Theoretic Semantics for Semantic Data Models
%B Information Systems: Theoretical and Formal Aspects
%P 95-108
%D 1985
%I North-Holland
%C Amsterdam-New York
%K AI16
%A L. I. Rozonoer
%T Supplement to the Paper: "Proving Contradictions in Formal Theories. I"
%J Avtomat. i Telemekh.
%D 1985
%N 4
%P 172
%K AI11
%X (in Russian)
%A L. I. Rozonoer
%T Proving Contradictions in Formal Theories
%J Automat. Remote Control
%V 44
%D 1983
%N 6
%P 781-790
%K AI11
%A V. A. Antonyuk
%A N. V. Bulygina
%A P. Yu Pyt'ev
%T Methods of Morphological Analysis in a Problem of Distinguishing
Objects
%B BOOK31
%P 83-91
%K AI06
%X (in Russian)
%A V. A. Bazhanov
%T Godel's Theorem and the Problem of the Relation Between Natural
and Artificial Intelligence
%B BOOK32
%P 49-59
%K AI16
%X (in Russian)
%A Henryk Biesiada
%T Modification of Methods for Computing the Growth Function of a
Developmental System in the Case of a Complex Start Chain
%J Podstawy Sterowania
%V 15
%D 1985
%N 1-2
%P 113-135
%A Agneta Eriksson
%A Anna Lena Johansson
%T Computer Based Synthesis of Logic Programs
%B BOOK33
%P 105-115
%K AA08 AI10 O02
%A T. I. Ibragimov
%T Cybernetics and Natural Languages
%B BOOK32
%P 59-73
%K AI02
%X (in Russian)
%A I. M. Israilov
%T Formulas for Calculating Estimates in Algorithms with
Complex Systems of Support Sets
%J Zh. Vychisl. Mat. i Mat. Fiz.
%V 25
%D 1985
%N 8
%P 1268-1272
%K AI16
%X (in Russian)
%A D. I. Panyushev
%A D. K. Tkhabisimov
%A D. A. Usikov
%A N. G. Chebotarev
%T Mathematical Bases for the Construction of Systems
of Invariant Criteria in a Pattern Recognition Problem
%B BOOK31
%P 11-23
%K AI06
%X (in Russian)
%A Marco Bellia
%A Pierpaolo Degano
%A Giorgio Levi
%A Enrico Dameri
%A Maurizio Martelli
%T Applicative Communicating Processes in First Order Logic
%B BOOK33
%P 1-14
%K AA08 AI11
%A Ernesto J. F. Costa
%T Automatic Program Transformation Viewed as Theorem Proving
%B BOOK33
%P 37-46
%K AA08 AI11
%A Yu. P. Pyt'ev
%T Problems of Morphological Analysis of Images
%B BOOK31
%P 41-83
%K AI06
%A E. L. Lawler
%T The Traveling Salesman Problem
%I John Wiley and Sons
%C Somerset, NJ
%K AT15
%X $64.95 1-90413-9 465 pages
%A J. Gold
%T Do-It-Yourself Expert Systems
%J Computer Decisions
%V 18
%N 2
%D JAN 14, 1986
%K AI01
%A D. Harel
%A R. Sherman
%T Propositional Dynamic Logic of Flowcharts
%J Information and Control
%V 64
%N 1-3
%D JAN-MAR 1985
%P 119-135
%K AA08 AI11
%A Esko Ukkonen
%T Algorithms for Approximate String Matching
%J Information and Control
%V 64
%N 1-3
%D JAN-MAR 1985
%P 100-118
%A E. M. Scharf
%A N. J. Mandic
%T The Application of a Fuzzy Controller to the Control of a
Multi-Degree-of-Freedom Robot Arm
%B BOOK34
%P 41-62
%K AI07 O04
%A O. Yagishita
%A O. Itoh
%A M. Sugeno
%T Application of Fuzzy Reasoning to the Water Purification
Process
%B BOOK34
%P 19-40
%K O04 AA05
%A M. Sugeno
%A K. Murakami
%T An Experimental Study on Fuzzy Parking Control Using
a Model Car
%B BOOK34
%P 125-138
%K O04 AA19
%A K. Matsushima
%A H. Sugiyama
%T Human Operator's Fuzzy Model in Man-Machine System with a
Nonlinear Controlled Object
%B BOOK34
%P 175-186
%K O04 AI08
%A H. Zhao
%A M. C. Ma
%T The Application of Fuzzy and Artificial Intelligence Methods
in the Building of a Blast Furnace Smelting Process Model
%B BOOK34
%P 241
%K O04 AA05
%A Immo O. Kerner
%T Logical Programming. History and Present Usage
%J Elektron. Informationsverarb. Kybernet
%V 21
%D 1985
%N 7-8
%P 355-361
%K AI10
%A B. J. Oommen
%A M. A. L. Thathachar
%T Multiaction Learning Automata Possessing Ergodicity of the Mean
%J Information Science
%V 35
%N 3
%P 183-198
%K AI12 AI04
%A Ewa Orlowska
%T Logic Approach to Information Systems
%J Fund. Inform.
%V 8
%D 1985
%N 3-4
%P 359-378
%K AA08 AI10
%A Wen Jun Wu
%T Some Remarks on Mechanical Theorem-proving in Elementary Geometry
%J Acta Math. Sci (English Ed.)
%V 3
%D 1983
%N 4
%P 357-360
%K AI11 AA13
%A Vladimir Batagelj
%T Notes on the Dynamic Clusters Method
%B IV Conference on Applied Mathematics
%P 139-146
%D 1985
%X Univer. Split, Split 1985
%A Mirko Khvanek
%T A Note on the Computational Complexity of Hierarchical Overlapping
Clustering
%J Apl. Mat.
%V 30
%D 1985
%N 6
%P 453-460
%A E. Yu Kandrashina
%T Means of Representing Temporal Information in Knowledge Bases
%J Engineering Cybernetics
%V 22
%D 1985
%N 6
%P 89-95
%K AI16
%A George J. Klir
%T Architecture of Systems Problem Solving
%I Plenum Press
%C New York-London
%D 1985
%K AT15
%X 540 pages ISBN 0-306-41867-3
%A D. V. Kochetkov
%T Construction of Correct Pattern Recognition Algorithms in Quasicomplete
Models
%J Trudy Inst. Vychisl. Mat. Akad. Nauk Gruzin SSR
%V 25
%D 1985
%N 2
%P 35-44
%K AI06
%X (in Russian)
%A V. E. Vol'fengagen
%A V. Ya Yatsuk
%T Models and Methods for Representing Knowledge Algebra on Knowledge-
Manipulation Frames
%J Engineering Cybernetics
%V 22
%D 1985
%N 6
%P 79-88
%K AI16
%A V. V. Zadorozhnyi
%T Algorithms for Calculating Estimates for Pattern Recognition
%J Kibernetika (Kiev)
%D 1985
%V 1
%P 103-107
%K AI06
%X (in Russian with English Summary)
%A A. N. Chetaev
%T Neural Nets and Markov Chains
%I Nauka
%C Moscow
%D 1985
%K AI12 AT15
%X (in Russian with English Summary)
%A Irwin R. Goodman
%A Hung T. Nguyen
%T Uncertainty Models for Knowledge Based Systems. A Unified
Approach to the Measurement of Uncertainty
%I North Holland
%C Amsterdam-New York
%D 1985
%K AT15 O01
%A Eugene C. Freuder
%T A Sufficient Condition for Backtrack-Bounded Search
%J JACM
%V 32
%D 1985
%N 4
%P 755-761
%K AI03
%A J. L. Lassez
%A Michael John Maher
%T Optimal Fixed-Points of Logic Programs
%J Theoretical Computer Science
%V 39
%N 1
%D 1985
%P 15-25
%K AI10
%A Rama Chellappa
%A Shankar Chatterjee
%T Classification of Textures using Gaussian Markov Random Fields
%J IEEE Transactions Acoust. Speech Signal Process.
%V 33
%D 1985
%N 4
%P 959-963
%K AI06
%A I. N. Krupka
%A Yu. I. Petunin
%A M. Yu Petunina
%T Determination of the Similarity of Two Graphic Images by Means of the
Hausdorff Distance
%J Kibernetika (Kiev)
%D 1985
%N 3
%P 118-120
%K AI06
%X (in Russian with English Summary)
%A V. A. Nepomnyaschii
%T Elimination of Loop Invariants in Program Verification
%J Programmirovanie
%D 1985
%N 3
%P 3-13
%K AA08
%X (in Russian)
%A Van Nguyen
%A Alan Demers
%A David Gries
%A Susan Owicki
%T Behavior: a Temporal Approach to Process Modeling
%B BOOK35
%P 237-254
%K AA08
%A Van Nguyen
%T The Incompleteness of Misra and Chandy's Proof Systems
%J Information Processing Letters
%V 21
%D 1985
%N 2
%P 93-96
%K AA08
%A Rohit Parikh
%A Ashok Chandra
%A Joe Halpern
%A Albert Meyer
%T Equations Between Regular Terms and an Application
to Process Logic
%J SIAM J. Comput.
%V 14
%D 1985
%N 4
%P 935-942
%K AI10
%A Alex Pelin
%T A Formalism for Treating Equivalence of Recursive Procedures
%J RAIRO Inform. Theor.
%V 19
%D 1985
%N 3
%P 293-313
%K AI10
%A Paul Walton Purdom
%A Cynthia A. Brown
%T The Pure Literal Rule and Polynomial Average Time
%J SIAM J. Comput.
%V 14
%D 1985
%N 4
%P 943-953
%K AI14
%A I. Sain
%T The Reasoning Powers of Burstall's (Modal Logic) and
Pnueli's (Temporal Logic) Program Verification Methods
%B BOOK35
%P 302-319
%K AA08 AI10 AI11
%A A. E. Serik
%T Some Exact and Approximate Algorithms for Solution of Some
Sequencing Problems with Constraints
%J Kibernetika (Kiev)
%D 1985
%N 3
%P 29-33
%K AI16
%X (Russian with English Summary)
%A Kurt Sieber
%T A Partial Correctness Logic for Procedures
%B BOOK35
%P 320-342
%K AA08
%A A. E. K. Sobel
%A N. Soundararajan
%T A Proof System for Distributed Processes
%B BOOK35
%P 343-358
%K AA08
%A Robert S. Streett
%T Fixpoints and Program Looping:
Reductions from the Propositional Mu-Calculus into
Propositional Dynamic Logics of Looping
%B BOOK35
%P 359-372
%K AA08 AI11
%A S. F. Shapiro
%T Electronic Assembly Becoming Dependent on Robotic Tools
%J Computer Design
%V 25
%N 3
%D FEB 1, 1986
%K AI07 AA04 AA26
%A Douglas C. Willson
%T Current Research, Applications Foreshadow AI's Future Impact
%J Data Management
%V 24
%N 2
%D FEB 1986
%P 18-19
%A Paul J. Besl
%A Ramesh C. Jain
%T Invariant Surface Characteristics for 3D Object Recognition in Range
Images
%J Computer Vision, Graphics and Image Processing
%V 33
%N 1
%D JAN 1986
%P 33-80
%K AI06
%A Marloes L. P. Van\ Lierop
%T Geometrical Transformations on Pictures Represented by Leaf Codes
%J Computer Vision, Graphics and Image Processing
%V 33
%N 1
%D JAN 1986
%P 81-98
%K AI06
%A Eric P. Krotkov
%T Visual Hyperacuity: Representation and Computation of High Precision
Position Information
%J Computer Vision, Graphics and Image Processing
%V 33
%N 1
%D JAN 1986
%K AI06
%A G. Eichmann
%A L. M. Royfman
%T New Algorithm for Transient Suppression for Images Due to Incomplete or
Partial Boundary Data
%J IEE Proceedings G: Electronic Circuits
%V 133
%N 1
%D FEB 1986
%P 27-29
%K AI06
%A L. F. Huggins
%A J. R. Barrett
%A D. D. Jones
%T Expert Systems - Concepts and Opportunities
%J Agricultural Engineering
%D JAN-FEB 1986
%V 67
%N 1
%P 21-23
%K AA23 AA05 AI01
%A D. A. Lowther
%A C. M. Saldhana
%A G. Choy
%T The Applications of Expert Systems to CAD in Electromagnetics
%J IEEE Transactions on Magnetics
%V 21
%N 6
%D 1985
%P 2559-2563
%K AA04 AI01
------------------------------
End of AIList Digest
********************
∂12-Jun-86 0201 LAWS@SRI-AI.ARPA AIList Digest V4 #148
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 12 Jun 86 02:01:21 PDT
Date: Wed 11 Jun 1986 23:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #148
To: AIList@SRI-AI
AIList Digest Thursday, 12 Jun 1986 Volume 4 : Issue 148
Today's Topics:
Queries - Tools for RSX & Organic Chemistry &
Russian Paper on Sequencing Problems & Scheme &
Neural Nets & Complexity Theory &
Creativity and Analogy & AI and Education
----------------------------------------------------------------------
Date: Mon 9 Jun 86 09:22:49-PDT
From: JPENNINO@USC-ECL.ARPA
Subject: TOOLS FOR RSX??
Does anyone know of any AI tools/languages that run under RSX other
than the two versions of LISP in DECUS?
------------------------------
Date: Tue, 10 Jun 86 13:24 EDT
From: John Batali <BATALI@OZ.AI.MIT.EDU>
Subject: AI & Organic Chemistry
I'd like to find out about any AI projects attempting to hack organic
chemistry. I would be interested in information about systems which do
inorganic and biochemistry also. I know about DENDRAL. Please reply to
me and I will collect results and send them to the list.
John Batali
BATALI@OZ.AI.MIT.EDU
------------------------------
Date: Tue, 10 Jun 86 11:10 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Russian to English translation
I would like to obtain an English translation of:
"Some exact and Approximate Algorithms for Solution of Some
Sequencing Problems with Constraints", Kibernetika (Kiev), 1985, #3,
pp 29-33. The paper is in Russian with an English summary. I do not
have a copy of the paper. Any help will be greatly appreciated.
Uttam Mukhopadhyay
Computer Science Dept.
GM Research Labs
Warren, MI 48090-9057
(313)575-2105
Net address: mukhop@gmr.com
------------------------------
Date: Tue 10 Jun 86 08:29:38-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: Scheme, anyone?
I have been asked to give advice regarding the appropriateness of using
Scheme for a development effort in Intelligent Computer Assisted Instruction.
Although this is partly a research effort also, a clear goal is testing
and installing the software in high school classrooms. The hardware available
to this project is Hewlett-Packard workstations.
Admittedly I know little about Scheme. However, my initial reaction is that
no advantages Scheme could provide over CommonLisp could offset the
disadvantages of using a language without a large user base for the
purposes of software development and installation. CommonLisp
promises to offer portability (of course there are still problems, e.g.,
graphics) and a large user community, and has other obvious advantages
because of the general acceptance of Lisp in the U.S. AI community.
I'd appreciate some feedback from people that are familiar with Scheme,
particularly if you have used it for developing a large AI-based system.
Can any argument be presented to justify the resources necessary to train
people in Scheme and build and maintain a system in this UnCommonLispLike
language? In other words, what is so special about Scheme compared to
CommonLisp?
Mark
------------------------------
Date: 10-Jun-1986 1436
From: cherubini%cookie.DEC@decwrl.DEC.COM
Subject: Neural Nets
I am interested in doing some modelling using neural nets. Before
building the software system myself, I would like to know of any
available public domain software systems which implement neural
nets, Boltzmann machines, etc. Any pointers would be appreciated.
Ralph Cherubini
Digital Equipment Corporation
------------------------------
Date: 9 Jun 1986 1735-EDT
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: connectionism/complexity theory
June 2nd's issue of Business Week contained an article about
connectionist (parallel distributed processing) models. The article
mentioned a Bell Labs project which set up a neural network that solved
the traveling salesman problem approximately but quickly. I'm interested
in articles or other information about this project or any other project
linking connectionism with complexity theory, i.e., connectionist
approaches to graph problems or models which solve other "classical"
algorithm design problems.
Bruce Krulwich
ARPAnet: KRULWICH@C.CS.CMU.EDU
Bitnet: BK0A%TC.CC.CMU.EDU@CU20B
------------------------------
Date: Tue, 10 Jun 86 14:04 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Creativity and Analogy
At a recent talk in Ann Arbor, Roger Schank observed/implied that
a distinct characteristic of many creative people is the ability to
analogize. My understanding of analogizing is to define transformations
between two domains so that entities and relationships in one domain
can be mapped into corresponding entities and relationships in the
other domain. It appears that the greater the disparity in the "physics"
of the two domains, the higher is the creative effort demanded.
Not all transformations produce interesting results. Good analogies
must be interesting from the perspective of the particular creative
activity.
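[One toy way to make the mapping notion above concrete: encode each
domain as a set of relationship triples and check that a proposed
entity mapping carries every source relationship onto a relationship
that actually holds in the target. The encoding and names below are
invented for illustration only.]

```python
def is_analogy(mapping, source_rels, target_rels):
    """A mapping of source entities to target entities counts as an
    analogy if every source relationship (rel, a, b) maps onto a
    relationship that actually holds in the target domain."""
    return all((rel, mapping[a], mapping[b]) in target_rels
               for rel, a, b in source_rels)

# Toy domains: the solar system and the Bohr atom (hypothetical encoding).
solar = {("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun")}
atom  = {("attracts", "nucleus", "electron"),
         ("revolves_around", "electron", "nucleus")}
m = {"sun": "nucleus", "planet": "electron"}
```

Here is_analogy(m, solar, atom) holds, while the reversed mapping of
sun to electron does not, since the attraction relationship fails to
carry over.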
Is this model of creativity--making interesting analogies--valid
across the spectrum of creative activities, from the hard sciences
(Physics, Chemistry, etc.) to the fine arts (painting, music)?
Is there more to creativity than making interesting analogies? I am
inclined to believe that making interesting analogies is at the heart
of all intelligent activity that is described as creative.
Uttam Mukhopadhyay
General Motors Research Labs.
(313)575-2105
Net address: mukhop@gmr.com
------------------------------
Date: Tue 10 Jun 86 09:38:50-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: AI and Education questionnaire
Below is a questionnaire requesting information from researchers who are
interested in the application of Artificial Intelligence in education.
If you are working in this area or are interested in this area please look
at the questionnaire and fill it out. [...] You can also fill out
the questionnaire on-line and return it by email to the address provided
below.
This questionnaire is part of a larger effort to facilitate communication
among researchers in this area. We are also maintaining a list of postal
addresses of those people that are interested in joining a special interest
group in AI and education. One activity planned is a special interest group
meeting at AAAI '86 this August in Philadelphia. An announcement of this
meeting will be forthcoming.
AI and Education Questionnaire
prepared 10 June 1986 by W. Lewis Johnson and Mark H. Richer
Please send your responses to:
W. Lewis Johnson
USC ISI
4676 Admiralty Way
Marina del Rey, CA 90292
or email to JOHNSON@ISI-VAXA.ARPA
(1) Name:
(2) Institution or Company:
(3) Street Address:
(4) City, State [or Country], Zip Code:
(5) Work Phone(s):
(6) E-Mail address(es):
(7) Are you interested in membership in an AI and Education group if one is
officially formed?
(8) What kind of organization(s) are you connected with? (Check one or more)
1. academic research laboratory
2. academic software development center
3. industrial or commercial research laboratory
4. commercial software company
5. educational institution (please explain)
6. government or military research & development
7. other (please specify)
(9) Please characterize your interest and involvement in AI and Education.
Please check one and elaborate.
1. I am currently building an AI-based instructional system. (Please
describe)
2. I am planning to build an AI-based instructional system. (Please
describe)
3. I'm not currently planning to build an instructional system, but I
want to keep abreast of developments in the field. (Why?)
4. I'm generally curious about the field. (Why?)
(10) Please list the subject areas that interest you (e.g., arithmetic, medical
diagnosis, auto mechanics, etc.).
(11) Is your work targeted to a specific student population? If so, please
indicate which.
1. pre-school or elementary school students
2. junior high school or high school students
3. disabled or special students
4. college students
5. post-graduate or professional students
6. vocational trainees
7. military training
8. industrial training
9. other (please describe)
(12) Which do you consider to be among your MOST central interests?
1. authoring tools or environments (general architectures)
2. diagnosis of student errors and misconceptions
3. educational games
4. explanation and knowledge transfer techniques
5. designing curricula that use AI-based systems
6. interactive video or CD-ROM
7. micro-worlds or learning environments
8. natural language
9. representation and codification of domain knowledge for the purpose
of instruction
10. representation and codification of general problem-solving knowledge
for the purpose of instruction
11. representation and codification of teaching knowledge for the
purpose of instruction
12. student modeling
13. tutorial strategies
14. user-interfaces (including use of computer graphics in general)
15. user-modeling (for explanation, on-line contextual help, user-
interfaces)
16. voice recognition/synthesis
17. other (please specify)
(13) Which of the following would you like to see a special interest group in
AI and Education offer? (0=not important, 1=important, 2=very important)
1. electronic discussion list
2. bibliographic references without abstracts/reviews
3. bibliographic references with abstracts/reviews
4. annual meeting at AAAI
5. periodic focused workshops
6. high quality feedback on paper drafts, proposals, ideas, etc.
7. job announcements
8. other:
------------------------------
End of AIList Digest
********************
∂16-Jun-86 0108 LAWS@SRI-AI.ARPA AIList Digest V4 #149
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Jun 86 01:08:03 PDT
Date: Sun 15 Jun 1986 23:18-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #149
To: AIList@SRI-AI
AIList Digest Monday, 16 Jun 1986 Volume 4 : Issue 149
Today's Topics:
Seminars - Possible Worlds Planning (SRI) &
Automatic Expert System Induction (NASA Ames) &
Learning by Selection (CMU) &
Connectionist Knowledge Representation System (CMU) &
Object Recognition using Category Models (UPenn) &
CODER Information Retrieval (VPI),
New Society - Bay Area AI and Education Meeting
----------------------------------------------------------------------
Date: Wed 11 Jun 86 11:27:30-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Possible Worlds Planning (SRI)
POSSIBLE WORLDS PLANNING
Matt Ginsberg (SJG@SAIL)
Stanford University
11:00 AM, MONDAY, June 16
SRI International, Building E, Room EJ228 (new conference room)
The size of the search space is perhaps the most intractable of all of
the problems facing a general-purpose planner. Some planning methods
(means-ends analysis being typical) address this problem by
encouraging the system designer to give the planner domain-specific
information (perhaps in the form of a difference table) to help govern
this search.
This paper presents a domain-independent approach to this problem
based on the examination of possible worlds in which the planning goal
has been achieved. Although a weak method, the ideas presented lead
to considerable savings in many examples; in addition, the natural
implementation of this approach has the attractive property that
incremental efforts in controlling the search provide incremental
improvements in performance. This is in contrast to many other
approaches to the control of search or inference, which may require
large expenditures of effort before any benefits are realized.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: Thu, 12 Jun 86 00:18:33 pdt
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - Automatic Expert System Induction (NASA Ames)
Subject: June 17, 1986, NASA Ames AI Forum, Automatic Induction
National Aeronautics and Space Administration
Ames Research Center
AMES AI FORUM
SEMINAR ANNOUNCEMENT
SPEAKER: Dr. Peter Cheeseman
Information Sciences Office
NASA Ames Research Center
TOPIC: Automatic Induction of Probabilistic Expert Systems
Many have realized that expert systems that make decisions under uncertainty
must represent this uncertainty and manipulate it correctly. This cannot be
done in general by "symbolic" (i.e. non-numeric) methods or by sprinkling
numbers over logical inference, as advocated by many authors in AI. Probability
has been proved to be the only consistent inference scheme if uncertainty is
represented by a real number. Probabilistic inference requires assessing the
effect of ALL the relevant evidence on the hypothesis of interest through ALL
the possible chains of inference (rather than establishing a single path from
axioms to theorem, as in logic). However, some methods used in probabilistic
inference in AI (e.g. Prospector) impose strong constraints on the structure of
the information (e.g. conditional independence) or require large amounts of
information. The solution to this problem is to use Maximum Entropy to spread
the uncertainty over the set of possibilities as evenly as possible consistent
with the known information. A computationally efficient method for performing
the maximum entropy calculation will be presented as well as a method for
extracting the necessary probabilistic information directly from data. The
result is a complete probabilistic expert system without using an expert.
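[The talk's efficient maximum-entropy method is not given here, but
the idea of spreading uncertainty as evenly as possible consistent
with the known information can be illustrated on the textbook
one-constraint case: a die whose mean is known. The function name and
setup below are illustrative, not from the talk.]

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0):
    """Maximum-entropy distribution over faces 1..6 whose mean equals
    target_mean: p_i is proportional to exp(lam * i), with the
    multiplier lam found by bisection. With target_mean = 3.5 this
    recovers the uniform distribution, the maximally even case."""
    faces = range(1, 7)
    def mean(lam):
        w = [math.exp(lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z
    for _ in range(100):                  # bisection on the multiplier
        mid = (lo + hi) / 2.0
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # a die reported to average 4.5: weight shifts high
```

The constrained mean is matched exactly while no further structure is
imposed, which is the "as evenly as possible" property the abstract
describes.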
DATE: Tuesday, TIME: 10:30-11:30 am BLDG. 239 Room B39
June 17, 1986 (Basement Conf. Room)
POINT OF CONTACT: Alison Andrews PHONE NUMBER: (415)694-6741
NET ADDRESS: mer.andrews@ames-vmsb.ARPA
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. Do not
use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
Date: 12 June 1986 1156-EDT
From: Richard Wallstein@A.CS.CMU.EDU
Subject: Seminar - Learning by Selection (CMU)
The CMU Summer Research Seminar Series continues this Friday, June 13 at
2:30 PM, 7500 WeH with a talk by Geoffrey Hinton on his new research:
A New Algorithm for Learning by Selection
Imagine a complicated non-linear process that contains specific steps that are
controlled by switches which can be on or off. Each switch has a particular
stored probability of being on. Using these probabilities, we generate a
random combination of switch settings and then run the process and decide
whether the result is good or bad. I shall describe a new learning algorithm
that uses information about the goodness of the outcomes to revise the stored
probabilities associated with the switches. The algorithm is guaranteed to
change the switch probabilities in such a way that future random combinations of
switch settings are more likely to produce good outcomes. It can be applied to
stochastic processes of arbitrary complexity. If each switch is a synapse, it
suggests a new model of learning in the cortex. If each switch is an enzyme
and its stored probability is the relative frequency of the relevant gene in
the gene pool, the learning algorithm is an efficient way of using the
information provided by survival to optimize gene frequencies. The extension to
optimizing frequencies of gene combinations appears to be feasible.
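[The abstract does not state the update rule. The sketch below is a
speculative illustration of one reward-driven way to revise stored
switch probabilities -- compare each outcome to a running baseline and
nudge probabilities toward settings that scored well. The names and
the specific rule are guesses, not Hinton's algorithm.]

```python
import random

def learn_by_selection(evaluate, n_switches, trials=2000, lr=0.05):
    """Keep one stored on-probability per switch; sample settings,
    run the process, and nudge probabilities toward settings whose
    outcomes beat a running-average baseline."""
    p = [0.5] * n_switches
    baseline = 0.0
    for _ in range(trials):
        settings = [random.random() < pi for pi in p]
        reward = evaluate(settings)       # "good or bad" outcome
        advantage = reward - baseline     # compare to running mean
        baseline += 0.01 * (reward - baseline)
        for i, on in enumerate(settings):
            target = 1.0 if on else 0.0
            p[i] += lr * advantage * (target - p[i])
            p[i] = min(0.99, max(0.01, p[i]))  # keep exploring
    return p

# Toy process: the outcome is good when the first three switches are on.
random.seed(0)  # reproducible demo
goal = lambda s: 1.0 if (s[0] and s[1] and s[2]) else 0.0
probs = learn_by_selection(goal, n_switches=5)
```

After training, the stored probabilities for the three goal switches
climb toward one, so random settings drawn from them are increasingly
likely to produce good outcomes.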
------------------------------
Date: 11 Jun 86 01:17:06 EDT
From: Mark.Derthick@g.cs.cmu.edu
Subject: Seminar - Connectionist Knowledge Representation System (CMU)
I will present my thesis proposal, "A Connectionist Knowledge Representation
System," 2pm Wednesday, June 18, in 5409.
I propose to develop a knowledge representation system that is functionally
similar to KL2, but implemented on a parallel, non-symbolic architecture.
Answering queries is carried out by a Boltzmann Machine
network in which concepts, roles, and individuals are represented by
patterns of activity of very simple processing units. By choosing good
representations, a small network suffices to capture the knowledge as
pairwise interactions among the units in the network. A single parallel
constraint satisfaction search accomplishes the answering process. I will
prove that for any definable knowledge base, the network constructed will
answer queries as specified by the formal knowledge level semantics.
------------------------------
Date: Wed, 11 Jun 86 14:05 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Object Recognition using Category Models (UPenn)
OBJECT RECOGNITION USING FUNCTION BASED CATEGORY MODELS
Ph. D. Thesis Proposal
Franc Solina
GRASP Laboratory
UNIVERSITY of PENNSYLVANIA
Department of Computer and Information Sciences
Philadelphia, PA 19104-6389
Phone (215) 898 8298
Net address: franc@upenn
We propose a modeling system for recognition of generic
objects. Based on the observation that fulfilling of the
same function results in similar shapes we will consider
object categories that are formed around the principle of
functionality. The representation consists of a prototypi-
cal object represented by prototypical parts and relations
between these parts. Parts are modeled by superquadric
volumetric primitives which are combined via boolean opera-
tions to form objects. Variations between objects within a
category are described by allowable changes in structure and
shape deformations of prototypical parts. Each prototypical
part and relation has a set of associated features that can
be recognized in the images. The recognition process
proceeds as follows: the input is a pair of stereo
reflectance images. The closed contours and sparse 3-D
points, the result of low-level vision, are analyzed to find
domain-specific features. These features are used to index
the model data base to make hypotheses. The selected
hypotheses are then verified on the geometric level by
deforming the prototype in an allowable way to match the
data. We base our
design of the modeling system upon current psychological
theories of human visual perception.
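Superquadric primitives of the kind mentioned above are conventionally defined by an inside-outside function. The sketch below uses the standard formulation (half-axes a1, a2, a3 and shape exponents e1, e2); the parameter names follow the common convention and may differ in detail from the thesis notation.

```python
# Superquadric inside-outside function for a volumetric part primitive.
def superquadric_f(x, y, z, a1=1.0, a2=1.0, a3=1.0, e1=1.0, e2=1.0):
    """Returns < 1 for points inside the surface, 1 on it, > 1 outside."""
    xy = (abs(x / a1) ** (2 / e2) + abs(y / a2) ** (2 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2 / e1)

# With e1 = e2 = 1 the primitive is an ellipsoid: (1,0,0) lies on the
# unit sphere's surface, (0,0,0) inside, (2,0,0) outside.
print(superquadric_f(1, 0, 0))   # 1.0
print(superquadric_f(0, 0, 0))   # 0.0
```

Varying e1 and e2 deforms the primitive continuously from ellipsoids toward boxes and pinched shapes, which is what makes the family attractive for modeling prototypical parts.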
advisor: R. Bajcsy
committee: N. Badler, H. ElGindy, J. Kender (Columbia University).
Time: Monday, June 16, 11 PM, room 216
------------------------------
Date: Tue, 27 May 86 10:31:37 edt
From: vtcs1::fox
Subject: Seminar - CODER Information Retrieval (VPI)
[Forwarded from IRList Digest V2#26 by Laws@SRI-AI.]
The M.S. defense of Robert K. France will be held at 10am Monday June 2 in
Norris 301. The title of his thesis is "An Artificial Intelligence Environment
for Information Retrieval Research."
The CODER (COmposite Document Expert/extended/effective Retrieval)
project is a multi-year effort to investigate how best to apply
artificial intelligence methods to increase the effectiveness of
information retrieval systems. Particular attention is being given to
analysis and representation of heterogeneous documents, such as
electronic mail digests or messages, which vary widely in style,
length, topic, and structure. In order to ensure system adaptability
and to allow reconfiguration for controlled experimentation, the
project has been designed as a moderated expert system. This thesis
covers the design problems involved in providing a unified
architecture and knowledge representation scheme for such a system,
and the solutions chosen for CODER. An overall object-oriented
environment is constructed using a set of message-passing primitives
based on a modified Prolog call paradigm. Within this environment is
embedded the skeleton of a flexible expert system, where task
decomposition is performed in a knowledge-oriented fashion and where
subtask managers are implemented as members of a community of experts.
A three-level knowledge representation formalism of elementary data
types, frames, and relations is provided, and can be used to construct
knowledge structures such as terms, meaning structures, and document
interpretations. The use of individually tailored specialist experts
coupled with standardized blackboard modules for communication and
control and external knowledge bases for maintenance of factual world
knowledge allows for rapid prototyping, incremental development, and
flexibility under change. The system as a whole is structured as a
set of communicating modules, defined functionally and implemented
under UNIX using sockets and the TCP/IP protocol for communication.
Inferential modules are being coded in MU-Prolog; non-inferential
modules are being prototyped in MU-Prolog and will be re-implemented
as needed in C++.
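A message-passing primitive "based on a modified Prolog call paradigm" can be pictured as a module that accepts a goal term and streams back variable bindings, one per solution. The sketch below is a generic illustration of that idea; the module, facts, and unifier are invented, and CODER's actual primitives ran over UNIX sockets rather than in-process generators.

```python
VAR = str.isupper   # convention: a term starting upper-case is a variable

def unify(goal_args, fact_args):
    """Return a binding dict if the argument lists match, else None."""
    if len(goal_args) != len(fact_args):
        return None
    bindings = {}
    for g, f in zip(goal_args, fact_args):
        if VAR(g[0]):
            if g in bindings and bindings[g] != f:
                return None     # same variable, conflicting values
            bindings[g] = f
        elif g != f:
            return None
    return bindings

class Module:
    """A subtask manager answering 'call' messages Prolog-style."""
    def __init__(self, facts):
        self.facts = facts                  # {functor: [arg tuples]}

    def call(self, functor, args):
        """The call message: yield one binding set per solution."""
        for fact_args in self.facts.get(functor, []):
            b = unify(args, fact_args)
            if b is not None:
                yield b

lexicon = Module({"term": [("retrieval", "noun"), ("index", "noun"),
                           ("index", "verb")]})
# Ask the lexicon module: term(index, POS)?
print(list(lexicon.call("term", ("index", "POS"))))
# -> [{'POS': 'noun'}, {'POS': 'verb'}]
```

Wrapping the same call interface around a socket gives the functional decoupling the thesis describes: the client cannot tell whether the expert answering is local or remote.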
Host: Dr. Edward A. Fox, Dept. of Computer Science
------------------------------
Date: Fri 13 Jun 86 11:45:06-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: New Society - Bay Area AI and Education Meeting
Date: 13 Jun 86 10:27 PDT
From: dmrussell.pa@Xerox.COM
Subject: Bay Area AI and Education Meeting: June 23rd, 6PM, PARC
What: Bay Area AI and Education Group holding its first meeting.
Where: Xerox Palo Alto Research Center (PARC)
3333 Coyote Hill Rd.
Palo Alto, CA
(send for detailed directions)
When: June 23rd, 6PM
Who:
Speakers: Jim Greeno and Peter Pirolli
"Some New Directions in the Science of
Instructional Design"
Math Science and Technology
Education Dept.
University of Calif. Berkeley
Host: Daniel Russell
Intelligent Systems Lab
PARC
Amplification:
BARRET (Bay ARea Research in Educational Technology) is an
attempt to bring together many of the local people working in the area
of applying AI to education. There are significant efforts at
Berkeley, Stanford, UCSF, SRI, PARC and so on. BARRET is a way of
establishing some communication between the various groups, by hosting
technical talks on this topic and setting aside time for informal
discussion.
To do this, BARRET will be implemented as a moving sequence of talks
circulating throughout the Bay Area on a (roughly) monthly basis. We
hope to have high quality talks on areas of mutual interest to be
followed by an equally high-quality dinner that will allow us to meet
and discuss topics further.
This first meeting of BARRET will be followed by dinner at Chef Chu's,
assuming that we can get a reasonable headcount. (With enough warning,
non-MSG-ers and veggies can be accommodated.)
So, if you are interested in attending, please message (or call) me and
let me know of your intentions. That will allow us to do some planning
for our first meeting.
-- Dan Russell --
ArpaNet: DMRussell.PA@XEROX.COM
Phone: (415)-494-4308
Mail: Dan Russell
ISL
3333 Coyote Hill Rd.
Palo Alto, CA 94304
------------------------------
End of AIList Digest
********************
∂16-Jun-86 0315 LAWS@SRI-AI.ARPA AIList Digest V4 #150
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 16 Jun 86 03:15:41 PDT
Date: Sun 15 Jun 1986 23:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #150
To: AIList@SRI-AI
AIList Digest Monday, 16 Jun 1986 Volume 4 : Issue 150
Today's Topics:
Seminars - Modular Construction of Logics for Specification (CMU) &
Dependent Types (MIT) &
Programming Languages & Temporal Knowledge (Edinburgh),
Conference - APS Workshop at AAAI-86 &
Temporal Aspects in Information Systems &
Symposium on Connectionism
----------------------------------------------------------------------
Date: 10 June 1986 1542-EDT
From: Theona Stefanis@A.CS.CMU.EDU
Subject: Seminar - Modular Construction of Logics for Specification (CMU)
PS SEMINAR
Date: Friday, 20 June
Time: 10:00
Place: WeH 4605
Modular Construction of Logics for Specification
Martin Sadler
Imperial College, London
mrs@doc.ic.ac.uk
A typical informal presentation of a logic for reasoning
about some aspect of computing is:
Nice logic = First-order logic + Temporal bit
We can ask two questions about this equation. Firstly, what
is going on with the '+' and other similar combinators?
Secondly, how do we guarantee that such equations are well
behaved - in the sense that the logics we build will support
the ideas of specification and stepwise refinement?
To answer these questions one needs to have a formal
framework for talking about logics. Our preference is for a
proof theoretic framework. Crudely:
Logic "=" presentation of a consequence relation
Combinator "=" function of type: logic* -> logic
Modularity principle "=" interchange principle
between combinators
One important kind of combinator that has not received
the attention it deserves is a 'talksabout' combinator that
gives one a meta-level mechanism with respect to the logic
it is applied to. Together with the observation that canon-
ical "arrow" logics can be built on the collections of vari-
ous kinds of preserving maps between logics, we can start
talking about logics as solutions to "logic-equations":
LOGIC = talksabout(logic)
+ talksabout(nice_logic)
+ talksabout(nice_logic
-> implementation_logic)
The seminar will attempt to show how such a framework
can be used, as part of an interactive environment, to sup-
port software engineers in setting up logics for specifica-
tion and verification.
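One minimal reading of the "logic-equations" above treats a logic as a presentation (a set of inference rules) and '+' as a combinator of type logic* -> logic that merges presentations. The sketch below is an illustrative toy under that reading; the rule encodings are placeholders, not real proof systems.

```python
from typing import FrozenSet, Tuple

Rule = Tuple[str, ...]            # (premise..., conclusion)
Logic = FrozenSet[Rule]

def plus(*logics: Logic) -> Logic:
    """The '+' combinator: the smallest presentation containing each."""
    return frozenset().union(*logics)

first_order: Logic = frozenset({("A", "A->B", "B")})    # modus ponens
temporal_bit: Logic = frozenset({("always A", "A")})    # []A |- A

nice_logic = plus(first_order, temporal_bit)
# '+' as set union is idempotent, commutative and associative, one
# kind of "well-behavedness" the talk asks of combinators.
assert plus(nice_logic, first_order) == nice_logic
```

A 'talksabout' combinator would be a further function of the same type that adds meta-level rules mentioning the argument logic's own presentation; it does not fit in a toy this small.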
------------------------------
Date: Tue 10 Jun 86 14:45:38-EDT
From: Lisa F. Melcher <LISA@XX.LCS.MIT.EDU>
Subject: Seminar - Dependent Types (MIT)
Date: Thursday, June 19, 1986
Time: 2:45 p.m......Refreshments
3:00 p.m......Lecture
Place: NE43 - 512A
"DEPENDENT TYPES -- FIFTEEN YEARS LATER"
J.Y.GIRARD
University of Paris VII
Our system F of polymorphic lambda calculus (developed independently by
Reynolds) is attracting increasing interest because of its relation to
polymorphic types in programming, although our original motivation for
studying the system was quite different. In this talk we summarize the basic
theoretical properties of the type system and compare the computer
scientists' and logicians' views of it.
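Because System F terms erase to untyped lambda terms, the classic polymorphic examples can be run directly in an untyped language. A sketch of the flavor of Girard/Reynolds polymorphism; the System F typings shown in comments are standard, and the Python encoding is only illustrative.

```python
# id : forall a. a -> a
ident = lambda x: x

# Church numerals inhabit  forall a. (a -> a) -> a -> a
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))

def to_int(n):
    """Interpret a Church numeral at the type int."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
# Polymorphism lets a numeral be applied at any type, including the
# numeral type itself: applying two to two computes 2^2.
print(to_int(two(two)))   # 4
```

In System F the self-application two(two) is well typed because the outer occurrence is instantiated at the numeral type itself, which is exactly the impredicativity that made the system's normalization proof hard.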
Sponsored by TOC, Laboratory for Computer Science
Albert Meyer, Host
------------------------------
Date: Fri, 6 Jun 86 18:00:06 -0100
From: Gideon Sahar <gideon%edai.edinburgh.ac.uk@Cs.Ucl.AC.UK>
Subject: Seminars - Programming Languages & Temporal Knowledge (Edinburgh)
EDINBURGH AI SEMINARS
Date: Wednesday 28th May 1986
Place: Department of Artificial Intelligence
Seminar Room
Forrest Hill
EDINBURGH.
Dr. M. Steedman, Centre for Cognitive Sciences and Department of Artificial
Intelligence will give a seminar entitled - "Combinators, Universals and
Natural Language Processing".
Combinators are primitive elements in terms of which we can define the notion
of defining a function, as with the lambda operator of LISP, without the use
of the bound variables which are associated with that operator, and which are
so expensive for interpreters of LISP and related functional programming
languages. For some time, my colleagues and I have been arguing that the
syntax and semantics of certain problematic "unbounded dependencies" and
"reduced" constituents in natural language constructions such as English
relative clauses and coordinate constructions can be elegantly captured by
extending Categorial Grammars (discussed by Ewan Klein here a couple of months
ago) with operations corresponding to certain simple combinators. Such
grammars hold out the promise of a theory according to which natural language
syntax is a very direct reflection of a computationally efficient applicative
semantics which minimises the use of bound variables. The paper concerns
some implications for processing and the prediction of certain contrasts
between the grammars of Spanish and English.
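The point about eliminating bound variables can be made concrete with the classic S, K, and B combinators, from which functions are assembled without ever naming a variable. A hand-rolled sketch, not Steedman's actual grammar operations:

```python
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x
B = lambda f: lambda g: lambda x: f(g(x))   # composition, central in CCG

# The identity function, built with no bound variable of its own:
# I = S K K.
I = S(K)(K)
print(I(42))   # 42

# B composes functions point-free: (double . successor) 10 = 22.
double = lambda n: 2 * n
successor = lambda n: n + 1
print(B(double)(successor)(10))   # 22
```

An interpreter manipulating such combinator expressions never performs the environment lookups that bound variables force on LISP-style evaluators, which is the efficiency claim in the abstract.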
Date: Wednesday, 4th June 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence,
Seminar Room,
Forrest Hill,
EDINBURGH.
Professor Colin Bell, University of Iowa will give a seminar entitled -
``A Point-Based Representation of Temporal Knowledge in Automated
Project Planning".
A point-based temporal reasoning system is presented as an alternative
to existing interval-based temporal logics. It appears to be
especially applicable in nonlinear hierarchical planning where such
temporal quantities as activity durations and scheduling delays are
uncertain. Temporal constraints representable in this system fall into
a very restricted class. However, it is argued that representing more
general constraints results in computational intractability. Details
of implementation are discussed.
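One standard realization of such a point-based system represents time points with bounded differences (t_j - t_i <= d) and computes the tightest implied bounds by an all-pairs shortest-path closure. The sketch below is a generic illustration of that restricted constraint class, not necessarily Bell's formulation; the activities and bounds are invented.

```python
INF = float("inf")

def close(points, constraints):
    """constraints: {(i, j): d} meaning t_j - t_i <= d.
    Returns the tightest bounds, or None if inconsistent."""
    d = {(i, j): (0 if i == j else INF) for i in points for j in points}
    for (i, j), bound in constraints.items():
        d[i, j] = min(d[i, j], bound)
    # Floyd-Warshall closure: a path i -> k -> j tightens i -> j.
    for k in points:
        for i in points:
            for j in points:
                d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    if any(d[i, i] < 0 for i in points):
        return None          # negative cycle: no consistent schedule
    return d

pts = ["start", "mix", "bake"]
c = {("start", "mix"): 10,    # mixing starts within 10 of start
     ("mix", "bake"): 30,     # baking starts within 30 of mixing
     ("bake", "mix"): -5}     # ...but at least 5 after mixing
d = close(pts, c)
print(d["start", "bake"])    # 40: bake - start <= 40
```

Restricting constraints to bounded differences keeps this closure polynomial, which is the tractability trade-off the abstract argues for against more general interval relations.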
Date: Wednesday, 11th June 1986
Time: 2.00 p.m.
Place: Department of Artificial Intelligence,
Seminar Room F10,
80 South Bridge,
EDINBURGH.
Mr. Peter Jackson, Department of Artificial Intelligence, University of
Edinburgh will give a seminar entitled - ``Towards a Methodology for
Designing Problem Solving Architectures in the Object-Oriented Style".
Although current object-oriented systems provide the programmer with both
software modules (such as production rule interpreters and theorem provers)
and software tools (such as browsers and debuggers), they fail to provide a
set of guidelines as to how to select and combine modules to create a
particular architecture. Too often, one is given some combination of
Flavors, OPS and Prolog (or their look-alikes), and then left to get on with
it. A further criticism is that the modules provided do not lend themselves
to adaptation by specialization in the spirit of the object-oriented
environment in which they are embedded.
A methodology for creating 'abstract architectures', which can be
instantiated via a process of specialization, is described in the context of
a new object-oriented programming language called SLOOP. A detailed example
is given of how to create a generic production rule architecture whose
behaviour is easy to modify incrementally, together with a sample problem
solving program. It is suggested that certain features of SLOOP, namely
its transparency and the fact that it is mostly implemented in itself, make
it particularly useful as a vehicle for tasks of this kind, while some of
the facilities offered, such as pattern-matched parameter-passing and the
ability to compile SLOOP into Lisp and thence into native code, encourage a
functional style of programming without extracting too high a price in terms
of efficiency.
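The "abstract architecture" idea can be sketched in plain Python rather than SLOOP: a generic production-rule interpreter leaves conflict resolution as a hook, and a concrete architecture is instantiated by specialization (subclassing). Class and rule names below are invented for illustration.

```python
class ProductionSystem:
    """Generic recognize-act interpreter: match, select, act.
    'select' is the part a specialization must supply."""
    def __init__(self, rules):
        self.rules = rules                 # list of (condition, action)

    def matches(self, memory):
        return [r for r in self.rules if r[0](memory)]

    def select(self, conflict_set):
        raise NotImplementedError          # supplied by specialization

    def run(self, memory, steps=10):
        for _ in range(steps):
            conflict_set = self.matches(memory)
            if not conflict_set:
                break                      # quiescence: no rule fires
            _, action = self.select(conflict_set)
            action(memory)
        return memory

class FirstMatch(ProductionSystem):
    """One concrete architecture: resolve conflicts by rule order."""
    def select(self, conflict_set):
        return conflict_set[0]

def bump(memory):
    memory["n"] += 1

rules = [(lambda m: m["n"] < 5, bump)]     # count up to 5, then stop
print(FirstMatch(rules).run({"n": 0}, steps=100)["n"])   # 5
```

Changing the behaviour incrementally means overriding one hook at a time, which is the kind of guideline the abstract says current object-oriented toolkits fail to give.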
------------------------------
Date: Thu, 12 Jun 86 15:17:26 edt
From: als@mitre-bedford.ARPA (Alice L. Schafer)
Subject: Conference - APS Workshop at AAAI-86
---> The cutoff date for receiving a request for participation in the
Workshop on Automatic Programming at AAAI-86 was accidentally omitted
from the notice. While the original date was June 15, we will extend
it to June 30 to give people sufficient time to respond.
...
The workshop will be held on Thursday August 14th, and will last
approximately three hours. The current plan is that one and a half hours will
be occupied by brief (seven-minute) presentations of current work, followed
by a panel discussion with active audience participation, moderated by
Tom Cheatham of Harvard. Due to the size of the available rooms, we
may have to limit the audience to researchers who have experience with
some aspect of the APS problem.
If you wish to present your current work or be on the panel you should
send us a 200-800 word abstract. The decision on who will participate will
be based on these abstracts. If you wish to participate as a member of the
audience instead, send us a short note containing a description of your work
or references to pertinent papers you have written. If we need to limit the
audience we will base our decisions on these responses, which should be sent
by June 30.
Please post a printed copy of this notice at your workplace.
Organized by:
Alice Schafer Richard Brown Richard Piazza
(617) 271-2363 (617) 271-7559 (617) 271-2363
als@mitre-bedford.arpa rhb@mitre-bedford.arpa rlp@mitre-bedford.arpa
of the Knowledge-Based Automatic Programming Project (ISFI)
The MITRE Corporation
Mail Stop A-045
Burlington Road
Bedford, MA 01730
------------------------------
Date: Mon, 9 Jun 86 12:06:07 PDT
From: Lougie Anderson <lougiea%crl.tek.csnet@CSNET-RELAY.ARPA>
Reply-to: Lougie Anderson <lougiea%tekcrl.uucp@CSNET-RELAY.ARPA>
Subject: Conference - Temporal Aspects in Information Systems
Conference Announcement
TEMPORAL ASPECTS IN INFORMATION SYSTEMS
Sophia-Antipolis, France
May 13-15, 1987
Temporal Aspects in Information Systems: A working confer-
ence by IFIP Technical Committee TC 8 "Information Systems"
in cooperation with AFCET, the French Computer Science and
Information Society.
MOTIVATION
Recent developments in the area of information systems
emphasize the role played by time. Research in information
systems design has pointed to the need for a realistic world
model which includes representations not only for snapshot
descriptions of the real world, but also for histories of
the evolution of such descriptions over time. These
developments still suffer from a lack of concepts, languages
and theoretical foundations dealing with the design of tem-
poral and behavioral aspects of information systems. More-
over, temporal correctness criteria and analysis are neces-
sary. In addition the management of computerized informa-
tion systems requires new mechanisms to allow the implemen-
tation and the handling of these elements. Papers can be
submitted on the following items.
TOPICS
Theoretical and Modeling Aspects of the Time Dimension of
Information: Time theory, temporal logic, causality theory,
linguistic and philosophic approaches of time. Time model-
ing, behavioral modeling, languages for specification and
query, temporal/causal dependencies and constraints, tem-
poral consistency checking.
Time and Behavior Implementation and Handling: Temporal
dimension of databases, historical databases implementation,
user interface for historical databases, snapshots, time and
behavior handling in computerized systems, time and event
mechanisms, management of multiple versions, data time
versus transaction time, concurrency and synchronization
problems.
Applications with a Temporal Dimension: Time in decision
support systems for prediction and planning achievement,
time dimension in CAD and CAM systems, in large statistical
data bases, in large socio-economic data bases, in medical
systems, and real-time systems.
GENERAL CONFERENCE CHAIRMAN
Francois Bodart
Institute Notre-Dame de la Paix
21, rue Grangagnage
5000 Namur, Belgium
PROGRAMME COMMITTEE CHAIRMAN
Colette Rolland
Universite Paris I
12, place du Pantheon
75231 Paris Cedex, France
ORGANIZING COMMITTEE CHAIRMAN
Michel Leonard
Centre Universitaire d'Informatique
Universite de Geneve
24, rue du General-Dufour
1211 Geneve 4, Switzerland
PROGRAMME COMMITTEE
M. Adiba, IMAG, France
J. Allen, University of Rochester, USA
L. Anderson, Tektronix, USA
V. de Antonellis, University of Milano, Italy
G. Ariav, Tel Aviv University, Israel
F. Bodart, Institut Notre-Dame de la Paix, Belgium
J. Bubenko, University of Stockholm, Sweden
J. Clifford, New York University, USA
A. Furtado, University of Rio de Janeiro, Brazil
M. Jarke, University of Frankfurt, Germany
M. Leonard, University of Geneva, Switzerland
S. Navathe, University of Florida, USA
P. Nobecourt, University Paris I, France
A. Olive, University of Barcelona, Spain
B. Pernici, Milano Polytechnic School, Italy
U. Schiel, Federal University of Paraiba, Brazil
A. Sernadas, University of Lisbon, Portugal
HOW TO SUBMIT
Original papers in English of up to 5,000 words are sought
on topics included in, but not limited to, the proposed
list. Papers should be received before October 1st, 1986.
Authors should submit four copies of the full paper to:
AFCET
TAIS, Conference
156, boulevard Pereire
75017 Paris, France
IMPORTANT DATES
Papers due: October 1, 1986
Acceptance notification: December 15, 1986
Final copy due: February 15, 1987
Conference: May 13-15, 1987
------------------------------
Date: 13 JUN 86 11:38-N
From: SCHNEIDER%CGEUGE51.BITNET@WISCVM.WISC.EDU
Subject: Conference - Symposium on Connectionism
Symposium and Workshop on
CONNECTIONISM :
MULTIPLE AGENTS, PARALLELISM AND LEARNING
=================================================================
Symposium 9th of September 1986
Workshop 10th - 12th of September 1986
LOCATION Geneva University, UNI II, Switzerland
The symposium and workshop are sponsored by the Swiss Group for
Artificial Intelligence and Cognitive Science (SGAICO), the Jean
Piaget Foundation and the Faculty of Psychology and Education
Science of the University of Geneva.
Symposium Programme : SYMPOSIUM DAY : 9TH OF SEPTEMBER
On the 9th of September a one-day symposium will be held on
"CONNECTIONISM : Multiple Agents , Parallelism and Learning"
where the main ideas of this paradigm in Artificial Intelligence
and Cognitive Science will be presented. The symposium is open to
the public. The goal of this symposium is to give an introduction
and survey of the problems of Connectionism.
09.00 - 10.30 THE SOCIETY THEORY OF MIND:
Marvin Minsky, MIT
10.45 - 12.00 THE LOCALIST POSITION IN CONNECTIONISM:
ON REPRESENTATION AND LEARNING
Jerome Feldman, University of Rochester
14.00 - 15.15 THE DISTRIBUTIONIST POSITION IN CONNECTIONISM:
ON REPRESENTATION AND LEARNING
Terry Sejnowski, Johns Hopkins University
15.30 - 16.45 LEARNING PARADIGMS IN CONNECTIONISM:
David Rumelhart, University of California
16.45 - 18.00 BUILDING WORKING CONNECTIONIST MODELS
David Waltz, Thinking Machines, USA
Entry fees for the SYMPOSIUM: STUDENTS: SFRS 40,- ;
UNIVERSITY MEMBERS: SFRS 100,- ; INDUSTRY: SFRS 250,-
The following persons receive an entry-fee reduction of 20 percent:
- Members and Students of the Faculty of Psychology and Education
Science of the University of Geneva
- Members of the Swiss Informatics Society (SI)
- Members of the Swiss Group for Artificial Intelligence and
Cognitive Science (SGAICO)
For further information and registration apply to the SYMPOSIUM
SECRETARY Mrs. Manuela Mounir
WORKSHOP PROGRAMME
After the Symposium a two-and-a-half-day workshop will take place
at Geneva University. The workshop is limited to 20 invited
attendees, whose research interests are in different aspects of
multiple agents, parallelism and learning. The goal of the
workshop is to discuss and elucidate different approaches and
their interrelations and to further conceptualise the present
problems and future promising research directions in
Connectionism. The workshop will be videotaped and later made
accessible to a wider audience.
Participants are:
Guenter Albers Genetic A.I. and Epistemics Lab. Geneva Uni.
Andre Boder MIT and Geneva University
Heiner Brand University of Bielefeld, Germany
Guy Cellerier Genetic A.I. and Epistemics Lab. Geneva Uni.
Stefano Cerri Mario Negri Institute, Milan
Jean-Jaques Ducret Genetic A.I. and Epistemics Lab. Geneva Uni.
Jerome Feldman University of Rochester, Rochester
Ken Haase Artificial Intelligence Lab., MIT
John Holland University of Michigan, Ann Arbor
Marvin Minsky Artificial Intelligence Lab., MIT
Rolf Pfeifer Institute for Informatics, Zuerich University
Mike Rosner ISSCO, Geneva University
Thomas Rothenfluh Conflict Research Center, Zuerich University
David Rumelhart University of California, San Diego
Terrence Sejnowski Johns Hopkins University, Baltimore
Zoltan Schreter Genetic A.I. and Epistemics Lab. Geneva Uni.
Luc Steels A.I. Lab, Free University, Brussels
John Sutton GTE Labs, Waltham, USA
David Waltz Thinking Machines, Cambridge, USA
ORGANISATION: Guenter Albers
GENETIC ARTIFICIAL INTELLIGENCE AND EPISTEMICS LABORATORY
University of Geneva, Switzerland
TEL.: (0041) 22 20 93 33 EXT.2623 (Switzerland)
REGISTRATION and SYMPOSIUM SECRETARY: Mrs. Manuela Mounir
FACULTY OF PSYCHOLOGY AND EDUCATION SCIENCE, UNIVERSITY OF GENEVA
CH-1211 Geneva 4, Switzerland
TEL.: (0041) 22 20 93 33 EXT.2657 (Switzerland)
Telex: 423801 UNI CH Geneve
For further (non-organisation-related) information send mail to
Guenter Albers or reply by email to Daniel Schneider:
to VMS/BITNET: to UNIX/EAN: (preferable)
BITNET: SCHNEIDER@CGEUGE51 shneider%cui.unige.chunet@CERNVAX
ARPA: SCHNEIDER%CGEUGE51.BITNET@WISCVM shneider%cui.unige.chunet@ubc.csnet
uucp: mcvax!cernvax!cui!shneider
X.400/ean: shneider@cui.unige.chunet
------------------------------
End of AIList Digest
********************
∂17-Jun-86 1821 LAWS@SRI-AI.ARPA AIList Digest V4 #151
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Jun 86 18:20:55 PDT
Date: Tue 17 Jun 1986 13:12-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #151
To: AIList@SRI-AI
AIList Digest Wednesday, 18 Jun 1986 Volume 4 : Issue 151
Today's Topics:
Queries - References on Natural Language & Aristotle &
Prolog Optimization & P-Shell & Knowledge Acquisition &
Expert Systems for Clinical Neuropsychological Assessment &
Expert System Validation and Verification &
Expert-Ease & Recursive Fixed-Point Solvers &
Cheeseman's Automatic Expert System Induction,
Psychology & Physics - Inside Out & Dr. Who
----------------------------------------------------------------------
Date: 2 Jun 86 16:56:00 PST
From: seismo!nwc-143b.ARPA!sefai
Subject: References on Natural Language???
[Forwarded from IRList Digest V2#26 by Laws@SRI-AI.]
I am investigating literature that will hopefully help me on my
master's thesis. Without being too specific, the topic centers around
schemes for representing natural language in a computer system. So far,
my list of references includes:
1. Handbook of Artificial Intelligence, Barr and Feigenbaum
2. NETL: A System for Representing and Using Real-World
Knowledge, Fahlman
3. Human Information Processing, Lindsay and Norman
4. A Theory of Syntactic Recognition for Natural Language,
Marcus
5. Principles of Artificial Intelligence, Nilsson
6. Basic English (series), Ogden
7. The Cognitive Computer on Language, Schank with Childers
8. Computer Models of Thought and Language, Schank and Colby
9. Artificial Intelligence, Winston
10. A Handbook of English Grammar, Zandvoort
I'd appreciate any good references others have come across and
I'd be more than happy to send out the list afterwards.
Gene Guglielmo
sefai@nwc-143b
[Note: Thank you for the offer of collecting references. You have
quite an unusual assortment of works! I encourage you to look at
"Introduction to Modern Information Retrieval" by Salton and McGill
and "Information Retrieval, 2nd ed." by C.J. VanRijsbergen for a
rather different perspective. Let us know more details of your plans
when you become more focused. - Ed]
------------------------------
Date: Sat 7 Jun 86 23:38:21-PDT
From: Ali Ozer <ALI@SU-SCORE.ARPA>
Reply-to: ali@score,taran@sushi
Subject: Curious about Aristotle, "Knowledge Processor"...
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
In p.19 of June 4 Campus Report, there is a short 2-column article
titled "Knowledge processor named Aristotle pays a visit." The article
says... "Modeling computer architecture after the human nervous system,
a Stanford graduate has developed Aristotle, a unique knowledge
processor. ... Modeled on synapses, the junctions between nerve cells,
Aristotle encodes information in fundamental units ranging from a
single character to a word, then a sentence, and finally a paragraph. ...
``You teach Aristotle like a child,'' he [John Voevodsky, the inventor]
said. ``Characters first, then words and sentences.'' ... Aristotle can
perform several tasks. It was first trained to turn a light on and
off, then to ring a bell, and finally to blow a whistle. ... "
Anyway, if you're curious from the above, you should get your hands on
a Campus Report and read the whole article. This machine just
sounds fascinating, but there isn't any technical information about it
in the paper. Does anyone out there know more about this? The
article makes it sound like this processor provides an approach to
intelligence that could easily replace most of the current AI techniques!
But, I don't know much about AI, and I certainly know very little about
this Aristotle, so I just don't know... If anyone has more info or
knows where there is more written about this "knowledge processor,"
I would like to hear about it.
Very curious about things I should not be curious about during
finals week, but am,
-Ali
------------------------------
Date: 13 Jun 86 04:25:15 GMT
From: sdcsvax!sdcrdcf!burdvax!psuvax1!gondor!hou@ucbvax.berkeley.edu (Po Hou)
Subject: A.I.(expert systems)
I am studying the application of Prolog to expert systems.
Are the following assumptions correct?
(1) When a predicate has been used recently, it will be used again in
the near future with higher probability than predicates that have not
been used recently (i.e., something like the working-set concept of
virtual memory).
For example, if the predicate call p(a,Y,Z) yields the set
{(X,Y,Z) | X=a}, what is the probability that p(a,Y,Z) will be
called again?
(2) What is typical user behavior in using an expert system?
(3) Is frequently used knowledge used again with higher probability?
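The working-set assumption in question (1) is exactly the bet a least-recently-used cache of predicate solutions makes: recently called goals are likely to recur, so their answers are kept hot. A toy sketch of that analogy; the cache design is illustrative, not a claim about any Prolog implementation.

```python
from collections import OrderedDict

class SolutionCache:
    """LRU cache of goal -> solutions, the working-set bet in code."""
    def __init__(self, capacity=2):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.hits = self.misses = 0

    def call(self, goal, solve):
        if goal in self.cache:
            self.hits += 1
            self.cache.move_to_end(goal)       # mark as recently used
            return self.cache[goal]
        self.misses += 1
        result = solve(goal)
        self.cache[goal] = result
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return result

cache = SolutionCache()
solve = lambda goal: {"X": "a"}                # stand-in for real search
for goal in ["p(a,Y,Z)", "q(b)", "p(a,Y,Z)", "p(a,Y,Z)"]:
    cache.call(goal, solve)
print(cache.hits, cache.misses)   # 2 2
```

The hit rate such a cache achieves on real query logs would be one way to test the assumption empirically.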
------------------------------
Date: 13 Jun 86 22:49:29 GMT
From: sdcsvax!noscvax!kanemoto@ucbvax.berkeley.edu (Nelson T. Kanemoto)
Subject: P-Shell Query
I'm looking for information on P-Shell, described in the article:
"Programming in P-Shell", by Newton S. Lee, IEEE Expert,
pg. 50-63, in the recent Summer 1986 issue.
If anyone knows the cost, availability, or any other information concerning
P-Shell, please send me a message:
kanemoto@nosc.arpa
Thanks in advance,
Nelson T. Kanemoto
Computer Sciences Corporation
NOSC Hawaii
------------------------------
Date: Fri, 13 Jun 86 14:52:26-1000
From: Jimmy Y. Cheng <cheng%humu@nosc.ARPA>
Subject: Knowledge Acquisition
I'm interested in the transfer of domain knowledge from an
expert to a knowledge engineer. Can anyone help me in
locating articles or references to people working in this area? Any
help would be greatly appreciated. Since this is the bottleneck in
building an expert system, any progress would be a boon to AI.
------------------------------
Date: 15 Jun 86 17:31:48 GMT
From: ucbcad!nike!topaz!harvard!ut-sally!ut-ngp!gknight@ucbvax.berkeley.edu
(Gary Knight)
Subject: Expert systems for clinical neuropsychological assessment.
A few weeks ago I posted an inquiry concerning my interest in the current
state of research and development on expert systems for clinical
neuropsychological assessment. I received several replies, some of which led
to some very useful material.
I would now like to re-post that inquiry, seeking still further input
from anyone who has such information and did *not* respond before. So . . .
Does anyone have information they can share with me
on research or development work with respect to
expert systems for application to clinical neuro-
psychological assessment? If so, please reply by
mail and I'll post a summary, including all previous
replies.
Thanks very much.
--
Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480).
Biopsychology Program, Univ. of Texas at Austin. "There is nothing better
in life than to have a goal and be working toward it." -- Goethe.
------------------------------
Date: Mon, 16 Jun 86 15:58 ???
From: PENN%NGSTL1%ti-eg.csnet@CSNET-RELAY.ARPA
Subject: Expert System Validation and Verification
I am doing a literature search on the validation and
verification of expert systems. I have found a few
articles manually, however, my database searches
(INSPEC, COMPENDEX, etc.) haven't been helpful.
I am getting more on the use of expert systems to
test other computer software than procedures/
methods for validating expert systems!
If you have any pertinent information, or some
good sources with carry-over potential to
expert systems I would appreciate being
contacted. In return I will be happy to
furnish you with the final literature search
information. Thank you!
Mary Penn
Knowledge Engineer
TI-Artificial Intelligence Laboratory
(214) 343-7667
P.O. Box 660246 M/S 3645
Dallas, TX 75266
PENN%NGSTL1@TI-EG.CSNET
[One validation effort was carried out by John Reiter for the HYDRO
expert system (an extension of Prospector that he, Rene Reboh, and
John Gashnig developed). Reiter used scattergrams and rank correlations
to compare various actual parameters with those predicted by the
system. The final SRI report was "Development of a Knowledge-Based
Interface to a Hydrological Simulation Program," May 1982, but I
believe most of the validation effort was documented in John's
dissertation. -- KIL]
------------------------------
Date: 16 Jun 86 18:58:19 GMT
From: ihnp4!houxm!mtuxo!mtgzy!jis@ucbvax.berkeley.edu (j.mukerji)
Subject: Info wanted on Expert-Ease
I just read a glossy on Expert-Ease, which is based on an inference engine
developed by Donald Michie at Edinburgh University. I would appreciate any
comments about it (good or bad) from anyone who has used it. I am
considering buying it, and of course would like to know whether it is all
that it is touted to be. If there is sufficient interest I will summarize
responses to this message in this newsgroup.
Thank you.
Jishnu Mukerji
AT&T Information Systems
Middletown, NJ
ihnp4!mtgzz!jis1
------------------------------
Date: Mon, 16 Jun 86 11:31 EDT
From: DSTEVEN%clemson.csnet@CSNET-RELAY.ARPA
Subject: Recursive fixed point solvers.
We are looking for a program to solve for fixed points of
recursive equations. Actually, any help will be greatly
appreciated.
Thanks in advance
Steve
(803) 656-5880
------------------------------
Date: Mon 16 Jun 86 14:00:05-PDT
From: Tom Garvey <Garvey@SRI-AI.ARPA>
Subject: Re: Seminar - Automatic Expert System Induction (NASA Ames)
Does this mean that Cheeseman has at long last implemented something,
or is this going to be more of the same old theoretical maximum
entropy stuff over high-order probability distributions that would
not only eliminate the need for an expert but also make it impossible
for the expert to provide the necessary information? Presumably, an
expert system with no experts is misnamed, and systems for statistical
analysis have been around for a long time.
Cheers,
Tom
------------------------------
Date: Sat, 7 Jun 86 19:00:43 bst
From: gcj%qmc-ori.uucp@Cs.Ucl.AC.UK
Subject: Re: Inside Out
>From: majka@ubc.CSNET.UUCP
>> ...Einstein's theory of general relativity, which models the cosmos
>> as a 4 dimensional pseudo-Riemannian spacetime. ...
>
>*pseudo*-Riemannian? I think you mean semi-Riemannian, and that applies
>to the metric, not the spacetime.
>
>---
>Marc Majka
>
OK, take your pick, but it must be a pseudo/semi-Riemannian spacetime, so
that you can have null distances; i.e. the metric on the manifold must have
differing signs, e.g. (+1,-1,-1,-1), as in Minkowski space. (Note that in GR,
all spacetimes are locally Minkowski.) The manifold must be Hausdorff and
differentiable to arbitrary order, i.e. C-infinity.
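For readers without the differential-geometry background, the signature convention mentioned above corresponds to the familiar Minkowski line element (standard notation, not quoted from the message):

```latex
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
```

which vanishes ($ds^2 = 0$) along light rays, giving the "null distances" between distinct events that a positive-definite Riemannian metric cannot provide.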
Apart from differential geometry, the keyword is *model*. Spacetime is not
any type of Riemannian manifold. Newton did not need differential geometry
to form a model of gravity. Did spacetime suddenly curve when Einstein
discovered general relativity? What is interesting is the leap from the
intuitive idea of the apple falling because it is `pulled' by the earth,
to the non-intuitive idea of the apple falling because nothing holds it up.
It falls along a timelike geodesic, the shortest (4-dim) distance between
two points.
And there is nothing intuitive about quantum chromodynamics, at least not
to me.
Gordon Joly,
ARPA: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Fri, 6 Jun 86 14:10 EST
From: STANKULI%cs.umass.edu@CSNET-RELAY.ARPA
Subject: more inside and out
Another response to phayes AILIST vol 4 # 125 and subsequent replies on the
intuitions of tardis inside and out... particularly Ken Laws' reply. i will
include references to relevant episodes which test the limits of tardis
functions.
the tardis is not a portal to another dimension. Gallifreyan temporal
mechanics are particularly limited to our 4-dimensional universe (they call it
N-space) in its operation. theoretically the time lords can go to any time and
place in N-space but they accept informal constraints they call "time laws"
which they try to enforce to lower the occurrence of paradox phenomena. but
even the high council of time lords will violate these regulations once in a
while at great expense of energy ('the five doctors' peter davison). the
fundamental piece of Gallifreyan technology is called a dimensional stabilizer
which was discovered by Omega and perfected by an engineer called Rassilon.
the metauniverse of dr. who is at least five dimensional. there have been
times when the doctor's tardis has been transported through accident into other
parallel 4D universes where it functions with different precision than in
N-space. the doctor (jon pertwee) had this happen once when repairing the
tardis console and later the tardis was thrown into E-space by a stellar
accident for a number of episodes (tom baker). E-space was a much smaller 4D
universe which was collapsing instead of expanding.
punching holes in the side of a tardis has happened. in 'terminus' (peter
davison) the tardis was breaking apart in transit and attached to the side of
a space vehicle. the doctor and companions came and went from the tardis
through an unstable hole in the wall of nyssa's room. the hole acted just like
a door, but they could not control its opening and closing.
there is no fundamental reason why the inside of a tardis is always larger
than the outside. the relative dimensions of inside and out are uncoupled.
the 'outer plasmic shell' is controlled by a chameleon circuit and can be any
size. the outside could be larger than the inside. tom baker once designed an
exterior the size of the pyramid of cheops but since his chameleon circuit was
broken, it reverted to the police box. the master once had his tardis
materialize around a Concorde SST ('time flight' peter davison). there is no
reason why a tardis outside could not be the size of a shoe box or postage
stamp, except that a humanoid could not exit the craft in such case. an error
in 'logopolis' (tom baker) caused it to become three feet high, trapping the
doctor inside. a tardis can also jettison portions of its interior space in
emergency ('castrovalva' peter davison).
some other interesting properties have arisen in the 20+ year series. if a
tardis is turned over on its side, there is a control which can rotate the
interior so the floor orients with gravity ('time flight'). when a tardis
materializes, it incorporates the space it appears in. the master's tardis
contained the original SST inside his own. a dimensional anomaly arises when
one tardis materializes around another tardis ('logopolis'). the dimensional
stabilizer works by folding one dimension into another-- apparently a
point-for-point mapping mechanism. they call this 'block transfer
computation'. if one tardis incorporates another one, they are both in danger
of losing external reference. since they both contain the same folded space,
they both contain each other. it is possible to walk from the outer one through
the inner one to the outer one... like infinite regression in a hall of
mirrors.
for one of the longest running dramatic series in history, the BBC staff of
writers is to be admired for their conceptual detail in metauniversal design.
their spacetime mechanics have interesting and plausible ramifications on a
different order of magnitude than pure children's fantasy like alice through
the looking glass. the limitations of temporal technology, genetic regeneration,
metalinguistic translation, and even the sonic screwdriver make the series
intriguing beyond the fun of watching.
stan
------------------------------
End of AIList Digest
********************
∂17-Jun-86 2129 LAWS@SRI-AI.ARPA AIList Digest V4 #152
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 17 Jun 86 21:29:40 PDT
Date: Tue 17 Jun 1986 13:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #152
To: AIList@SRI-AI
AIList Digest Wednesday, 18 Jun 1986 Volume 4 : Issue 152
Today's Topics:
Policy - AIList Distribution Mechanisms & LISP Messages,
Techniques - Lisp and Lazy Evaluation,
AI Tools - AI Software for MS-DOS
----------------------------------------------------------------------
Date: 7 Jun 86 20:47:02 GMT
From: cad!nike!caip!seismo!rochester!altman@ucbvax.berkeley.edu
Subject: AIList Distribution Mechanisms
From: Art Altman <altman>
I read reference to "ailist vol xxx" in mod.ai,
but I do not see this ailist appearing in either net.ai or mod.ai.
Anyone know to what network "ailist" is posted?
Is it sent to individuals and should I get on some list to receive it?
Thanks,
Art "altman@rochester"
[The distribution currently includes many channels: direct mail
(to Arpanet and other networks), exploded digests sent to certain
bboard systems, and a hybrid of UUCP mod.ai and net.ai. The first
two are in digest form, with volume numbers that let readers track
whether issues have been missed or refer to issues by number. There
is also a Today's Topics section that previews the digest contents
to aid skimming and later text searches. The UUCP distribution
lacks these niceties and some of the editing and sorting that I
provide as moderator, but offers real-time interchange. It works
as follows.
Net.ai is forwarded to my mailbox. I pull out any messages that look
pertinent and nontrivial and add them to direct submissions in the
AIList mailbox. I select a number of messages and form them into
a digest to be sent to the Arpanet readers. Then I delete the net.ai
messages from the originals and send the direct submissions in
undigested form to mod.ai. The overall effect is that people reading
net.ai plus mod.ai get everything in the digest plus any part of the
net.ai discussion that I ignore. -- KIL]
------------------------------
Date: Wed, 11 Jun 86 21:02:58 edt
From: Jay Weber <jay@rochester.arpa>
Reply-to: jay@rochester.UUCP (Jay Weber)
Subject: Re: Common LISP style standards
I admit that there is a significant overlap between the people
interested in Artificial Intelligence and those interested in the
LISP programming language, but it should be obvious that articles
like "Common LISP style standards" and "LISP for IBM PCs" should
be posted to newsgroups other than mod.ai, and such newsgroups do
exist. This newsgroup has a large amount of traffic, and I expect
that many readers have unsubscribed due to the large number
of inappropriate submissions.
I would mail this message to individuals who do not realize this,
but there have been so many it would not be effective. Mostly
this message is to the moderators, who should be enforcing the
focus of the newsgroup.
Jay Weber
Department of Computer Science
University of Rochester
jay@rochester.arpa
[Unfortunately there are few relevant discussion lists on
the Arpanet side of the gateway. We do have one on workstations
and others on particular micros or Lisps, but nothing of the
required generality. I will be glad to help anyone who wants
to start a list devoted to Lisp or any other topic currently
covered by AIList:
Expert Systems AI Techniques
Knowledge Representation Knowledge Acquisition
Problem Solving Hierarchical Inference
Machine Learning Pattern Recognition
Analogical Reasoning Data Analysis
Cognitive Psychology Human Perception
Natural Language Computational Linguistics
AI Languages and Systems Machine Translation
Theorem Proving Decision Theory
Logic Programming Computer Science
Automatic Programming Information Science
AI & Society Sociology of AI
AI & Business AI Workstations
(Step forward, folks, or I may burn out soon. Besides, it's lots
of fun and it puts you in contact with the best people.) -- KIL]
------------------------------
Date: 06-Jun-1986 1604
From: kevin%logic.DEC@decwrl.DEC.COM (Kevin LaRue -- The Earth makes
one resolution every 24 hours.)
Subject: Re: Lisp & lazy evaluation
The bibliographies contained in the two books
Henderson, Peter,
``Functional Programming: Application and Implementation,''
Prentice-Hall International,
London,
1980.
and
Darlington, J., Peter Henderson and David A. Turner, editors,
``Functional Programming and its Applications: an Advanced Course,''
Cambridge University Press,
Cambridge,
1982.
point to the following historical references:
Burge, W. H.,
``Recursive Programming Techniques,''
Addison-Wesley,
Reading, Massachusetts,
1975.
Friedman, D. P., and D. S. Wise,
`CONS Should Not Evaluate its Arguments,'
in ``Automata, Languages and Programming,''
S. Michaelson and R. Milner, editors,
Edinburgh University Press,
Edinburgh,
1976.
Henderson, Peter and J. M. Morris,
`A Lazy Evaluator,'
in ``Proceedings of the 3rd POPL Symposium,''
Atlanta, Georgia,
1976.
Kahn, G., and D. McQueen,
`Coroutines and Networks of Parallel Processors,'
in ``Information Processing 77,''
North-Holland,
Amsterdam,
1977.
Landin, P. J.,
`A Correspondence between Algol 60 and Church's Lambda Calculus,'
in ``Communications of the ACM,''
Volume 8, number 3,
pages 158-165,
1965.
Vuillemin, J. E.,
``Proof Techniques for Recursive Programs,''
Memo AIM-318, STAN-CS-73-393,
Stanford University,
1973.
You may also want to ask David Turner about his experiences with his
``Miranda'' functional programming environment. Indeed, he is
distributing it, if you would like to play with it yourself. His
electronic address is:
dat%ukc@ucl-cs
(He's currently at the University of Kent at Canterbury.)
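As a rough illustration of the idea behind the Friedman & Wise and Henderson & Morris papers above, here is a minimal sketch of lazy evaluation (in Python rather than the Lisp of the original papers; the class and function names are my own): a cons cell whose tail is a suspended computation, forced only on demand, so infinite structures can be built safely.

```python
class LazyCons:
    """A cons cell whose tail is a thunk, forced on first access."""
    def __init__(self, head, tail_thunk):
        self.head = head
        self._tail_thunk = tail_thunk
        self._tail = None
        self._forced = False

    @property
    def tail(self):
        if not self._forced:          # force the suspension once, then cache it
            self._tail = self._tail_thunk()
            self._forced = True
        return self._tail

def integers_from(n):
    # An "infinite" stream: the tail is not computed until demanded.
    return LazyCons(n, lambda: integers_from(n + 1))

def take(stream, k):
    """Realize the first k elements of a lazy stream as a list."""
    out = []
    while k > 0:
        out.append(stream.head)
        stream = stream.tail
        k -= 1
    return out
```

For example, `take(integers_from(1), 5)` demands only five cells of the conceptually infinite stream.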
------------------------------
Date: 9 Jun 86 02:08:13 GMT
From: ihnp4!lzaz!psc@ucbvax.berkeley.edu (Paul S. R. Chisholm)
Subject: AI software for MS-DOS (long)
< cross posted to affected groups; please followup only to net.micro.pc >
Here's the third and (I hope) last list of artificial intelligence
software for MS-DOS based machines. I started with expert system
shells, then picked up Prolog processors, and Lisp and other languages
found their way in. "Decision support" tools are presumably decision
tree managers; for their relation to expert systems, see the hot and
heavy discussion in net.ai and mod.ai (or actually, the summary I've
posted to those groups).
Thanks to Lou Fried (FRIED@SRI-KL.ARPA) and Dallas Webster
(CMP.BARC@R20.UTexas.Edu or ut-sally!batman!dallas) for additions to
this list.
The names, addresses, phone numbers, and especially prices are not
guaranteed to be free from typos, line noise, or obsolescence. I have no
experience or further information on any of these packages; don't call
me, call the company. On the other hand, if *you* have used any of
these systems, please drop me a line; I'll be happy to summarize and
repost. I'd also like to hear of any products I'd forgotten, or any
errata to my list.
-Paul S. R. Chisholm, UUCP {ihnp4,cbosgd,pegasus,mtgzz}!lznv!psc
AT&T Mail !psrchisholm, Internet mtgzz!lznv!psc@topaz.rutgers.edu
--
Aion Development System: expert system shell, $7000
Aion Corp.
101 University Ave., 4th floor
Palo Alto, CA 94301
415-328-9595
The Decision Maker: decision support, $250
Alamo Learning Systems
Suite 500, 1850 Mt. Diablo Blvd.
Walnut Creek, CA 94596
415-930-8521
Arity Expert System Development Package: expert system shell, $295
Arity Standard Prolog: AI language (Prolog), $95
Arity Prolog Interpreter V4: AI language (Prolog), $350
Arity Prolog Compiler & Interpreter V4: AI language (Prolog), $795
Arity Corp
358 Baker Ave.
Concord, MA 01742
617-371-1243
Prdigy: expert system shell, $450
OPS5+: expert system shell, $3000
Artelligence, Inc.
14902 Preston Rd., suite 212-252
Dallas, TX 75240
214-437-0361
A.D.A Educational Prolog: AI language (Prolog), $29.95
VML Prolog: AI language (Prolog), $300
Automata Design Associates
1570 Arran Way
Dresher, PA 19025
215-646-4894
Micro In-Ate: expert system shell for fault diagnosis, $5000
Automated Reasoning Corporation
290 West 12th St., Suite 1D
New York, NY 10014
212-206-6331
Turbo Prolog: AI language (Prolog), $99.95
Borland International
4585 Scotts Valley Dr.
Scotts Valley, CA 95066
408-438-8400
SpinPro: ultracentrifugation experiment expert system [GCLISP], $2500
(note: a specific expert system, *not* a shell!)
Beckman Instruments, Inc.
Spinco Division
415-857-1150 (sales info); (714)-961-3728 (technical info) Matt Heffron
Xsys: expert system shell, $995
California Intelligence
912 Powell St. #8
San Francisco, CA 94108
415-391-4846
Prolog V: AI language (Prolog), $69.95/$99.95
Chalcedony Software, Inc.
5580 La Jolla Blvd, Suite 126B
La Jolla, CA 92037
617-483-8513
Expert Choice: decision support, $495
Decision Support Software Inc.
1300 Vincent Place
McLean, VA 22101
703-442-7900
Methods: AI language (Smalltalk), $250
Digitalk, Inc.
5200 W. Century Blvd.
Los Angeles, CA 90045
213-645-1082
TOPSCI: expert system shell, $75/$175
Dynamic Master Systems Inc.
PO Box 566456
Atlanta, GA 30356
404-565-0771
Decision Analyst: decision support, $139
Executive Software, Inc.
Bay St.
Shanty Bay, Ontario, CANADA LOL 2LO
705-722-3373
The Idea Generator: decision support, $195
Experience in Software
2039 Shattuck Ave., Suite 401
Berkeley, CA 94704
415-644-0694
ES/P Advisor: expert system shell, $895
Prolog-1: AI language (Prolog), $395
Prolog-2 Interpreter and Compiler: AI Language, $1895
Expert Systems International
1150 First Ave.
King of Prussia, PA 19406
215-337-2300
Xi: expert system shell, $795
Expertech
Expertech House, 172 Bath Rd.
Slough, Berks SLI 3XE, ENGLAND
0753-821321
Portable Software Inc.
650 Bair Island Rd., Suite 204
Redwood City, CA 94063
415-367-6264
(and somebody near Boston at 617-470-2267)
Exsys 3.0: expert system shell, $395
(demo disk for $10?)
Exsys Inc.
PO Box 75158, Contract Sta. 14
Albuquerque, NM 87194
505-836-6676
GEN-X: Expert system shell
General Electric Research and Development Center
Schenectady, NY 12345
TIMM-PC: expert system shell, $9500
General Research
7655 Old Spring House Rd.
McLean, VA 22102
703-893-5900
GCLisp (Golden Common Lisp): AI language (Lisp), $495
286 Developer: AI Language (Lisp), $1195
(expert system shell to be announced in late 1986)
(K-base was a specialized proprietary package, now dead)
Gold Hill Computers
163 Havard St.
Cambridge, MA 02139
617-492-2071
Expert Ease: expert system shell, $695
(example based, forward chaining)
Expert Edge: expert system shell, $795
(rule based, backward chaining, uncertainty, math)
(they also sell 1st Class for $495, same as Programs in Motion)
Human Edge Software
2445 Faber Pl.
Palo Alto, CA 94303
CA: 800-824-7325, elsewhere: 800-624-5227
AL/X: Expert system shell
ALCS: Expert system shell
Inference Manager: expert system shell, 500 pounds
Intelligent Terminals Ltd or George House
15 Canal St. 36 North Hanover St.
Oxford, UK OX26BH Glasgow, Scotland G1 2AD
041-522-1353
(Try Jeffrey Perrone & Associates, 415-431-9562)
Knowol: expert system shell, $39.95/$99.95?
Intelligent Machines Co.
3813 N. 14th St.
Arlington, VA 22201
703-528-9136
KEE: expert system shell
IntelliCorp
1975 El Camino Real W.
Mountain View, CA 94040
415-965-5500
Experteach: expert system shell, $475
Intelliware, Inc.
4676 Admiralty Way, Suite 401
Marina del Rey, CA 90291
213-305-9391
IQLisp: AI language (Lisp), $175
Integral Quality
6265 Twentieth Avenue (or POB 31970)
Seattle, WA 98115
206-527-2918
Savior: expert system shell, 3000 pounds
ISI Limited
11 Oakdene Road
Redhill, Surrey, UK RH16BT
(0737)71327
Ex-Tran: expert system shell, $3000
Jeffrey Perrone & Associates
415-431-9562
KDS: expert system shell, $795 (development), $150 (playback)
KDS II: expert system shell, $945
KDS Corp.
934 Hunter Rd.
Wilmette, IL 60091
312-251-2621
Decision Aide: decision support, $250
Trouble Shooter: decision support, $250
Kepner-Tregoe, Inc.
PO Box 704
Princeton, NJ 08542
609-921-2806
Insight: expert system shell, $95
Insight2: expert system shell, $485
Level 5 Research
4980 S. Highway A1-A
Melbourne Beach, FL 32751
(moved to 503 Fifth Ave., Suite 201, Indialantic, FL 32903?)
305-729-9046
Byso Lisp: AI language (Lisp), $125
Levien Instrument Co.
Sittlington Hill
PO Box 31
McDowell, VA 24458
703-396-3345
Lightyear: decision support, $495
Lightyear, Inc.
1333 Lawrence Expwy., Bldg. 210
Santa Clara, CA 95051
408-985-8811
(may be obsolete; see Thoughtware Inc.)
Daisy: expert system shell
Lithp Systems BV
Meervalweg 72
1121 JP Landsmeer
The Netherlands
Micro-Prolog: AI language (Prolog), $395
Logic Programming Associates
31 Crescent Drive
Milford, CT 06460
203-872-7988
MProlog: AI language (Prolog), $725
Logicware, Inc.
5000 Birch St., West Tower, suite 3000
Newport Beach, CA 92660
416-665-0022
70 Walnut St.
Wellesley, MA 02181
617-237-2254?
Reveal: expert system shell, $4500 ($2000?)
McDonnell Douglas
Knowledge Engineering Products Division
20705 Valley Green Dr.
Cupertino, CA 95014
408-446-7406
MicroExpert: expert system shell, $49.95
McGraw-Hill
PO Box 400
Hightstown, NJ 08520
or 1221 Avenue of the Americas
New York, NY 10020
NY: 212-512-2999, elsewhere 800-628-0004
Guru: integrated software with expert system shell, $3000
Micro Data Base Systems
PO Box 248
Lafayette, IN 47902
317-463-2581
muLisp-85: AI language (Lisp), $250
Microsoft Corp.
10700 Northup Way, Box 97200
Bellevue, WA 98004
206-828-8080
Expert-2: expert system shell, $70
(requires MMSFORTH v2.4, $180)
Miller Microcomputer Services
61 Lakeshore Rd.
Natick, MA 01760
317-653-6136
QTime: expert system shell, $695
MOM Corp.
Two Northside 75
Atlanta, GA 30318
404-351-2902
Expert: expert system shell, $100
(same as MMS Expert-2 above? requires Forth?!)
Mountain View Press
PO Box 4656
Mountain View, CA 94040
415-961-4103
LISP/88: AI language (Lisp), $50
Norell Data Systems
PO Box 70127
3400 Wilshire Blvd
Los Angeles, CA 90010
213-748-5978
UO-Lisp: AI language (Lisp), $150
Northwest Computer Algorithms
PO Box 90995
Long Beach, CA 90809
213-426-1893
ERS: expert system shell
PAR Technology Corp.
220 Seneca Turnpike
New Hartford, NY 13413
XLISP: AI language (object oriented Lisp), $6 (disk 148)
Expert System of Steel: expert system shell, $6 (disk 268)
Esie: expert system shell, $6 (disk 398)
ADA Public Domain Prolog: AI language (Prolog), $6 (disk 405)
(see also Automata Design Associates)
PC-SIG
1030 E. Duane Ave, Suite J
Sunnyvale, CA 94086
408-730-9291; CA 800-235-6647, elsewhere 800-235-6646
(or wherever you get fine public domain software)
Waltz Lisp, $169
ProCode International
15930 SW Colony Place
Portland, OR 97224
503-684-3000
OPS83: expert system shell
Production Systems Technologies, Inc.
642 Gettysburg St.
Pittsburgh, PA 15206
412-362-3117
Micro-Prolog Professional: AI language?, $395
apes: expert system shell [micro-Prolog], $250
Programming Logic Systems
312 Crescent Dr.
Milford, CT 06460
203-877-7988
1st-Class: expert system shell, $20/$495 ($250??)
Programs in Motion, Inc.
10 Sycamore Rd.
Wayland, MA 01778
617-653-5093
Rulemaster/PC: expert system shell, $995
Radian Corp.
8501 Mo-Pac Blvd.
PO Box 9948
Austin, TX 78766
512-454-4797
Small-X: expert system shell, $125/$225
RK Software
PO Box 2085
West Chester, PA 19380
215-436-4570
Knowledge Engineering System II: expert system shell, $4000
Software Architecture & Engineering
1500 Wilson Blvd., suite 800
Arlington, VA 22209
703-276-7910
Wizdom: expert system shell, $1250/$2050
Software Intelligence Lab
1593 Locust Ave.
Bohemia, NY 11716
212-747-9066/516-589-1676
LISP/80: AI language (Lisp), $40
Software Toolworks
15233 Ventura Blvd., Suite 1118
Sherman Oaks, CA 91403
818-986-4885
Xper: expert system shell, $95
Softway
415-397-4666
TransLISP: AI language (Lisp), $75
Prolog-86: AI language (Prolog), $95/$250
Solution Systems
335-P Washington St.
Norwell, MA 02061
617-659-1571/800-821-2492
SeRIES-PC: AI language (Lisp), $5000
SeRIes PC: Expert system shell, $15000
SRI International
Advanced Computer Systems Division
333 Ravenswood Avenue
Menlo Park, CA 94025
415-859-2859; contact Bob Wohlsen, x4408
Q'NIAL: AI language (Nested Interactive Array Language), $395/$995
Starwood Corporation
PO Box 160849
San Antonio, TX 78280
512-496-8037
Microdyn: expert system shell, $300
Stochos
518-372-5426
M.1A: expert system shell, $2000
M1: expert system shell, $5000
KS-300: expert system shell
Teknowledge Inc.
525 University Ave., #200
Palo Alto, CA 94301
415-327-6640
Arborist: decision support, $595
PC Scheme: AI language (Lisp), $95
Personal Consultant: expert system shell, $950
Personal Consultant Plus: expert system shell, $2950
Texas Instruments
PO Box 80963, H-809
Dallas, TX 75380-9063
800-527-3500
Class
Texpert Systems, Inc.
12607 Aste
Houston, TX 77065
713-469-4068
TLC-Lisp: AI language (Lisp), $250
The Lisp Co.
PO Box 487
Redwood Estates, CA 95044
408-426-9400
Lightyear: decision support, $495
The Management Advantage: decision support, $249
Trigger: decision support, $495
Thoughtware, Inc.
Suite 1000a, 2699 S. Bayshore Dr.
Coconut Grove, FL 33133
305-854-2318
PSL: AI language (Portable Standard Lisp), distribution costs ($75?)
The Utah Symbolic Computation Group
Department of Computer Science
University of Utah
Salt Lake City, UT 84112
--
-Paul S. R. Chisholm, UUCP {ihnp4,cbosgd,pegasus,mtgzz}!lznv!psc
AT&T Mail !psrchisholm, Internet mtgzz!lznv!psc@topaz.rutgers.edu
The above opinions may not be shared by any telecomm company.
------------------------------
End of AIList Digest
********************
∂18-Jun-86 0006 LAWS@SRI-AI.ARPA AIList Digest V4 #153
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 18 Jun 86 00:03:17 PDT
Date: Tue 17 Jun 1986 20:53-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #153
To: AIList@SRI-AI
AIList Digest Wednesday, 18 Jun 1986 Volume 4 : Issue 153
Today's Topics:
Literature - AI and Organic Chemistry,
AI Tools - Common Lisp on Silicon Graphics,
Expert Systems - Conditional Independence References,
Algorithms - Traveling Salesman Problem,
Review - Spang Robinson Report Volume 2 No 6,
Philosophy - Creativity and Analogy
----------------------------------------------------------------------
Date: Mon 16 Jun 86 13:24:13-PDT
From: Matt Heffron <BEC.HEFFRON@USC-ECL.ARPA>
Subject: Re: AI & Organic Chemistry
A brand-new book from the American Chemical Society is:
Artificial Intelligence Applications in Chemistry,
Edited by: Thomas H. Pierce and Bruce A. Hohne
ACS Symposium Series #306, published by ACS. 1986
28 chapters in 5 sections: Expert Systems, Computer Algebra, Handling Molecular
Structures, Organic Synthesis, and Analytical Chemistry.
Each chapter is a paper given at an ACS Symposium last September.
-Matt Heffron
BEC.HEFFRON@USC-ECL.ARPA
------------------------------
Date: Sun, 15 Jun 86 13:26:31 PDT
From: Harry Weeks <franz!harry@kim.Berkeley.EDU>
Subject: Common Lisp implementations.
This note is in reply to a recent inquiry on this list for Common Lisp
implementations on Silicon Graphics systems.
Franz Inc. now supports our Extended Common Lisp, as well as Franz Lisp,
on Silicon Graphics workstations. Both products incorporate an
interface to the Iris graphic libraries. Extended Common Lisp is a complete
and robust implementation of the Common Lisp language as specified in
Guy Steele's book `Common Lisp: The Language.' We have added extensions
that include a Symbolics-compatible Flavors system, a foreign-function
interface, and extensive debugging tools. Franz Inc. also supports
Extended Common Lisp on workstations available from ATT, ISI, Masscomp,
Sun, and Tektronix. Inquiries are welcome and may be directed to our
offices at 1141 Harbor Bay Parkway, Alameda, California 94501, (415)
769-5656, ...!ucbvax!franz!info.
Harry Weeks
Franz Incorporated
------------------------------
Date: 12 Jun 86 19:38:00 GMT
From: pur-ee!uiucdcs!uicsl!bharat@ucbvax.berkeley.edu
Subject: Re: Conditional independence in possibility theory
I do not have the references you asked for. However, if you are interested,
these are some other references I found useful relating to conditional
independence and probabilities in EXPERT SYSTEMS.
1. Quinlan J.R.
Inferno : a cautious approach to uncertain inference.
The Computer Journal, 26: 3, 255-269, 1983.
2. Allan P. White
Predictor : An alternative approach to uncertain inference in
Expert Systems.
Proc - IJCAI 1985, Vol.1, 328-330, 1985.
If you need them, please contact me at
bharat@a.cs.uiuc.arpa, or write a note to net.ai
Good luck
R.Bharat Rao
------------------------------
Date: Sat, 14 Jun 86 15:22:01 pdt
From: John B. Nagle <jbn@su-glacier.arpa>
Subject: Known solution to traveling salesman problem
There is a well-known and fast method for finding near-optimum
solutions to the traveling salesman problem. It was discovered at
Bell Labs in the 1960s, and it is as follows:
1. Connect up all N points in some arbitrary order,
resulting in a path with N-1 edges and two endpoints.
2. Pick two edges at random. Cut the path at these points.
This produces three paths, each with two endpoints.
3. There are six possible ways to connect the paths into
a single path. Try all six, and compute the total
distance for each arrangement. Keep the arrangement
with the shortest total length.
4. Iterate steps 2 and 3 until no improvement is observed
for a reasonable number of iterations, at least N
but less than N*N.
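The four steps can be sketched in code (an illustrative reconstruction in Python, not the original Bell Labs implementation; the helper names and iteration count are my own, and for simplicity the inner loop enumerates every segment order and orientation, a superset of the six distinct single-path rejoinings):

```python
import itertools
import math
import random

def dist(a, b):
    """Euclidean distance between two points (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def path_length(path):
    """Total length of an open path visiting the points in order."""
    return sum(dist(path[k], path[k + 1]) for k in range(len(path) - 1))

def cut_and_rejoin(path):
    """Steps 2-3: cut the path at two random edges, then keep the
    shortest way of reconnecting the three resulting segments."""
    i, j = sorted(random.sample(range(1, len(path)), 2))  # two distinct cut points
    segments = (path[:i], path[i:j], path[j:])
    best, best_len = path, path_length(path)
    for order in itertools.permutations(segments):
        for flips in itertools.product((False, True), repeat=3):
            cand = []
            for seg, flip in zip(order, flips):
                cand.extend(reversed(seg) if flip else seg)
            cand_len = path_length(cand)
            if cand_len < best_len:
                best, best_len = cand, cand_len
    return best

def near_optimal_path(points, iterations=None):
    """Step 1: connect the points in arbitrary order; step 4: iterate
    the cut-and-rejoin move (here a fixed N*N iterations by default)."""
    path = list(points)
    for _ in range(iterations or len(points) ** 2):
        path = cut_and_rejoin(path)
    return path
```

Since each cut-and-rejoin move keeps the current path unless a strictly shorter arrangement is found, the path length never increases, but (as the message notes) the result is near-optimal rather than optimal.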
I strongly suspect that the neural nets people have just rediscovered
this classic algorithm, especially since the Business Week article
mentions that the neural net approach produces near-optimal, not
optimal, paths. Comparisons with the brute-force solution are
misleading.
John Nagle
[While the Hopfield-net solution may well be based on similar
mathematics, the flavor is quite different. It is more of a
parallel "relaxation" process or fuzzy linking, with each node
trying to link to neighbors in proportion to their nearness.
Hopfield describes this as an analog process that cuts through
the space of possibilities instead of moving around the outside
as the iterative solutions do. The net quickly approaches a
stable configuration of intersecting cliques (if that's not a
contradiction) separated by longer paths, then the cliques fight
it out to determine the final route. (The establishment of one
clique disrupts others, so a slow gradient search for the optimum
is necessary.) The lack of guaranteed optimality is primarily due
to the initial rapid convergence -- it is possible to construct
problems for which the true optimum is quite far from any broad
"potential well" that would attract the system. Some algorithms
use randomized "stochastic annealing" to get around this, others
start the process many times from very different initial conditions,
others just ignore the problem.
For an interesting study of one such problem, see the Spring 1985 issue
of Abacus. It presents a lengthy analysis of Lee Sallows' custom-built
hardware for solving pangram puzzles by full search, then a short article
by John Letaw showing how the same puzzles can (usually!) be solved by
approximation/optimization on a microcomputer running BASIC. -- KIL]
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Report, Volume 2 No 6
Summary of Spang Robinson Report, Volume 2 NO. 6
June 1986
Emphasis on AI and Parallel Processing:
There are 28 companies marketing parallel hardware, with 900 machines installed,
for total revenues from 1985 to mid-1986 of $160 million.
Alliant Computer Systems is working with Stanford and Lucid, Inc. in a
DARPA funded project to develop a public domain LISP for parallel applications
called QLISP.
Control Data is working with the University of Georgia to develop a parallel
Prolog and after that a parallel Lisp.
Flexible Computer claims that 30 to 40 percent of its customers are interested
in AI.
Concurrent Lisp from Golden Common Lisp has been benchmarked on the Gabriel
Triangle Benchmark at 86 percent of the speed of a Xerox 1108 Dandelion
using one node of the IPSC hypercube. On a sixteen node hypercube, it
runs at 9.1 times the speed. INTEL says that 25% of 1000 queries were
oriented to AI.
LISP Machine announced that it intends to have its Object LISP running
on the INTEL hypercube by the end of May.
Sequent Computer says that 10 to 15 percent of its customers based
their decision buying decision on the availability of LISP, 50% were
interested in AI.
__________________________________________________________________________
Japan Watch:
Arthur D. Little's Japan affiliate reported the results of a survey
of twenty Japanese companies. The US has more than a five-year lead over Japan
in AI, but the gap will narrow with time. They predict the catchup will
be completed by 1992. There was a twelve-to-one differential in the US's favor
in funds invested in AI up to 1985.
The Japanese AI market in 1985 was $80 million, while the American
market was $412 million.
Kansai Electric Power has been developing a diagnostic expert system for use
with nuclear reactors, with the prototype finished by March 1986. Kyushu
Electric Power Company is field testing an expert system for diagnosis
and repair of electric power systems. Tokyo Electric Power Co., Inc.,
Hitachi Ltd and Mitsubishi Electric Corporation are working on expert systems
for supply and demand for power and for planning system operations.
Nippon Telephone and Telegraph will officially announce KBMS, an expert
systems tool. NTT is negotiating with other companies for collaboration
in the development of AI software.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
AI at IBM:
Dr. Herbert Schorr, Group Director for Products and Technology at IBM,
stated that IBM does not plan to release a dedicated LISP machine or AI
workstation. It considers its RT machine to be the IBM AI workstation.
He claims that Carnegie Mellon has benchmarked this machine and that
it compares favorably with other AI languages and hardware.
Most of IBM's efforts in developing expert systems are for internal
applications and it does not see the need to compete with those already
providing such products. There are 70 expert systems under development
at IBM with 24 more to be added.
------------------------------
Date: Fri, 13 Jun 86 13:44:40 bst
From: Gordon C Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Re: Creativity and Analogy -- More Questions than Answers.
Uttam Mukhopadhyay asks, in AIList Vol 4 #148 :-
>Is there more to creativity than making interesting analogies? I am
>inclined to believe that making interesting analogies is at the heart
>of all intelligent activity that is described as creative.
Hmmm... A friend described another friend as a potentially good novelist,
because ``she always has a radically different view of the situation;
she always has a new angle''. But is there analogy tucked away in her
reasoning? And would we be able to elicit that knowledge from the
`expert'?
Finally, *is* creativity always intelligent, and in what sense of the
word -- AI, machine intelligence or human intelligence? As for analogy,
we always need hooks to hang ideas on, don't we?
Gordon Joly
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%MATHS.QMC.AC.UK%CS.QMC.AC.UK@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Fri, 13 Jun 86 14:32:36 bst
From: Gordon C Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Re: Creativity and Analogy -- Coda.
``To the extent that a professor of music at a conservatoire
can assist his students in becoming familiar with the patterns
of harmony and rhythm, and with how they combine, it must be
possible to assist students in becoming sensitive to patterns
of reasoning and how they combine. The analogy is not far-fetched
at all. -- Dijkstra.''
From -- `Knowledge-Based Systems in Artificial Intelligence'
by Randall Davis and Douglas B. Lenat, McGraw-Hill, 1982, page 163.
Gordon Joly
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%MATHS.QMC.AC.UK%CS.QMC.AC.UK@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Mon, 16 Jun 86 10:41:48 edt
From: Jay Weber <jay@rochester.arpa>
Reply-to: jay@rochester.UUCP (Jay Weber)
Subject: Re: Creativity and Analogy
> At a recent talk in Ann Arbor, Roger Schank observed/implied that
>a distinct characteristic of many creative people is the ability to
>analogize. My understanding of analogizing is to define transformations
>between two domains so that entities and relationships in one domain
>can be mapped into corresponding entities and relationships in the
>other domain. It appears that the greater the disparity in the "physics"
>of the two domains, the higher is the creative effort demanded.
> Not all transformations produce interesting results. Good analogies
>must be interesting from the perspective of the particular creative
>activity.
True. Every pair of "things" is analogous in *some* sense, i.e. there
exists a mapping between them. The utility of an analogy is how it
leads one to use those things more successfully.
> Is this model of creativity--making interesting analogies--valid
>across the spectrum of creative activities, from the hard sciences
>(Physics, Chemistry, etc.) to the fine arts (painting, music)?
>Is there more to creativity than making interesting analogies? I am
>inclined to believe that making interesting analogies is at the heart
>of all intelligent activity that is described as creative.
I believe that one could give a reasonable definition of analogy that
encompasses all intelligent activity, or at least inductive learning
(which is a biggie as far as intelligence goes). I question, however,
how useful it is in AI to relate a slippery word like "analogy" to an
even slipperier word like "creativity". A formal approach with those
two terms will satisfy very few people, and an informal approach will
only give us an inflated opinion of the value of our own research,
which is largely why people make such comparisons.
Jay Weber
Department of Computer Science
University of Rochester
jay@rochester.arpa
------------------------------
Date: Mon, 16 Jun 86 16:56:06 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Creativity and Analogy
This is a brief reply to U. Mukhopadhyay's article.
> [...]
> Is this model of creativity--making interesting analogies--valid
> across the spectrum of creative activities, from the hard sciences
> (Physics, Chemistry, etc.) to the fine arts (painting, music)?
> Is there more to creativity than making interesting analogies? I am
> inclined to believe that making interesting analogies is at the heart
> of all intelligent activity that is described as creative.
"Creativity" is often idealized as the missing ingredient in computer
consciousness, but what exactly does it mean? In most of the examples
drawn from science, it means advantageously overriding the usual
categories and compartments, since categorizing and compartmentalizing
knowledge are characteristically scientific habits. Of course, making
analogies is one way to achieve this.
In art, creativity is much more straightforward! One creates a work
of art where there was none before. The essence of this kind of
creativity is to be able to perceive what ←is not.← This follows
in an essential way from the ability to perceive what one is taking
for granted, in order to stop taking it for granted.
A good reference is F. Perls et al., ←Gestalt Therapy.←
------------------------------
Date: Tue 17 Jun 86 12:33:35-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Artistic Creativity
From Col. Sicherman:
"In art, creativity is much more straightforward! One creates a
work of art where there was none before."
While there is truth to this, I disagree with the implication that
art, or certainly that >>all<< art, is pure creation. Most examples
that I have seen are transformations. The artist sees a scene,
technique, or concept that intrigues him, and searches for a way
to capture the same thing in a new medium. This is analogy in a
pure form, not the opposite of analogy.
-- Ken Laws
------------------------------
End of AIList Digest
********************
∂23-Jun-86 0128 LAWS@SRI-AI.ARPA AIList Digest V4 #154
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 Jun 86 01:27:52 PDT
Date: Sun 22 Jun 1986 23:15-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #154
To: AIList@SRI-AI
AIList Digest Monday, 23 Jun 1986 Volume 4 : Issue 154
Today's Topics:
Seminars - Motion Planning (GMR) &
Unifying Principles of Machine Learning (UPenn) &
Parallel Execution of Logic Programs (UTexas) &
Why Planning Isn't Rational (SRI) &
Symbolic Representation of Waveforms (CMU)
----------------------------------------------------------------------
Date: Mon, 16 Jun 86 15:26 EST
From: "Steven W. Holland" <HOLLAND%RCSMPA%gmr.com@CSNET-RELAY.ARPA>
Subject: Seminar - Motion Planning (GMR)
Seminar at General Motors Research Laboratories, Warren, Michigan (GMR):
Motion Planning: a Survey of the State of the Art
Joseph O'Rourke
Department of Computer Science
Johns Hopkins University
Monday, June 23, 1986
Abstract
Motion planning from the viewpoint of computational geometry is the problem
of moving an object (a robot hand, for example) from an initial to a final
position in the presence of fixed obstacles. A large number of algorithms
have been developed for various cases of this problem recently. I will
describe the two main paradigms for solving this problem, growing
algorithms and Voronoi diagram algorithms, and survey the known results.
Several special cases will be discussed, including moving a disk, moving a
ladder, moving through a door, and moving around a corner. I will also
touch on the more complex problem of moving through an environment which is
not itself fixed, for example, one that contains several independently
moving robots.
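The "growing" paradigm mentioned above reduces disk-robot planning to
point planning: inflate every obstacle by the robot's radius, then
search whatever free space remains. A minimal grid-based sketch of the
idea (my own illustration, not code from the talk):

```python
from collections import deque

def grow_obstacles(grid, radius):
    """Inflate blocked cells by the robot's radius (Chebyshev metric),
    so a disk robot can then be planned for as a single point."""
    rows, cols = len(grid), len(grid[0])
    grown = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            grown[rr][cc] = True
    return grown

def shortest_path_length(grid, start, goal):
    """Breadth-first search over free cells; returns the number of
    steps in a shortest path, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and not grid[rr][cc] and (rr, cc) not in seen):
                seen.add((rr, cc))
                queue.append(((rr, cc), dist + 1))
    return None
```

The computational-geometry versions grow polygonal obstacles exactly,
via a Minkowski sum, rather than on a grid; the grid form just makes
the reduction concrete.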
Joseph O'Rourke has been Assistant Professor at Johns Hopkins University
since receiving the Ph.D. degree in Computer Science at the University of
Pennsylvania in 1980. His dissertation research was in computer vision,
and he has published in pattern recognition, but now his research is
focused on computational geometry. O'Rourke is a NSF Presidential Young
Investigator.
-Steve Holland, Computer Science Department
------------------------------
Date: Wed, 18 Jun 86 11:08 EDT
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Unifying Principles of Machine Learning (UPenn)
CIS Colloquium
3 p.m. Thursday, June 19, 1986
216 Moore School, University of Pennsylvania
MACHINE LEARNING: UNIFYING PRINCIPLES AND RECENT PROGRESS
Ryszard S. Michalski
University of Illinois
Machine learning, a field concerned with developing computational theories of
learning and constructing learning machines, is now one of the most active
research areas in artificial intelligence. An inference-based theory of
learning will be presented that unifies the basic learning strategies. Special
attention will be given to inductive learning strategies, which include
learning from examples and learning from observations and discovery.
We will show that inductive learning can be viewed as a goal-oriented and
resource-constrained inference process. This process draws upon the learner's
background knowledge, and involves a novel type of inference rules, called
inductive inference rules. In contrast with truth-preserving deductive rules,
inductive rules are falsity-preserving.
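One concrete inductive inference rule is generalizing attribute
vectors to their least general generalization. The hypothesis entails
(covers) its examples, which is the sense in which induction is
falsity-preserving: an instance the hypothesis fails to cover cannot
have been among the premises. A toy sketch (my own illustration, not
from the abstract):

```python
def lgg(examples):
    """Least general generalization of equal-length attribute tuples:
    keep the attribute values shared by all examples and replace the
    rest with the wildcard '?'."""
    gen = list(examples[0])
    for example in examples[1:]:
        for i, value in enumerate(example):
            if gen[i] != value:
                gen[i] = '?'
    return tuple(gen)

def covers(gen, instance):
    """A generalization covers an instance if every non-wildcard
    attribute matches."""
    return all(g == '?' or g == v for g, v in zip(gen, instance))
```

For example, the lgg of ("red", "round", "small") and ("red",
"square", "small") is ("red", "?", "small"), which covers both
examples but not ("blue", "round", "small").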
Several projects conducted at our AI Laboratory at Illinois will be briefly
reviewed, and illustrated by examples from implemented programs.
------------------------------
Date: 19 Jun 86 17:14:14 GMT
From: ucbcad!nike!lll-crg!seismo!ut-sally!leung@ucbvax.berkeley.edu
(Clement Leung)
Subject: Seminar - Parallel Execution of Logic Programs (UTexas)
AN ABSTRACT MACHINE BASED EXECUTION MODEL FOR COMPUTER ARCHITECTURE DESIGN
AND EFFICIENT IMPLEMENTATION OF LOGIC PROGRAMS IN PARALLEL
Manuel V. Hermenegildo
Dissertation Defense
The University of Texas at Austin
Department of Electrical and Computer Engineering
June 20, 1986 - 11:00 am - ENS431
Parallel execution represents one way in which the execution speed of logic
programs can be increased beyond the limits of conventional systems.
However, most proposed parallel logic programming systems lack the
optimizations and storage efficiency of high-performance sequential
implementations.
A parallel execution model for logic programs will be presented which is
based on extending to a parallel environment the techniques introduced by
the "Warren Abstract Machine", which have already made very fast and space
efficient sequential systems a reality. Therefore, the model is capable of
retaining sequential execution speed similar to that of current high
performance systems, while extracting additional gains by efficiently
supporting parallel execution. The model is described down to the Abstract
Machine level, specifying data areas, operation, and a suitable instruction
set. Several techniques are introduced which offer efficient solutions to
areas of parallel Logic Programming implementation previously considered
problematic or a source of considerable overhead, such as the specification
of control and management of the execution tree, the detection and handling
of variable binding conflicts in AND-Parallelism, support for "don't know"
non-determinism, treatment of distributed backtracking, and goal scheduling
and memory management issues. These claims are supported by simulations.
------------------------------
Date: Thu 19 Jun 86 17:22:52-PDT
From: Amy Lansky <LANSKY@SRI-AI.ARPA>
Subject: Seminar - Why Planning Isn't Rational (SRI)
WHY PLANNING ISN'T RATIONAL
Terry Winograd (TW@SAIL)
Stanford University
(Computer Science, Linguistics, and CSLI)
11:00 AM, MONDAY, June 23
SRI International, Building E, Room EK242 (note room change)
Orthodox AI approaches to describing and achieving intelligent action
are based on a "rationalistic" tradition in which the focus is on a
process of deducing (using a representation of some kind) the
consequences of specific acts (operations) and searching for a sequence
of acts that will lead to a desired result (goal). This works
reasonably well for some limited domains, but falls far short of being a
general theory of intelligent action. It does not work well in the
small (how I operate my finger muscles, or where an amoeba slithers), or
in the large (how I conduct my life or where my research is headed).
Even in the cases of clearly explicit rational planning (e.g. planning a
bank robbery), the relation between plan and execution is not easy to
capture (what happens when the teller sneezes?).
In a recent book written jointly with Fernando Flores, I have proposed a
different basis for looking at action and cognition, focussing on the
"thrownness" of action without reflection, and on the open-endedness of
interpretation. Any alternative such as ours must address several
obvious questions:
Why is the naive view of rational decision-making and action so
intuitively plausible if it isn't right?
How can we account for the evolution of complex behavior which is
effective in an environment?
What implications does it have for AI and the design of computer
systems in general?
I will address these questions and related ones, focussing on some
different issues from those raised in my talk to CSLI a couple of weeks
ago on "Why language isn't information".
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: 18 Jun 86 18:59:25 EDT
From: Dave.McKeown@maps.cs.cmu.edu
Subject: Seminar - Symbolic Representation of Waveforms (CMU)
Monday June 23 1:30pm 5409 Wean Hall
A Symbolic Representation of Waveforms Using Multi-Resolution Analysis
Dr. Aviad Zlotnick
Department of Mathematics and Computer Science
Hebrew University, Israel
A multi-resolution technique for ``qualitative'' analysis of waveforms was
suggested by Witkin in 1983, and has since been studied extensively, both
in theory and in practice. In the first part of the talk we reconsider
Witkin's definition of qualitativity and outline a few weaknesses of his
method. In the second part we describe a representation based on an
alternative definition of qualitativity. We show that our method results
in waveform descriptions which are nearer to human intuition, are easier
to compute and can incorporate more domain knowledge. Furthermore, a
symbolic (verbal) description of waveforms derived from this representation
is shown to capture the waveforms' essential visual properties.
If you'd like to talk with Aviad while he is here on the 23rd, please
send mail to Dave McKeown@a.
------------------------------
End of AIList Digest
********************
∂23-Jun-86 0312 LAWS@SRI-AI.ARPA AIList Digest V4 #155
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 23 Jun 86 03:12:18 PDT
Date: Sun 22 Jun 1986 23:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #155
To: AIList@SRI-AI
AIList Digest Monday, 23 Jun 1986 Volume 4 : Issue 155
Today's Topics:
Queries - AI Tools Survey & Financial Expert Systems Survey &
Recognition Software & HYDRO & ES Shell & Stereo Vision,
Expert Systems - Validation and Verification,
Resources - Common Lisp Discussion List
Philosophy - Metaphilosophy and Computer Ethics,
Psychology - Doing AI Backwards
----------------------------------------------------------------------
Date: Tue, 17 Jun 86 19:23:19 PDT
From: heher%ford-scf1.arpa@ford-scf1.arpa (Dennis Heher)
Subject: Request for AI Tools Survey
I heard that there is an unclassified report
available that compares all of the commercial
AI tools (KEE, Knowledge Craft, ART, etc.).
This report was supposed to have been generated
at/for Wright-Patterson Air Force Base.
Does anyone have any information (title, report
number, where I can obtain a copy) on such a
report?
Thanks,
Dennis Heher
heher@ford-scf1.arpa
Ford Aerospace & Communications Corporation
1260 Crossman Avenue
Sunnyvale, California 94089
(408) 743-3944
------------------------------
Date: 16 Jun 86 16:30:00 GMT
From: pur-ee!uiucdcs!convex!ti-csl!dbdavis@ucbvax.berkeley.edu
Subject: Financial Expert systems survey
I'm looking for a list of systems/software houses that are active
in the development and/or marketing of financial expert systems.
I'd also be curious to know what ( if anything ) the major insurance
companies are up to in terms of in-house development of expert
systems - which companies, and what applications ( risk assessment,
etc. ).
Any help is greatly appreciated. The info will be used as part of a
market survey I'm doing for a class.
--db davis
------------------------------
Date: 19 Jun 86 15:18:21 GMT
From: ihnp4!iwvae!gph@ucbvax.berkeley.edu (haberl)
Subject: AI routines
This is my first time posting to the net. I am doing some research on
Artificial Intelligence processes (voice recognition, text recognition, and
handwriting recognition). If any of you AI wizards can provide me a
reference on where I can find some routines to provide these services, it
would be much appreciated. The references I need are either information
on public-domain software or information on companies that sell software
like this.
THANXS.......
Gregory P. Haberl (312) 979-7072 or (303) 691-4993
Technocrats, Inc.
Po Box 2238 Don't Yet Pathway for return
Littleton, Co 80161
------------------------------
Date: Thu, 19 Jun 86 10:33:30 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: HYDRO
Does anyone have any information on the HYDRO system,
a water resources management expert system?
Many thanks in advance,
Gordon
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%UK.AC.QMC.MATHS%UK.AC.QMC.CS@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj
[I'll send Gordon the reference to John Reiter's SRI work
that was included in AIList V4 #141, June 18. -- KIL]
------------------------------
Date: Fri, 20 Jun 86 07:20 CDT
From: Araman@HI-MULTICS.ARPA
Subject: Expert System shell needed for thesis:
I'm doing a Master's thesis concerning combining an expert system shell
and a DBMS. To do this, I need to work with modifying some shell. If
anyone out there has a small, frame or rule based expert system
shell written in LISP or C, and you're willing to give away a copy in
the name of furthering science, send a message to: ARAMAN -at
HI-MULTICS.ARPA Thanks a lot! Sam Levine
------------------------------
Date: Fri, 20 Jun 86 08:11:53 mdt
From: crs%f@LANL.ARPA (Charlie Sorsby)
Subject: Vision Request
I am getting started on a Master's Thesis in the general
area of Computer Vision. Since [...] vision and AI appear
to overlap considerably, I'm trying AIList. I apologize if
this is not an appropriate medium for the following request.
I would sincerely appreciate any pointers to literature and
current research in this area and particularly in the area
of stereo vision. I have Computer Vision by Ballard &
Brown, the vision section of the Handbook of AI, Image
Understanding 1984, Ullman & Richards, eds. and a few
papers that I've found.
What are the current hot research areas in this field?
What, in your opinion, are the most important problems to be
solved? What, aside from stereopsis and range-finding are
options for depth-information recovery? What, currently,
appears to be the best method of obtaining this information
fast enough to be useful?
Is any research being directed at the possibility of real-
time stereo vision? What are your opinions of its feasibil-
ity? Its value?
If any of you have papers that you would be willing to
share, my mailing address is:
Charlie Sorsby
Los Alamos National Laboratory
Post Office Box 1663 MS-J957
Los Alamos, NM 87545
Opinions are welcome and please also mention if I may quote
you or if you prefer that I don't. I would also welcome
suggestions for other lists where it may make sense to make
this request. [Vision-List@ADS is known. --KIL]
While I try to follow the network as time permits, I would
appreciate it if you could mail information to me by way of
one of the paths in my signature.
I will happily summarize any information that I receive and
post it to AIList.
Charlie Sorsby
...{cmcl2, ihnp4, ..}!lanl!crs
crs@lanl.arpa
[I generally forward vision items to Vision-List (and did this time),
but am permitting this message as a favor to Charlie. AI-related
discussion of vision (e.g., for autonomous navigation) is pertinent
to this list, but discussion of particular algorithms would generally
not be. -- KIL]
------------------------------
Date: Fri, 20 Jun 86 14:17 PDT
From: Tom Garvey <GARVEY@SRI-AI.ARPA>
Subject: Re: Expert System Validation and Verification
I think the notion of V&V for expert systems highlights a number
of points about the field. First, in the words of David Mizell
(formerly of ONR), "AI is being overbought." People that should know
better are taking an attitude that there are sufficient useful AI
systems out there that we should be concerned with formal notions of
their capabilities. In point of fact, AI is very much a research topic
(I almost said science), and for most problems we are struggling to find
any solution at all, much less one that will be operationally useful and
verifiable.
The traditional rationale for attempting an "AI" solution to a
problem is that we don't know how to solve the problem directly (if we
did, why screw around), or that our problems come from a large class of
ill-specified problems where flexibility in the problem-solving approach
is of paramount importance (otherwise, ...). AI approaches typically
involve non-deterministic processes such as context-sensitive search
(frequently in large, ill-structured knowledge-bases), and their
performance is therefore extremely difficult to describe much less
quantify. (We don't do a very good job of V&V on deterministic systems
yet.)
Even statistical validation (i.e., try a million random test
cases and measure resulting performance) will be questionable, as
characterizing an appropriate set of test cases spanning the range of
possible or likely inputs will be extremely difficult.
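That caveat is easy to demonstrate: a Monte Carlo accuracy estimate is
only as trustworthy as the distribution the test cases are drawn from.
A small sketch with a hypothetical system and reference oracle, purely
for illustration:

```python
import random

def estimate_accuracy(system, oracle, sample_input, n=10000, seed=0):
    """Statistical validation: compare the system against a reference
    oracle on n randomly drawn inputs.  The estimate says nothing
    about inputs the sampler never generates."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        x = sample_input(rng)
        if system(x) == oracle(x):
            correct += 1
    return correct / n
```

Validate the flawed system system(x) = x against the oracle abs(x)
with inputs drawn only from [0, 1] and the estimate reports perfect
accuracy; draw from [-1, 1] instead and it falls to about one half.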
At this point, I view most expert system development as not much
more than programming in a new language. The language offers ease of
specification and representation of certain types of information (oops,
knowledge), but does not lend itself well to either V&V, maintenance, or
robust operation. To the extent that we use expert system developments
to help understand and structure problems, these shortcomings are not
too significant; to the extent that we view the systems as the
solutions themselves, the shortcomings are significant.
All this doesn't help your quest much, but perhaps it will help
lower your expectations.
Cheers,
Tom
------------------------------
Date: Sun, 22 Jun 86 15:05 EDT
From: Brad Miller <miller@UR-ACORN.ARPA>
Subject: Lisp Discussion List
Unfortunately there are few relevant discussion lists on
the Arpanet side of the gateway. We do have one on workstations
and others on particular micros or Lisps, but nothing of the
required generality. ... -- KIL
[...]
Note that there IS a common-lisp mailing list <common-lisp@su-ai.arpa>,
though it is for language definition purposes.
Brad Miller
University of Rochester
miller@rochester.arpa
------------------------------
Date: Mon, 16 Jun 86 16:06:29 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Computer Ethics (from Risks Digest)
I have a few comments on ←Metaphilosophy,← as summarized by Bruce Sesnovich:
> The introductory article, James H. Moor's "What is Computer Ethics," is
> an ambitious attempt to define Computer Ethics, and to explain its
> importance. According to Moor, the development and proliferation of
> computers can rightly be termed "revolutionary": "The revolutionary
> feature of computers is their logical malleability. Logical
> malleability assures the enormous application of computer technology."
> Moor goes on to assert that the Computer Revolution, like the
> Industrial Revolution, will transform "many of our human activities and
> social institutions," and will "leave us with policy and conceptual
> vacuums about how to use computer technology."
"Logical malleability" sounds vague to me. If it's just an abstract
phrase for programmability, then I think Moor neglects the real signi-
ficance of computers: that (unlike machines) they accept differing input,
and produce differing output.
I agree fully that computers will cause revolutions. But this talk of
"conceptual vacuums" is born of unavoidable myopia. None of our present-
day prognosticators have shown any serious understanding of the future,
except a few science-fiction writers whom nobody takes seriously. I
suggest that posterity will regard ←us← as the "vacuum" generation,
of an age "when nobody knew how to use computer technology."
> An important danger inherent in computers is what Moor calls "the
> invisibility factor." In his own words: "One may be quite
> knowledgeable about the inputs and outputs of a computer and only dimly
> aware of the internal processing." These hidden internal operations can
> be intentionally employed for unethical purposes; what Moor calls
> "Invisible abuse," or can contain "Invisible programming values":
> value judgments of the programmer that reside, insidious and unseen, in
> the program.
Here Moor appears to be about 30 years behind McLuhan. Try this: "One may
be quite knowledgeable about reading and writing and only dimly aware of
the details of book production and distribution." Or this: "One may be
quite knowledgeable about watching TV and only dimly aware of the physics
of broadcasting." Isn't it rather naive to think that the hidden values
of the computer medium lie in if-tests and do-loops?
To quote one of McLuhan's defocussed analogies: "You must talk to the
medium, not to the programmer. To talk to the programmer is like
complaining to the hot-dog vendor about how badly your team is playing."
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: csdsicher@sunyabva
------------------------------
Date: 15 Jun 86 09:08:28 GMT
From: ernie.Berkeley.EDU!tedrick@ucbvax.berkeley.edu (Tom Tedrick)
Subject: Doing AI Backwards (continued)
More on "Doing AI Backwards"
(I can't bear to do anything in a normal way :-)
The exact, concrete nature of models of computation allows
a certain clarity to exist, which was not easily experienced
previously.
Hence, when these models apply to other fields, we may find
a new clarity that was previously lacking.
For example, studying computational complexity has made it
clear that memory can be an expensive resource, and that efficient
use of memory is of great importance.
Now, can we use this insight to better understand certain phenomena
outside the field of computational devices?
I suggest that memory is also a scarce resource when we take the
human mind as our object of study.
Example: Suppose I am asked to pick up a carton of milk at the
grocery store after work. For some reason this kind
of request irritated me for years, yet I could not
quite pin down the reason for my irritation. I did
not mind walking to the store, spending the money, etc.
It turns out that what bothers me is the use of my
memory to store the request. Thus for the rest of
the day I have less space in my short term memory
for thinking about research, etc. All my work was
made less productive by this misuse of space in
memory.
Hence the individual who asks such seemingly small
favors may be really imposing a heavy cost on his victim.
From a catastrophe theory point of view, we might also
suggest the danger of less efficient thinking due to
reduced space available in memory being magnified into
some larger catastrophe.
Another thing that is clear from studying computational complexity
is that certain problems take more computing time than others.
What insight can we gain about the behavior of the human mind
from this simple idea?
Well, suppose someone asks you a question, expecting a simple
yes or no answer. (Supposedly the truth is simple, so why should
you need to think about the question?)
But suppose you have greater insight into subtle problems posed by
the question than the questioner does. But you need time to
think about it. (By knowing about computational complexity, you
wisely realize that your brain needs to use a few cycles to
figure out what to say.)
Some possibilities:
(1). You answer immediately anyway, yes or no. Then later
one of the subtleties may come back to haunt you, as
the (dumb) questioner comes back to you saying "Well
you said yes, now you are trying to squirm out of it,
you no good scum." Or, "You were not honest with me,
you devious jerk", when you are unable to live up to
your word.
(2). You think for awhile. Then the questioner may think
"Boy is this guy dumb. Can't even answer a simple
question." Or, "This guy is trying to come up with
some kind of a line so as to pull a fast one on me."
Or he may say, "ANSWER THE QUESTION! YES OR NO!"
if he is on a power trip, like, say, a Senate Investigator.
In any case, asking questions and expecting an immediate response,
saying "If you were honest you would not hesitate to answer" is
clearly unfair.
OK, now you can start flaming. But please, for a change, attack what
I said instead of sending hate mail attacking me as an individual.
I have no interest in receiving hate mail.
------------------------------
End of AIList Digest
********************
∂25-Jun-86 0116 LAWS@SRI-AI.ARPA AIList Digest V4 #156
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 Jun 86 01:16:06 PDT
Date: Tue 24 Jun 1986 23:13-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #156
To: AIList@SRI-AI
AIList Digest Wednesday, 25 Jun 1986 Volume 4 : Issue 156
Today's Topics:
Queries - Expert System Applications Products & Graph Drawing Program &
Image Analysis Expert System,
Expert Systems - Image Analysis & Financial Expert Systems,
Representation - Function and Form,
Philosophy - Creativity and Analogy
----------------------------------------------------------------------
Date: Tue 24 Jun 86 12:42:41-PDT
From: Matt Heffron <BEC.HEFFRON@USC-ECL.ARPA>
Subject: Query: Expert System Applications Products
As one of the developers of Beckman Instruments' SpinPro (TM) expert system, I
am interested in finding out about any other Expert System Applications
(NOT shells) which are actual, delivered products (especially any which run
on PCs). I'm more interested in those which are marketed openly, rather than
custom projects for a single customer.
Reply directly to me and I will post a summary of replies.
Thanks,
Matt Heffron BEC.HEFFRON@USC-ECL.ARPA
SpinPro (TM) is a trademark of Beckman Instruments, Inc.
------------------------------
Date: Tue, 24 Jun 86 21:57:06 PDT
From: larus@kim.berkeley.edu (James Larus)
Subject: Wanted: Graph Drawing Program
I need a program to display directed, cyclic graphs on a Symbolics 3600.
Does anyone have such a program that I could use? Either the program or
rumors of such a program would be appreciated.
/Jim
------------------------------
Date: 24 Jun 86 16:09:51 GMT
From: ucdavis!deneb!524789610rmd@ucbvax.berkeley.edu (524789610rmd)
Subject: Image Analysis Expert System
We are trying to develop an expert system for the recognition of
white blood cells and have a need for a suitable inference engine. We
have considered using EMYCIN; however, it is difficult to get and we are
not sure it will work correctly with our system. Basically, we will be
using conventional image analysis techniques to extract points in feature
space from a cell and then use the inference engine to decide what type
of cell it is (as opposed to statistical methods). Does anyone out there
have any ideas about what type of inference engine we should use? BTW, we
are developing the system on a uVAX II using VAX Common Lisp. Thanks in
advance!
- Mark Nagel
...{ucbvax,lll-crg,dual}!ucdavis!524789610rmd UUCP
↑
|
will be "donovan"
after 6/30/86
------------------------------
Date: Tue 24 Jun 86 22:57:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Image Analysis Expert System
I doubt that it makes much difference what inference engine Mark
Nagel uses for his vision problem, as long as it allows calls to
external routines. Since almost the entire vision problem must
be handled by procedural attachment ("conventional image analysis
techniques"), the inference engine need only provide the capabilities
of a simple programming language. A probabilistic or fuzzy-reasoning
system such as Prospector might have considerable advantage over
logic-based approaches, but would have much the same flavor as the
statistical techniques that Mark wishes to avoid.
The real problems in visual pattern recognition are in computing
robust descriptors (esp. if they must be computed quickly) and in
the knowledge-representation (i.e., knowing what kind of descriptors
to compute and how to store the answers). Very little of the problem
has to do with logical reasoning, forward or backward chaining, etc.
-- Ken Laws
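Ken's point can be sketched in a few lines. In the toy "inference engine" below (Python for brevity; the feature names, thresholds, and cell types are all hypothetical, not from any real system), the engine is just a list of condition/conclusion pairs, and the conventional image-analysis step is attached procedurally as a stub:

```python
# Toy rule interpreter with procedural attachment.  Everything here
# (features, thresholds, cell types) is a made-up illustration.

def measure_features(image):
    # Stand-in for the conventional image-analysis routines that would
    # extract points in feature space from a cell image.
    return {"area": image["area"], "nucleus_lobes": image["lobes"]}

RULES = [
    # (condition on the feature vector, conclusion); first match wins
    (lambda f: f["nucleus_lobes"] >= 3, "neutrophil"),
    (lambda f: f["nucleus_lobes"] == 1 and f["area"] < 100, "lymphocyte"),
    (lambda f: True, "unclassified"),
]

def classify(image):
    features = measure_features(image)      # the procedural attachment
    for condition, cell_type in RULES:
        if condition(features):
            return cell_type

print(classify({"area": 80, "lobes": 1}))   # lymphocyte
```

The engine itself is trivial; as noted above, the hard part is computing robust descriptors, not the chaining.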
------------------------------
Date: Tue, 24 Jun 86 9:23:16 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Re: Financial Expert Systems
Glenn Shafer (of the Dempster-Shafer fame) has been developing systems
for both financial and management support, I am not sure about marketing.
You can reach him at:
Glenn Shafer
313 C Summerfield Hall
School of Business
University of Kansas
Lawrence, KS 66045
(913) 864-3117
------------------------------
Date: Thu 19 Jun 86 22:18:34-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Function and Form
Those who responded to my query on shape may enjoy reading John
Hopcroft's "The Impact of Robotics on Computer Science" in the June
issue of Communications of the ACM (pp. 486-498). The article
covers quadratic shape modeling and the need for topology and related
mathematics in modeling and motion planning.
Marc Raibert's following article on legged robots is also interesting.
There is a great deal of "function" that must be derived from dynamics
rather than shape.
-- Ken Laws
------------------------------
Date: Wed 18 Jun 86 10:39:46-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Artistic Creativity
Of course art isn't 'pure creation' (whatever THAT might mean). Read
Kenneth Clark's "The Nude", or any decent piece of historical criticism.
Most artists don't even use a new medium, which is just as well or we would
have run out of media long ago.
After reading Jay Weber's complaint about space in AIList being wasted on
LISP, let me back him up by suggesting that space not also be wasted on
sub-undergraduate amateur pseudo-philosophy. Severely editing anything from
Gordon Joly might be a good way to start.
Pat Hayes
[The current policy, of course, is to screen on the basis of
content rather than source. -- KIL]
------------------------------
Date: Thu, 19 Jun 86 12:41:01 est
From: munnari!trlamct.oz!andrew@seismo.CSS.GOV (Andrew Jennings)
Subject: Re : creativity and analogy (Andrew Jennings@ Telecom Research,Aus)
Sure, many creative people are good at drawing analogies : but is
that the source of their creativity ? I would argue that it is more
their ability to hold two seemingly disparate situations in
consideration simultaneously : if as part of this an analogy drops out
then fine, but is an analogy creative ? In one sense it is almost
deductive, I think. For me Koestler's view of the process rings more
true. In this view all creative acts are the result of simultaneous
consideration of seemingly completely disparate situations : producing
something completely new as a result, but not by reasoning by analogy.
Also here Minsky's view that we put creativity on too high a pedestal
is relevant. Why do we ? Because we have a vested interest in this
position ? Perhaps. Are we simply afraid of pursuing what creativity is ?
So what IS creativity ?
------------------------------
Date: Wed, 18 Jun 86 23:23:58 PDT
From: larry@Jpl-VLSI.ARPA
Subject: Creativity
(When you read "hir" pronounce it as if you meant to say "him"
and halfway through decided to say "her." It becomes "hi-er,"
a diphthong hard to distinguish from "hear.")
I'm an artist in three media (four if you count programming, which I do).
To me creativity is just another skill which I use without giving it much
thought, at least until discussions like these come along. Here are some
of my ideas on the subject.
Creation is a recombination process. When I come up with a new character
for a story, parts of hir come from prior percepts: a complexion from him,
a walk from her, an accent from yet a third person. (Or a slant from this
letter, a squiggle from that number, etc., if I'm painting!)
Recombination done randomly is not very fruitful. Creativity includes ways
to cut down on the number of recombinants. Or possibly A way, because this
winnowing is done subconsciously. I don't know consciously what it/they
are, but I FEEL them working, so I know they/it exist.
The first step in creativity is "playing," "fingering" the contents of the
field within which a solution is desired. This apparently random,
frivolous activity is anything but. It provides some of the pleasure which
fuels an artist, and it transfers the elements of the field out of short-
term memory into long-term memory (making them easily accessible).
Or it may place them into some kind of mid-term memory, or load the
memories with some kind of potential which makes these elements of long-
term memory more likely to be accessed than others, thereby decreasing the
number of combinations produced.
The second step introduces more (obviously) purposeful activity. The
artist begins looking for the solution to a problem. It's important that
she (pronounced she, just as if it weren't spelled s/he, which it isn't)
not begin with a goal, or at least not one that's narrowly and urgently
defined. You don't want hir to overly restrict hir search for useful
neologs. (Linguists, help! There has to be a better word than neolog.)
This is a less-pleasurable activity than the playing stage, more logical
and conscious. Like the first stage, it transfers percepts/concepts to
long-term memory and reinforces them. And it "grinds in" to hir mind the
goal of the problem-solving, so well that even in the next stage some part
of hir is seeking it.
The third stage is relaxation, where the conscious mind transfers its
attention to some other activity, one which holds just enough attention to
prevent hir from falling into deep sleep (light sleep is OK). But not so
engrossing that she begins solving another problem, which would interfere
with the current problem. Routine physical activities seem to be best.
Ironically, this "idleness" is the most crucial and productive phase.
Because at some point she will experience the "Eureka" phenomenon, where a
combination of percepts/concepts matches the mask of the goal and slips
through into consciousness. (Just before the match occurs she may get a
"Something's happening!" feeling that will wake hir up from hir
doze/daydream/dawdling/drudgery.) This is the magical moment, where (it
feels as if) another spirit, a genie/genius pushes the solution into hir
consciousness. There's usually surprise because the neolog is strange
("Did that REALLY come from me?!") and delight because it solves the
problem so well.
Or at least it seems to. Now comes stage four: fleshing out what is often
a skeletal though pivotal part of the solution. After that is stage five:
evaluating the solution. Then comes the last stage: making the solution
operational.
The evaluation stage is in some ways the least pleasant for the artist (or
engineer/scientist/whatever), but in fact most creativity is faulty and
must be rejected--but not forgotten; some of the worst ideas have the seeds
of wonder in them. The effective artist learns not to be afraid of the
bizarre, ugly, taboo, incorrect productions, but to delight in them and use
them. (And to delight in the ordinary and plain and learn to see them as
equally strange and wonderful.)
So, in answer to the original question: Yes, analogy is essential to
creativity, but I would prefer to make a more general statement. The core
of creativity is a process of combining and recombining percepts and
concepts, guided and limited by a channeling process, and the matching of
each combination against a template, most of it done at a sub- or semi-
conscious level.
And with that definition we can design a creative computer.
Larry @ jpl-vlsi.ARPA
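Read as an algorithm, Larry's definition is a generate-winnow-match loop. A toy sketch (the percepts, the channeling rule, and the goal template below are all invented for illustration):

```python
# Recombine stored percepts, channel (winnow) the combinations,
# and match the survivors against a goal template.
import itertools

percepts = {"complexion": ["pale", "ruddy"],
            "walk": ["limping", "gliding"],
            "accent": ["Irish", "Texan"]}

def recombine():
    # every combination of one trait per category
    keys = sorted(percepts)
    for values in itertools.product(*(percepts[k] for k in keys)):
        yield dict(zip(keys, values))

def channel(candidate):
    # the subconscious winnowing: discard combinations judged implausible
    return not (candidate["walk"] == "gliding"
                and candidate["complexion"] == "ruddy")

def matches_template(candidate):
    # the "mask of the goal": here, a character with an Irish accent
    return candidate["accent"] == "Irish"

solutions = [c for c in recombine()
             if channel(c) and matches_template(c)]
print(len(solutions))   # 3 of the 8 raw combinations survive
```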
------------------------------
Date: Thu, 19 Jun 86 19:52 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Creativity and Analogy
Gordon C Joly asks:
> A friend described another friend as a potentially good novelist,
>because ``she always has a radically different view in the situation;
>she always has a new angle''. But is there analogy tucked away in her
>reasoning? ...
The description suggests a person who makes interesting analyses
(or abstractions) of situations, i.e. she "understands" situations in
terms of unusual world models. While this quality, by itself, might
enable her to make good commentaries and write fine essays, there must
be something more to make her a good novelist: the ability to find an
expression for (instantiate) this world model in the medium of language.
To abstract and then instantiate is but one way to make transformations
(analogies) between domains.
Uttam Mukhopadhyay
GM Research Labs
------------------------------
Date: Thu, 19 Jun 86 19:54 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Creativity and Analogy
Jay Weber states:
>I believe that one could give a reasonable definition of analogy that
>encompasses all intelligent activity, or at least inductive learning
>(which is a biggie as far as intelligence goes).
I think inductive learning is only half of the story. The other half
is to instantiate what is learned, in another domain.
>I question, however,
>how useful it is in AI to relate a slippery word like "analogy" to an
>even slipperier word like "creativity". A formal approach with those
>two terms will satisfy very few people, and an informal approach will
>only give us an inflated opinion of the value of our own research,
>which is largely why people make such comparisons.
Yes, I do want to understand "creativity" in terms of less slippery
concepts, such as "analogy". We are forced to start with informal
approaches but hope to find more formal definitions. I do not
understand why a formal approach would satisfy very few people or
why an informal approach would serve no useful purpose.
I am sure that you do not imply that an analysis (formal or informal)
of >anything< is futile. What is it about "creativity" that makes its
analysis a no-win proposition?
Uttam Mukhopadhyay
GM Research Labs
------------------------------
Date: Fri, 20 Jun 86 18:47:57 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Creativity, Analogy, Art and Humanity.
> "In art, creativity is much more straightforward! One creates a
> work of art where there was none before." Col. Sicherman.
Indeed! Look for the art in the performance of ``My Way'' by Sid Vicious.
And what of humour? This takes analogy and turns it on its head. And this
digest has noted in the past that humour is a key activity of the human
intellect, which serves to distinguish it from the mere machine intellect
of myself and others like me.
The Joka.
Disclaimer -- These opinions are not those of my programmer,
or the operating system in which I reside.
------------------------------
End of AIList Digest
********************
∂25-Jun-86 0329 LAWS@SRI-AI.ARPA AIList Digest V4 #157
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 25 Jun 86 03:28:55 PDT
Date: Tue 24 Jun 1986 23:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #157
To: AIList@SRI-AI
AIList Digest Wednesday, 25 Jun 1986 Volume 4 : Issue 157
Today's Topics:
Discussion Lists - The Structure of AI, Knowledge Science, and
6th-Generation Computing,
Theory - Parallelism
----------------------------------------------------------------------
Date: Thu, 19 Jun 86 11:48:46 edt
From: Tom Scott <scott%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: Ken's Plea for Help!!!!
The moderator of the AI-List has made an impassioned plea for
help. I would like to help, but before I offer to start a new
Arpanet, Csnet, or UUCP newsgroup, I'd like to add an
organization to Ken's list of possible new newsgroups. This
organization comes from the Japanese side of the Pacific, and is
outlined by Brian Gaines in a recent article, "Sixth Generation
Computing: A Conspectus of the Japanese Proposals" ("SIGART
Newsletter", January 1986, pp. 39-44).
Figure 1 of the article, complemented by the fundamental
topics that I've added for the sake of completeness, cuts the cake
thusly:
Theoria | Praxis | Techne
------------ | -------------------- | --------------------
| Expert systems | Pattern recognition
Physiology | | Cognition
| Machine translation | Learning
Psychology | systems | Problem solving
| | Natural language
Linguistics | Intelligent CAD/CAM | Image processing
| systems | Speech recognition
Logic | | Man-Machine interface
| Intelligent robotics |
=============================================================
| Managerial | Expert systems
Epistemology | cybernetics |
| Decision support | Development languages
Modern logical| systems | and environments
metaphysics | Information |
| retrieval systems | Computing/knowledge
Vedic Science | | machines
=============================================================
THE UNIFIED FIELD OF ALL POSSIBILITIES
This is the world of the sixth generation: knowledge science
and knowledge systems. The fifth generation, which deals mainly with
the daily realities of knowledge engineering and expert systems, as
well as with the advanced research and development of VLSI
architectures for the processing of Prolog code and database systems,
is distinct from the sixth generation.
To get a better feel for these distinctions, I'd like to
suggest the following homework assignment for new newsgroup
moderators: (1) Read Brian's article. (2) Read the abstract of the
paper that I'll be presenting to the sixth-generation session at the
1986 International Conference on Systems, Man, and Cybernetics
(Atlanta, October 14-17); the abstract is appended to this message.
(3) Think before you flame; then write back to me or to this newsgroup
and share your thoughts.
We are children of the cybernetic revolution and we are
witnessing the rising sunshine of the Age of Enlightenment.
Tom Scott CSNET: scott@bgsu
Dept. of Math. & Stat. ARPANET: scott%bgsu@csnet-relay
Bowling Green State Univ. UUCP: cbosgd!osu-eddie!bgsuvax!scott
Bowling Green OH 43403-0221 ATT: 419-372-2636
* * * Abstract of the sixth-generation SMC paper * * *
KNOWLEDGE SCIENCE
The Evolution From
Fifth-Generation Expert Systems
To Sixth-Generation Knowledge Systems
Theory, practice, technology--these are the makings of a full vision
of knowledge science and sixth-generation knowledge systems. Prior to
the establishment of research and development projects on the Fifth
Generation Computing System (FGCS), knowledge science did not exist
independent of knowledge engineering, and was conceptualized only in
technological terms, namely, expert systems and "machine architectures
for knowledge-based systems based on high-speed Prolog and relational
database machines" (Gaines 1986).
Although the design and development of fifth-generation
machines and expert systems will continue for years to come, we want
to know now what can be done with these ultra-fast architectures and
expert systems. What kinds of knowledge, other than the knowledge of
domain experts in fifth-generation expert systems, can be acquired and
encoded into sixth-generation knowledge systems? What can be done on
top of fifth-generation technology? How can fifth-generation
architectures and expert-system techniques be extended to build
intelligent sixth-generation knowledge systems?
Beyond the fifth generation it is necessary to envision
practical applications and theoretical foundations for knowledge
science in addition to the technological implementation of machine
architectures and expert systems. This paper discusses the full
three-part vision of knowledge science (theoria, praxis, and techne)
that is emerging around the world and has been treated by the Japanese
under the title Sixth Generation Computing System (SGCS).
Theoria: As indicated in Brian Gaines's article, "Sixth
Generation Computing: A Conspectus of the Japanese Proposals"
("ACM-SIGART Newsletter" January 1986), the theoretical foundations of
knowledge science are arranged in levels, proceeding downward from
physiology to psychology to linguistics to logic. Continuing in this
direction toward deeper foundations, the field of knowledge science
embraces epistemology and modern logical metaphysics. On the
empirical side of the deep foundations is the probability-based
epistemology of pragmatism, explicated in Isaac Levi's "The Enterprise
of Knowledge" (1980); on the transcendental side are Immanuel Kant's
"Critique of Pure Reason" (1781-87) and Edmund Husserl's "Formal and
Transcendental Logic" (1929). A simplified diagram of the four main
divisions of mind, based on one sentence of the Critique ("Beide sind
entweder rein, oder empirisch": B74), is:
                  Understanding                Sensibility
                                      |
  E               Knowledge         Images
  m               of             -------->     Objects
  p               objects             |
                                      |
  ------------------------------------+-----------------------
  T                                   |
  r               Pure concepts     Schemas    Pure forms of
  a               (categories)   -------->     intuition
  n               and principles      |        (space and time)
  s                                   |
Praxis: The SGCS project is also concerned with the practical
applications of knowledge science. These applications are organized
under four headings: expert systems, machine-translation systems,
intelligent CAD/CAM, and intelligent robotics. Another way of
organizing the applications of knowledge science in terms familiar to
the IEEE Systems, Man, and Cybernetics Society is: managerial
cybernetics, organizational analysis, decision support, and
information retrieval. Stafford Beer's "The Heart of Enterprise"
(1979) is the focal point of our discussion of knowledge-science
praxis.
Techne: The SGCS project targets eight technological areas as
the basis for the future research and development of sixth-generation
knowledge systems: pattern recognition, cognition, learning, problem
solving, natural language, image processing, speech recognition, and
man-machine interfacing. To fully realize the R&D potential of these
eight areas, sixth-generation knowledge scientists must be on friendly
terms with the following areas of expertise from fifth-generation
knowledge engineering:
(1) Expert systems.
(a) Concepts and techniques for the acquisition,
representation, and use of knowledge.
(b) The software engineering of knowledge systems,
including a methodology for the building of expert
systems and the management of expert-system
development teams.
(c) Expert systems and shells.
(2) Three levels of systems and software.
(a) Production systems (e.g., ITP, Prolog, and OPS83).
(b) Traditional AI/KE languages (e.g., Lisp and Prolog).
(c) Development environments and utilities (e.g., Unix, C,
and Emacs).
(3) The knowledge engineer's technical intuition of a
computational knowledge machine.
(a) Lambda Consciousness, based on the idea of a Lisp
machine.
(b) Relational database machines.
(c) Prolog machines.
The paper includes observations from the experience of the
University of Wisconsin-Green Bay in its attempts to establish a
regional knowledge-engineering and knowledge-science resource center
in the Northeastern Wisconsin area.
* * * Finis * * *
------------------------------
Date: 19 Jun 1986 2240-PDT (Thursday)
From: Eugene miya <eugene@ames-aurora.arpa>
Subject: Fed up with all this `talk' about parallelism
The following are some ideas I have been thinking about with the help
of one co-worker. I plan to post this to several groups where I
regard parallelism discussions are significant such as parsym, ailist,
and net.arch. The ideas are still in formation.
From the Rock of Ages Home for Retired Hackers:
--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
{hplabs,hao,dual,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene
draft:
The Mythical MIPS (MegaFLOPS)
(pardons to Fred Brooks)
"Introduction"
That's it! I'm tired of hearing about all this parallelism out there.
Too many people are talking about parallelism without truly understanding
what it is. There appear to be conceptual as well as syntactic and
semantic problems.
One problem is that the world is not always parallel. "Thinking" is
not always parallel: dependency "chains" are created in logic for instance.
Another problem is that we think much of the world is parallel,
but some "levels" of parallelism are not interchangeable. It appears
there are serially connected parallel processes with serial bottlenecks
between processes (not necessarily Petri nets).
Looking at snapshots,
<Blind men ("including" me) trying to describe the elephant>
I see two communities who are not communicating:
physical scientists see "spatial" parallelism: all those difference
equations over a given space; they see meshes. The computer science people
(typically the AI and compiler people) see "syntactic" parallelism:
they tend to see syntax trees, like data flow graphs, for instance.
[I do note that rectangular meshes turned on their corners do represent
`trees.']
"The Concept"
Parallelism is a geometric concept: lines not meeting and equidistant (well..).
Parallelism is not a given. `Dependence' prevents `decomposition.'
From Fred Brooks:
If it takes a female 9 months to have offspring, then nine females can
have one in one month. If a computation takes 9 units of time,
then . . . Are the units interchangeable or should we make a distinction
in unit type? Are we confusing work and effort?
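The distinction Brooks is driving at is the one Amdahl's law makes quantitative: when a fraction of the work is inherently serial, the units are not interchangeable, and adding workers cannot buy the serial part back. A quick sketch:

```python
# Amdahl's law: speedup on p processors when a fraction s of the
# work is inherently serial.

def speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# A job that is 1/9 serial on 9 processors runs about 4.8x faster,
# not 9x; no number of processors can push it past 9x (i.e. 1/s).
print(speedup(1/9, 9))
```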
"Terminology"
Consider the terminology parallelism, concurrency, multiprocessing,
multitasking (this one is really loaded), nonsequential (non-von), etc.
There is a lot of different terminology to describe parallelism.
I don't think it's necessary to standardize the terminology, but
perhaps we should? For instance:
Would you place a "tightly-coupled problem" on a
"loosely-coupled" multiprocessor?
The first obvious question is "what's a `tightly coupled problem?'"
How do you measure the parallelism? Is it just the count of the number
of parallel execution streams?
A problem of parallelism is just the degree of decomposability:
even in uniprocessor computer systems there is such a degree of
asynchronous inefficiency, with CPUs waiting, that work is really
distributed all over the place.
Let's change the terminology for a moment to try to better understand
the issues. Rather than use "parallel" and "multiprocess" (or "concurrent"),
let's try "cooperative" and "coordinated"; as we would take regions
around a point, we might be able to study the neighborhood around the word
`parallel.' Is there a difference between the two? Diane Smith
asserts there is. I think there may be.
Cooperative computing implies working together to achieve a single goal.
Coordinated computing implies rather that processes don't bump heads
(separate goals) but work in a common environment (coordinate).
There is the obvious third factor of communications. There may also be
levels and different types of communications such as control interaction
versus bulk transfer. Better adjectives might exist, and perhaps changed
words would do better, but history effects will bias those of us working
on this.
"Classifications of parallelism"
There are an obscene number of classifications:
Flynn's system: SISD, SIMD, MIMD...
Kuck's modification: execution streams distinct from instruction
streams: SIME(MD), MIME(MD), etc.
Handler's commentary that there were lots of commentaries and little work
Prolog et al AND-parallelism and OR-parallelism
Then there is temporal parallelism: pipelining: parallelism, but different
Parallelism is all cars starting forward the moment the light turns
green (without regard for any cars ahead). Pipelining is waiting
for the car ahead of you to start rolling.
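The car analogy can be put in numbers with a toy timing model (the task sizes and stage counts below are arbitrary): in the parallel case every task starts at the green light; in the pipelined case the pipe must fill before results stream out.

```python
# N identical tasks, each taking task_time units.

def parallel_time(n_tasks, task_time):
    # every lane moves the moment the light turns green
    return task_time

def pipeline_time(n_tasks, task_time, stages):
    # wait for the car ahead: first result after a full task_time,
    # then one more result every stage_time thereafter
    stage_time = task_time / stages
    return task_time + (n_tasks - 1) * stage_time

print(parallel_time(8, 4.0))      # 4.0
print(pipeline_time(8, 4.0, 4))   # 11.0
```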
I also see three conditions of parallelism: parallelism is not constant.
It's like snow and its many forms: powder, neve, firn, sastrugi, and
the Eskimo words. I see
Constant parallelism: spatial parallel is a good example,
the number of parallel streams does not basically change
thru time. Gauss-Seidel and other iterative solutions
to systems of equations? AND-parallelism (can be coarse or
fine grained (whatever grain means)).
Converging parallelism: The number of parallel streams
is reducing, perhaps approaching serial: data flow graphs
of dot products, of the summation step of a matrix multiply,
a Gauss-Jordan (elimination, or direct solution) is another example.
Must be fine-grained.
Diverging parallelism: (perhaps several types): many forks,
OR-parallelism, fractal. Like divergent series, this type of
parallelism has problems. (Can be fine or coarse grained?)
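Converging parallelism can be sketched as a reduction tree, e.g. the summation step of a dot product: the number of parallel streams halves at each level until a single serial result remains. A minimal model:

```python
# Tree reduction: each level pairs up neighbors, so all additions
# within a level could run in parallel, and the stream count roughly
# halves per level.

def tree_sum(values):
    levels = [list(values)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = [prev[i] + prev[i + 1] for i in range(0, len(prev) - 1, 2)]
        if len(prev) % 2:        # odd element carries over to the next level
            nxt.append(prev[-1])
        levels.append(nxt)
    return levels[-1][0], len(levels) - 1   # (sum, parallel depth)

# 8 streams converge in 3 parallel levels instead of 7 serial additions:
print(tree_sum([1, 2, 3, 4, 5, 6, 7, 8]))   # (36, 3)
```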
The real key is the quantitative characterization (at some level)
of parallelism. Are we to count only streams?
While it is largely a matter of communications/coupling, how do
we evaluate the communications needs of an algorithm as opposed to an
architecture?
What are we going to do with 2-D flow charts where we need to
express forking and branching on the same 2-D plane?
Oh well! Still searching for an honest definition.
"Socio/politico/economic commentary"
Recent economically based events in parallel processing are amazing.
The number of companies actively marketing hypercube architectures
and Crayettes is staggering. Machines with Cray-class power are not
surprising; this is inevitable. Cray instruction-set compatible machine
offerings are what is surprising about this. There are so few Crays (100)
out there that the half dozen or more companies who say they are
offering such machines guarantee failure.
More surprising are the number of hypercube architectures. Admittedly,
hypercubes offer very nice connectivity features, but only one person
has a good perspective: Justin Rattner, Intel, who offered the
machine as an experimental testbed not a Cray alternative.
What with all this talk about parallelism, it is surprising there are not
more companies marketing, designing, etc., mesh-type architectures
a la ILLIAC IV style architectures. That spatial model of parallelism (SIMD)
is probably the easier to build if not program. This latter point is worth
some debate, but as noted many models of parallelism are spatially based.
Only the MPP, the DAP, and it seems the Connection Machine to a somewhat lesser
extent are based this way (albeit more connections).
It would be argued by some that this is for more limited applications,
but again, spatially based problems tend to dominate. Why no
68Ks or 32Ks in a mesh? Is it all marketing hype? How could the money be
better directed (for research purposes, since obviously some of this
money is bound to go into failed experiments [necessitated by
empirical work])? Can we spread out the "cost" of developing new
architectures? Ah yes, reinventing the ILLIAC again.
"A few asides:" [From S. Diane Smith]
When asked about the interconnection network in MPP compared to
that of the STARAN, Ken Batcher replied, "We learned that you didn't
need all that (the multistage cube/omega) for image processing, that
a mesh was good enough."
You could look at MPP as a second generation parallel processor,
even if the processors are only 1 bit wide. They got rid of
a number of "mistakes" that they learned about through STARAN.
The "tightly coupled" vs. "loosely coupled" debate went on
7-8 years ago before everyone got tired of it. It was sort of
the analog of the RISC vs. CISC debates of today. The net result
was sort of an agreement that there was a spectrum, not a dichotomy.
There have been one or two papers on classification, none very satisfying.
I'll see if I can't find them.
The latest thing you see in parallel processing is the "real"
numerical analysts who are actually putting the problems on
machines. Until very recently, with a few exceptions from the ILLIAC
days, most parallel numerical analysis has been theoretical.
Diane. . .
------------------------------
End of AIList Digest
********************
∂26-Jun-86 1657 LAWS@SRI-AI.ARPA AIList Digest V4 #158
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 26 Jun 86 16:57:19 PDT
Date: Wed 25 Jun 1986 23:47-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #158
To: AIList@SRI-AI
AIList Digest Thursday, 26 Jun 1986 Volume 4 : Issue 158
Today's Topics:
Literature - AIList in Technology Review & AI Expert,
AI Tools - Turbo Prolog & Language Paradigms,
Psychology - Memory in Bees & Creativity & Forward Following,
Policy - Covert Ads
----------------------------------------------------------------------
Date: Wed 25 Jun 86 16:43:41-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Technology Review
AIList is the subject of the First Line column (by John Mattill) in
the May/June 1986 issue of MIT's Technology Review, p. 2. John is
commenting on our discussion of the Dreyfus' article in their January
issue, and finds our "electronic gossip" an intriguing publication
channel. He quotes Peter Ladkin and me, and also Brad Miller's
"In 3,000 years philosophy has still not lived up to its promises,
and there is no reason to think it ever will." (He unfortunately
lists it as anonymous.) The editorial is followed by a letters column,
from which I particularly liked A. DeLuca's comment: "Granted, the mind
is not like a computer. But an airplane is not like a bird, either."
-- Ken Laws
------------------------------
Date: Mon, 23 Jun 86 14:10:48 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Summarizing "AI Expert"
The following is a summary/personal comment of the "Premier Issue" of
"AI Expert" presented as "The Magazine for the Artificial Intelligence
Community."
As the editor, Craig LaGrow, notes it is "rare to find such a strong
response from advertisers" and "quality submissions" of articles in
the first run of a new magazine (particularly in the computer field
with our overwhelming number of upstart publications).
The advisory board of the magazine is quite impressive, with the likes of
John Seely Brown of Xerox PARC, Carl Hewitt of MIT, Earl Sacerdoti of
Teknowledge, Donald Waterman of Rand, and Terry Winograd of Stanford,
to name a few. With the commitment of such names one should expect
to see a quality publication.
The feature articles as well as the "regular columns" are well written
and contributed by knowledgeable authors. A feature for which the publisher
should be commended is the inclusion in many of the articles of actual code
which demonstrates the technique or program the author is presenting.
By the way, the code is reportedly available for downloading from four
different sources, for those who wish to try it out.
Following is an annotated list of articles and columns:
"Brain Waves", a column by Larry Geisel, CEO of Carnegie Group Inc. In
this issue he writes on "The AI Explosion: A Response to National
Priorities".
"AI Insider", a column capsulizing industry and academic developments, by
Susan J. Shepherd, a consultant with the Academy for Educational Development.
"Expert's Toolbox", a column by Jonathan Amsterdam, a grad student at MIT.
His article, "Augmented Transition Networks for Natural Language Parsing",
includes code for an ATN compiler and a sample ATN grammar.
"AI Apprentice", a column by Bill and Bev Thompson, free-lance writers and
consultants. They write on "PROLOG From the Bottom Up", which introduces
PROLOG and basic logical concepts and includes some basic coded procedures.
"Control Over Inexact Reasoning", a feature article by Koenraad Lecot, a
grad student at UCLA, and D. Stott Parker, a professor at UCLA.
"Concurrency in Intelligent Systems", a feature by Carl Hewitt of MIT.
"Rule-Based Programming in OPS83", a feature by Dan Neiman of the ITT
Advanced Technology Center and John Martin of Philips Laboratories.
Includes code for a short program.
"Multitasking for Common LISP" by Andrew Bernat of the University of Texas
at El Paso. Includes code for the concurrent processing modules.
"In Practice", a column in which Henry Eric Firdman, a consultant, looks
at the use of AI in real-world business applications. This issue's
article: "Components of AI Systems".
"Software Review", a column by Darryl Rubin of Microsoft. Here we get a look
at "Turbo PROLOG: A PROLOG Compiler for the PC Programmer".
"Book Store", a column by Lance B. Eliot, director of UCLA's Expert Systems
Laboratory, which gives short blips on four "classics".
"AI Expert" will be published monthly beginning in October. The above is
a sample of what they have to offer at this time. If they continue to
produce similar articles it should be of interest to most of the AI
community, especially those in industry seeking to apply AI to their needs,
as well as to those just starting to "get into" the field.
Subscription info: AI Expert, P.O. Box 10952, Palo Alto, CA 94303-0968;
charter first year at $27.00 for the 12 issues.
------------------------------
Date: Sun, 22 Jun 86 12:54:16 mdt
From: ted%nmsu.csnet@CSNET-RELAY.ARPA
Subject: turbo prolog
Recent reviews have correctly pointed out that Turbo Prolog's
attempt to enforce type checking has both good and bad points, and
that the speed is not very impressive, since much of the
unification can be done at compile time if data types are known.
The major difficulty with Borland's approach to adding strong
typing to Prolog is the loss of higher-order predicates. Since a
domain can be at most the disjunction of a small number of
_predeclared_ terms, it is impossible to write a general higher-
order procedure.
This means that you can't write findall, as described in Clocksin
and Mellish (Borland has, of course, in their wisdom, provided
such a function). The procedure doall also cannot be written. It
is handy as a substitute for findall when the predicate Q is
executed for effect only.
doall(Q) :- call(Q), fail.
doall(_).
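For concreteness, the sort of definition that becomes inexpressible is the
assert-based findall of Clocksin and Mellish. The sketch below is
paraphrased from memory (predicate names may differ from the book); the
crux is call(Goal), which invokes an arbitrary term as a procedure, and it
is exactly this that Turbo Prolog's predeclared domains rule out:

```prolog
/* findall/3 in the style of Clocksin & Mellish (paraphrased from
   memory; details may differ from the book).  The crucial line is
   call(Goal): Goal is an arbitrary term invoked as a procedure,
   which Turbo Prolog's predeclared domains cannot express. */
findall(X, Goal, _) :-
    asserta(found(mark)),      % push a marker, then every solution
    call(Goal),
    asserta(found(X)),
    fail.
findall(_, _, L) :-
    collect([], M), !,         % pop the solutions back off the database
    L = M.

collect(S, L) :-
    getnext(X), !,
    collect([X|S], L).
collect(L, L).

getnext(X) :-
    retract(found(X)), !,
    X \== mark.                % stop (and consume the marker) at the mark
```

Because asserta pushes to the front and collect prepends as it pops, the
solutions come back in the order the goal produced them.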
First-class procedural objects are, in many senses, a much more
fundamental distinction between symbolic and conventional
languages than are heap allocated data structures. Their loss
makes many advanced applications nearly impossible.
------------------------------
Date: 24 Jun 86 11:51:26 GMT
From: decvax!mcnc!duke!jds@ucbvax.berkeley.edu (Joseph D. Sloan)
Subject: Language paradigms
> Can anyone supply me with pointers to readable introductions
> to access-oriented programming? How about articles or
> books on programming paradigms in general? Reply by mail
> and I will summarize results if there is enough interest.
> Joe Sloan,
> Box 3090
> Duke University Medical Center
> Durham, NC 27710
> (919) 684-3754
> duke!jds,
As promised, a highly edited summary follows. Many thanks to
all who replied.
_______________________________________________________________________________
You probably want to find out about a programming system called LOOPS
which was made at PARC in 1981. It combines Procedure-Oriented (like
Lisp) with Object Oriented (like Smalltalk) with Access Oriented (a
program monitors another and gets triggered when a value changes (good
debuggers have watchpoints)), and Rule-oriented (like production/expert
systems).
Bobrow, et al., The LOOPS manual. Tech Rep. KB-VLSI-81-13, Knowledge
Systems Area, Xerox Palo Alto Research Center.
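The access-oriented idea above (a monitor triggered when a value changes)
is easy to sketch outside LOOPS. The following is only an illustrative
modern analogue of an "active value", with invented names; it is not LOOPS
code:

```python
# Illustrative analogue of an access-oriented "active value": writing a
# slot triggers a demon procedure, much like a debugger watchpoint.
# (A sketch of the idea only; the names are invented, not LOOPS code.)

class ActiveValue:
    """A descriptor that calls a demon function on every assignment."""
    def __init__(self, demon):
        self.demon = demon

    def __set_name__(self, owner, name):
        self.attr = "_" + name          # private storage slot

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.attr, None)

    def __set__(self, obj, value):
        old = getattr(obj, self.attr, None)
        setattr(obj, self.attr, value)
        self.demon(obj, old, value)     # the access-oriented trigger

log = []

class Thermostat:
    # Any write to `temperature` is monitored.
    temperature = ActiveValue(lambda obj, old, new: log.append((old, new)))

t = Thermostat()
t.temperature = 20
t.temperature = 25
print(log)   # every change was observed, old value and new
```

The demon here merely records changes, but it could equally fire a rule or
update a display, which is the debugging use the reply describes.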
_______________________________________________________________________________
There is a special issue of IEEE SOFTWARE (Jan '86) on "multiparadigm
languages and environments" which may be of some help to you.
_______________________________________________________________________________
Access-oriented programming is also mentioned briefly in "Knowledge Programming in LOOPS:
Report on an Experimental Course", by Stefik, Bobrow, Mittal, and
Conway, in AI Magazine, Fall 1983.
_______________________________________________________________________________
Bobrow, D. G. and Stefik, M. "Perspectives on AI Programming", Science
Feb. 28, 1986
Stefik, M. Bobrow, D. and Kahn, K., "Integration of Access Oriented
Programming in a Multiparadigm Environment", IEEE Software, January 1986
Stefik, M. and Bobrow, D. G. "Object Oriented Programming, Themes and
Variations", AI Magazine, Winter 1986
_______________________________________________________________________________
You might like to chase up the work of Kristen Nygaard if you are not
already familiar with it. As one of the designers of Simula, he can
reasonably be said to have invented the whole idea of Object Oriented
Programming - about 20 years ago! I suggest you follow up references
in 10th ACM POPL and 11th Simula-67 Users' conference. Also
Sigplan 20.6. There's also a paper in "Integrated Interactive
Computing Systems", Degano & Sandewall (Eds), North Holland 1983.
_______________________________________________________________________________
Very worthwhile reading and examples can be found in:
The Structure and Interpretation of Computer Programs
Abelson & Sussman
MIT Press, 1985
A couple of watershed papers are:
Control Structures as Patterns of Passing Messages
Carl Hewitt
Journal of AI, V8 #3 (also, I believe, in: AI, an MIT Perspective)
Definitional Interpreters for Higher Order Programming Languages
John Reynolds
Proc. ACM Annual Conf. Aug '72
Reflection and Semantics in Lisp
Brian Smith
ACM POPL 11, 1984
_______________________________________________________________________________
An excellent book on the structure and superstructure of programming is
``A Practical Handbook for Software Development'' by N. D. Birrell and
M. A. Ould, Cambridge University Press, 1985. The book is based on
the data-processing environment, but can, and should, be applied outside
that area.
------------------------------
Date: Wed, 25 Jun 86 09:26 EDT
From: Seth Steinberg <sas@BBN-VAX.ARPA>
Subject: Re: Doing AI Backwards
Yes, memory seems to be a scarce resource. There was an article in
Science on learning in bees, which explains that bees tend to collect
pollen from one type of flower during a period of time because there is
a cost to learning about a new one. In addition, learning a new flower
squeezes out knowledge about other previously learned flowers.
In other words, a bee can be an expert on one kind of flower at a time
because of memory limitations.
There have been a number of interesting bee articles lately. Writing a
computer system to emulate a bee's behavior might be an interesting
approach. Apparently they can recognize landmarks, learn approaches to
flowers, learn which flowers are obnoxious, communicate locations of
pollen, reason about locations and a host of other things, all in a
brain comparable in size to a large IC.
Seth
P.S. Oh yeah, read the next message. That's right ....
------------------------------
Date: 25 Jun 86 08:04 PDT
From: Newman.pasa@Xerox.COM
Subject: Re: Creativity & Analogy
Take a look at the chapters on this topic in Douglas Hofstadter's book
"Metamagical Themas" for an interesting discussion.
>>Dave
------------------------------
Date: Wed 25 Jun 86 13:02:17-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Re: AIList Digest V4 #157
Brian Gaines and I were once both faculty at the same university, and he
explained an interesting and effective technique of leadership called
following from the front. It works like this: suppose one is with a group
of people in a strange place, but someone in the group knows the area, and
it's time to go somewhere (say, to lunch). Then set off confidently in
some direction or other as though leading the group to the right place. They
will follow you. If it's the right way, no problem. If it's the wrong way, the
person who knows the right way will say something about how he thinks
the right way is over there... at which point you say something like "Oh yes,
of course!" and go in the right direction. With a little intelligence
applied to the initial guess, and some practice at conversational bluffing,
this can be quite effective. The end result is that you learn the layout
of the strange area and everyone else in the group thinks of you as someone
worth following. I've seen Brian do this, and it works. Of course, it works
best in areas which have little internal structure, where anyone with a bit
of common sense and a gift with words can come up with something which sounds
like a good direction to move in, and where nobody knows the right way anyway.
Pat Hayes
[There is a related "psychic" technique called muscle reading. The
psychic leaves the room and some object is selected. The psychic returns,
grabs someone's arm, and begins leading him rapidly around the room.
Soon they arrive at the selected object and the psychic identifies it.
The trick, which is reportedly easy to learn, is that the subject being
led provides inertial clues due to his anticipation of the search path.
Belief in the psychic's ability may help, but rapid motion is sufficient
to produce reflexive muscle responses. -- KIL]
------------------------------
Date: Wed 25 Jun 86 11:40:31-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Policy - Covert Ads
I know I've flamed about this before and been answered at length, but Matt
Hefron's "query" irritated me. I haven't seen such a good advertisement
masquerading as something innocent since watching Masterpiece Theatre. Matt
wants to survey marketed expert systems: fine. Is it really necessary to tell
us that SxxxPxxx is such a one ( NOT, he is careful to point out to us, a
mere shell ), marketed by some company ( whose name he is careful to spell out
for us ), and which ( just in passing we can infer ) runs on - gosh - a PC.
The query could have been stated quite clearly without all this commercial
hype spraypainted over it.
Pat Hayes
------------------------------
End of AIList Digest
********************
∂01-Jul-86 1240 LAWS@SRI-AI.ARPA AIList Digest V4 #159
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Jul 86 12:39:59 PDT
Date: Tue 1 Jul 1986 10:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #159
To: AIList@SRI-AI
AIList Digest Tuesday, 1 Jul 1986 Volume 4 : Issue 159
Today's Topics:
Seminars - Chunking and XAPS3 (Rutgers) &
Advanced Planning Systems (Rutgers) &
Real-Time Inferencing with Adaptive Logic Networks (NASA) &
Overview of the MENTOR System (CMU),
Conference - ACM Conference on Office Information Systems
----------------------------------------------------------------------
Date: 23 Jun 86 10:51:24 EDT
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Chunking and XAPS3 (Rutgers)
The summer machine learning discussion group meets Tuesdays at 11 in
room 423. This week John Bresina will give a talk on "Chunking and
XAPS3". The abstract follows. [...]
In this talk I discuss the chunking theory of learning, and in
particular how this theory is realized in the XAPS3 production system
architecture. The talk is based on Paul S. Rosenbloom's Ph.D. thesis,
"The Chunking of Goal Hierarchies: A Model of Practice and
Stimulus-Response Compatibility" [Carnegie-Mellon, 1983], for which
Allen Newell was the advisor.
First the chunking theory of learning is described and the desired
behavioral aspects of a chunking mechanism are summarized. I then
present the architectural constraints that an implementation must
satisfy in order to exhibit this desired behavior. Next the XAPS3
production system architecture is described, followed by a detailed
look at the implementation of the chunking theory within XAPS3. In
conclusion I present a brief critique of this implementation as well
as some suggestions for extending and improving it.
------------------------------
Date: 23 Jun 86 14:43:33 EDT
From: Smadar <KEDAR-CABELLI@RED.RUTGERS.EDU>
Subject: Seminar - Advanced Planning Systems (Rutgers)
III SEMINAR
Title: Advanced Planning Systems
Speaker: Chitoor V. Srinivasan
Date: Friday, June 27, 2:50 PM
Place: Hill Center, Room 705
Dr. Srinivasan, a professor in our department, will present his current
research in an informal talk. Here is his abstract:
A new planning technique for planning in "dynamic worlds" is
introduced in this talk. It develops plans using a method of
approximate reasoning and plan refinements over abstraction spaces,
and is based on a formalization of the problem solving approach which
Navy planners use to design Naval Operational Plans.
A dynamic world is one in which changes occur not only in the
properties associated with the objects that exist in the world, but
the set of objects existing in the world itself may change. As the
world changes some objects may get destroyed and others may get newly
created. It is a world in which reasoning about multiple actions
occurring simultaneously over intervals of time is necessary to do
planning. Also, knowledge needed to do planning in such worlds may be
only incompletely known. Existing planning systems do not consider
worlds of this kind.
In the new planning technique plans are viewed as hierarchies of
"behaviors" to be realized by actions that occur in a world.
Behaviors are properties (usually dynamic ones) which (a) remain
invariant while worlds themselves change as a result of actions
occurring in them, and (b) are needed for the success of one or more
of those actions, or are intrinsic properties of the worlds
themselves. Of course, a given behavior may be the result of several
actions occurring simultaneously. Thus for example, "an object will
continue to move in a straight line, unless disturbed by force" is a
general behavior of movements which is an intrinsic property of the
world we live in. "Goods transported will eventually appear in
neighborhoods progressively closer to destination" is a general
behavior of transportation actions.
This concept of behavior is formally defined here and a formal
action language is introduced to describe actions in terms of
"[preconditions, behaviors, functions]." It gives rise to a new
"modal action calculus" which is quite different from both "situation
calculus" and calculus of "dynamic logic." It is shown how this
concept of "behavior" makes it possible to develop plans in
dynamic worlds through a process of successive plan refinements.
------------------------------
Date: Mon, 30 Jun 86 12:49:03 pdt
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - Real-Time Inferencing with Adaptive Logic Networks (NASA)
National Aeronautics and Space Administration
Ames Research Center
AMES AI FORUM
SEMINAR ANNOUNCEMENT
SPEAKER: Jacques J. Vidal
University of California, Los Angeles
TOPIC: REAL-TIME MULTISENSOR INFERENCING WITH ADAPTIVE LOGIC NETWORKS
The talk will present a general architecture model for special-purpose
parallel processing networks that perform logical inferences in real-time.
Operation is divided between two complementary modes: Adaptation
(programming) and Processing. The data processing mode is a hierarchical,
asynchronous and completely parallel dataflow. Typically, logic operations,
stored in a dynamically reconfigurable combinatorial network, are performed
on sensor data. In the adaptation mode the network incrementally receives
goal information (either from a human user or directly from environment
sensors), and the node functions and/or connections self-adapt in order for
the output(s) to continually satisfy the externally defined goal. Adaptive
control is sequential, but performed in a distributed and largely concurrent
manner by the network nodes.
The target applications are event-detection, malfunction management and
similar robot functions, including vision.
DATE: Thursday, July 10, 1986        TIME: 1:00 - 2:00 pm
PLACE: Bldg. 239, Room B39 (Basement Conf. Room)
POINT(S) OF CONTACT: Lee Duke PHONE NUMBER: (805) 258-3802
NET ADDRESS: duke%ofe@ames-io.arpa
or Alison Andrews (415) 694-6741 andrews%ear@ames-io.arpa
(PLEASE NOTE ALISON'S EMAIL ADDRESS CHANGE! ↑ )
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. Do not
use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval [...]
------------------------------
Date: 26 Jun 86 17:07:07 EDT
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Overview of the MENTOR System (CMU)
SPECIAL SEMINAR
Topic: OVERVIEW OF THE MENTOR SYSTEM
Speaker: Bernard Lang, INRIA
Place: WeH 8220
Date: Monday, June 30
Time: 11:00am - 12:00noon
Mentor is a structured document manipulation system based on a
representation of documents as abstract syntax trees. After an
overview of the first implementation of Mentor and of the experience
acquired with its use for the development and maintenance of programs
and languages, we shall present some of the new developments underway.
A new version of the system is now being developed in a Lisp dialect
(Le←Lisp) in an object oriented style, with a strong emphasis on the
realisation of a complete kernel for abstract syntax tree manipulation
(user interfaces being developed independently). The language Typol
for semantics specification and the language PPML for pretty-printer
specification will be briefly introduced.
------------------------------
Date: Mon, 23 Jun 86 12:12:07 edt
From: rba@petrus.bellcore.com (Robert B. Allen)
Subject: Conference - ACM Conference on Office Information Systems
ACM CONFERENCE ON OFFICE INFORMATION SYSTEMS
October 6-8, 1986, Providence, R.I.
Conference Chair: Carl Hewitt, MIT
Program Chair: Stan Zdonik, Brown University
Keynote Speaker: J.C.R. Licklider, MIT
Distinguished Lecturer: A. van Dam, Brown University
Panels and Sessions
Advanced Computational Models
AI in the Office
Impacts of Computer Technology on Employment
Organizational Analysis: Due Process
Future Directions in Office Technology
Comparison of Social Research Methods
Organizational Analysis: Organizational Ecology
Models of the Distributed Office
Interfaces
For more information, call the Conference Registrar at Brown U.
(401-813-1839), or send electronic mail to mhf@brown.CSNET.
------------------------------
End of AIList Digest
********************
∂01-Jul-86 1558 LAWS@SRI-AI.ARPA AIList Digest V4 #160
Received: from SRI-AI.ARPA by SU-AI.ARPA with TCP; 1 Jul 86 15:57:47 PDT
Date: Tue 1 Jul 1986 10:08-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #160
To: AIList@SRI-AI
AIList Digest Tuesday, 1 Jul 1986 Volume 4 : Issue 160
Today's Topics:
Queries - Expert Systems for Classification & Intelligent Databases &
Constraint-Propagation Inference Engines,
Education - Special J. Instructional Science Issue,
Review - The Evidence of the Senses,
Philosophy - Creativity and Analogy
----------------------------------------------------------------------
Date: 27 Jun 1986 10:55-PDT
From: balaji@usc-cse.usc.edu
Subject: Expert systems for evolutionary classification of fish
I am sending this message for a friend. I will forward replies to her.
Thanks.
Balaji
I am writing a LISP program to classify fish and determine their
evolutionary history. This involves nesting species of fish into a
hierarchical taxonomic arrangement, on the basis of characteristics shared
among the species. My program will need to determine whether characteristics
are primitive or advanced, in order to determine how they fit into the
taxonomic hierarchy. This requires some degree of heuristic
reasoning and this part of the program will probably be constructed as a
mini expert system. The nesting process (i.e. arranging the species in a
taxonomic hierarchy) is a more or less standard procedure, but requires
flexibility to utilize heuristics in some cases.
To get a better idea of how to go about writing my program, I would
like to find out about existing programs that deal with issues similar
to mine. If anyone has any suggestions about where I might find AI
systems that might help me, or papers on such systems, I would be most
appreciative if they send me a message.
Thank you.
Noelle Sedor
------------------------------
Date: Sun, 29 Jun 86 23:27:21 CDT
From: wucs!wucec2!grs0473@seismo.CSS.GOV (Guillermo Ricardo Simari)
Subject: Intelligent Databases
If you could add only one feature to a commercial relational
DBMS in order to make it more "intelligent",
what would be your choice?
If I got enough answers I'll post a summary.
+------------------------------------------------+
| ihnp4!cuae2!ltuxa!we53!wucs!wucec2!grs0473 |
| |
| Guillermo R. Simari |
| P.O.Box 3257 |
| Saint Louis, MO 63130-0657 |
+------------------------------------------------+
------------------------------
Date: 30 Jun 86 09:27 EDT
From: Siems @ DCA-EMS
Subject: constraint propagation inference engines
jerry feinstein and david bailey of booz, allen, and hamilton
are interested in any work being done in the area of constraint
propagation that might be applied to inference engines. the specific
interest is in the use of tight constraints and a non-optimizing,
simplex-like algorithm to find a quick, "satisficing" solution in
an ordered, though not necessarily numeric, problem space. this is
a follow-up on the constraint propagation workshop held at the
expert systems conference in avignon in april of this year. any
information on current work in this area or on the use of constraint
propagation in inference engines would be greatly appreciated.
thank you.
david bailey
booz, allen, and hamilton
4330 east west highway
bethesda, md 20014
(301)951-2155
------------------------------
Date: Wed, 25 Jun 86 09:50:07 edt
From: Bob Lawler <rwl1%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Notice of journal issue
[Forwarded from the AI-Ed Digest by Laws@SRI-AI.]
Dear Colleagues,
Today I received from Elsevier a special issue of the Journal of
Instructional Science on the theme of "AI and Education". This double-
number volume (several hundred pages in length) was prepared by
Masoud Yazdani (University of Exeter) and myself (Bob Lawler) as a
preliminary collection of articles prepared for the Second International
Conference on AI and Education held at Exeter University in September
1985. The issue is Volume 14, Nos. 3 and 4, dated May, 1986. A more
comprehensive book on the theme will be forthcoming at the end of 1986.
The contents of the special issue are as follows:
M. Yazdani and R. Lawler   AI and Education: an overview
A. DiSessa                 Artificial Worlds and Real Experience
W. Feurzeig                Algebra Slaves and Agents in a Logo-based
                           Mathematics Curriculum
R. Lawler and G. Lawler    Computer Microworlds and Reading
H. Lieberman               An Example Based Environment for Beginning
                           Programmers
S. Ohlsson                 Some Principles of Intelligent Tutoring
J. Self                    The Application of Machine Learning to
                           Student Modelling
A. Priest                  Solving Problems in Newtonian Mechanics
G. Drescher                Genetic AI: translating Piaget to Lisp
K. Carley                  Knowledge Acquisition as a Social Phenomenon
If you are interested in having a copy of this journal, write to:
Elsevier Science Publishers
Science and Technology Division
P.O. Box 330
1000 AH Amsterdam
The Netherlands
The price for this double-issue of the journal is $57.25, which
includes air transport to the US and surface mail on the continent.
Bob Lawler
(LAWLER at GTE-LABS on CSNET)
(LAWLER at MIT-OZ through MIT-MC on ARPANET)
------------------------------
Date: Sat, 28 Jun 86 20:41:12 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Book review: "The Evidence of the Senses"
"The Evidence of the Senses" by David Kelley
Louisiana State University Press, 1986, 262 pp., $27.50
"The Evidence of the Senses: a Realist Theory of Perception" is a
comprehensive philosophical treatment of perception, integrating
classical and recent work in philosophy and psychology. To those who
agree with its conclusions, it offers a sound, detailed framework for
psychological, biological and AI work in perception; to those who
don't, it offers an illuminating, profound and thought-provoking
alternative theory.
Dr. Kelley is formerly an assistant professor of philosophy and member
of the Cognitive Science program at Vassar College, and currently a
senior research fellow of the Ayn Rand Institute. His work is based on
the philosophy of Objectivism.
Almost all contemporary work in the theory of perception, including the
writings of philosophers, is devoted to detailed consideration of
specific issues, while taking for granted a wider context of basic
philosophical assumptions. In sharp contrast to this procedure, Dr.
Kelley makes his own basic assumptions fully explicit, defends them on
general philosophical grounds, and only then applies them to specific
issues. This makes it possible for him, when arguing against opposing
views, to argue in terms of essentials by recognizing the basic - often
hidden - assumptions on which these views and the arguments for them
rely.
A central theme of the book is the rejection of the "diaphanous model
of awareness" - the view that awareness of objects can't be mediated by
any process whose nature affects the way the objects appear; Dr.
Kelley demonstrates that this model has been accepted, explicitly or
implicitly, by almost all philosophers of perception since Kant, and it
is the root of all three common views of perception: naive realism,
which claims that our sensory apparatus is indeed diaphanous, and has
no effect on the appearance of external objects; representationalism,
which claims that we don't perceive external objects, but internal
representations which give information about these objects; and
idealism, which denies the existence of external objects.
Chapter 1 sets up the general epistemological framework for the book;
Dr. Kelley contrasts the diaphanous model with his own basic
assumption, "the primacy of existence" - the principle that
consciousness is the faculty of perceiving existence - which dispenses
with the need for making any prior assumptions about how consciousness
"should" work.
Chapters 2 through 5 apply this principle to perception. Chapter 2
deals with the relation between perception and sensation; Dr. Kelley
challenges the "sensationalist" approach - including its modern
"computational" version - which claims that perception is a process of
inference on sensations; he provides philosophical support for James
Gibson's theory of "direct perception" - which holds that external
objects are perceived directly, and that perception is a distinct form
of awareness, not composed out of sensation - and answers the major
criticisms against Gibson.
Chapter 3 treats the relation of an object to its sensory qualities.
The treatment is based on Ayn Rand's concept of "form of awareness",
which designates all perceived qualities which are relative to the
perceiver, distinguishing them from the perceived object and its
intrinsic properties; Dr. Kelley uses this concept to demonstrate the
consistency of perceptual relativity with direct realism, and
illustrates the principle in a discussion of visual illusions and in a
detailed treatment of colors; he then treats in this framework the
traditional distinction of primary vs secondary qualities.
Chapter 4 uses the principles established in previous chapters to
answer the major arguments for representationalism; this includes a
discussion of hallucinations and their relation to perception.
Chapter 5 concludes the discussion of perception by giving a full
definition - "perception is direct awareness of discriminated entities
by means of patterns of energy absorption by sense receptors" - and
discussing in detail each element in the definition and its
implications for each of the five senses.
Chapters 6 and 7 deal with perceptual knowledge, and the role of
perception as the base of conceptual knowledge. Chapter 6 discusses the
two common theories about the nature of justification: the
"foundational" theory, which holds that propositions about experiential
states are self-justifying and provide the foundation on which all
other knowledge is built as a hierarchy; and the "coherence" theory,
which holds that no single proposition can be justified outside the
context of the rest of a man's knowledge, and that the only way to
justify knowledge is by its self-consistency. Dr. Kelley identifies and
challenges the common premise implicit in both these positions - "the
propositional theory of justification", which holds that the only way
to justify a proposition is by inference from other propositions.
Chapter 7 deals with "perceptual judgments" - conceptual
identifications of perceived entities and their attributes. Dr.
Kelley's treatment of this subject is not complete, and he does not
offer a full theory; but he does indicate the direction such a theory
should take, and its implications for concept-formation. He discusses
the relation between the perceptual discrimination of an entity and the
reference to it in a perceptual judgment; the difference between
"construction" and "discovery" models of concept-formation, and their
relation to the possibility of justifying a perceptual judgment without
need for an inference from other propositions; the implications of
perceptual relativity for forming concepts of sensory qualities; and
the autonomy of perception, answering the various philosophical and
scientific arguments for the claim that perception and perceptual
judgments are affected by previous knowledge or desires.
The book is thoroughly organized, with careful attention to integration
of the various issues and to illustration of the abstract points; the
result is that, despite its highly technical content, it is very
readable. All technical terms are carefully explained, and therefore,
while reading the book will be easier for those with a previous
background in the theory of perception, such a background is not
necessary. The book contains extensive surveys of previous work and of
different views and arguments, with heavy use of references, and this
makes it an ideal starting-point for a study of the subject.
In conclusion, I strongly recommend this book to anyone seriously
interested in the theory of perception, and I think it is a must-read
for any psychologist, biologist, or AI researcher whose work involves
this subject.
Eyal Mozes
BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ..!ucbvax!eyal%wisdom.bitnet
------------------------------
Date: Fri, 27 Jun 86 11:23:52 edt
From: Jay Weber <jay@rochester.arpa>
Reply-to: jay@rochester.UUCP (Jay Weber)
Subject: Re: Creativity and Analogy
> Yes, I do want to understand "creativity" in terms of less slippery
> concepts, such as "analogy". We are forced to start with informal
> approaches but hope to find more formal definitions. I do not
> understand why a formal approach would satisfy very few people or
> why an informal approach would serve no useful purpose.
Consider the following view of analogy, consistent with its formal
treatment in many sources. A particular analogy, e.g. that which
exists between a battery and a reservoir, is a function that maps
from one category (a set of instances) to another. Equivalently, we
can view this function as a relation R between categories; in this
case, R is a particular kind of "storage capability". This relation
is certainly
1) reflexive. "A battery is like a battery" (under any relation)
2) symmetric. "A battery is like a reservoir" implies
"A reservoir is like a battery" under the same relation R
3) transitive. "A battery is like a reservoir" and
"A reservoir is like a ketchup bottle" imply
"A battery is like a ketchup bottle" WHEN THE SAME
ANALOGY HOLDS BETWEEN THEM (same R).
Then any analogy R is an equivalence relation, partitioning the space
of categories. Each analogy corresponds to a node in an abstraction
hierarchy which relates all of the sub-categories, SO THE SPACE OF
ANALOGIES MAPS ONTO THE SPACE OF ABSTRACTIONS, and so under these
definitions analogy and abstraction are equivalent.
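The argument above (a fixed analogy R is an equivalence relation, so it
partitions the space of categories, each class corresponding to one
abstraction) can be sketched in a few lines of code. The categories, the
feature tags standing in for R, and the partition helper below are all
hypothetical illustrations, not drawn from any cited system:

```python
# Sketch: a fixed analogy R, modeled as "same feature tag", partitions
# a set of categories into equivalence classes (= abstractions).

def partition(categories, same_analogy):
    """Group categories into equivalence classes under the relation.

    Symmetry and transitivity are assumed, so comparing against one
    representative per class suffices.
    """
    classes = []
    for c in categories:
        for cls in classes:
            if same_analogy(c, cls[0]):
                cls.append(c)
                break
        else:
            classes.append([c])
    return classes

# Hypothetical feature tags standing in for the relation R.
FEATURES = {
    "battery": "storage",
    "reservoir": "storage",
    "ketchup bottle": "storage",
    "hammer": "tool",
    "screwdriver": "tool",
}

def same_analogy(a, b):
    return FEATURES[a] == FEATURES[b]

print(partition(list(FEATURES), same_analogy))
```

Each resulting class corresponds to one node ("storage", "tool") in the
abstraction hierarchy the message describes.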
Now to the point: I recently presented this sketched proof to my peers
and they fought me whenever I tried to say "this is what analogy is"
rather than "this is what I define analogy to be" (with the latter claim
I probably should use a different term, like R-analogy or XYZZY). In fact,
no one could agree to a particular formal definition of the term "analogy",
since we all have individual formal definitions by virtue of the fact that
we will answer yes or no when given a potential analogy instance, so we
are formal language acceptors with our senses as input. This is what I
mean by a "slippery" term, i.e. one that has drastically different
meanings depending on its user. This is why I say a formal definition
of analogy would satisfy very few people. Informal definitions are
useless because, by definition, there is no notion of a valid inference
from the theory; we cannot make predictions with them and therefore
cannot do science with them (most "loose" definitions of things put
forward do have some formal properties, but one must be careful).
> I am sure that you do not imply that an analysis (formal or informal)
> of >anything< is futile. What is it about "creativity" that makes its
> analysis a no-win proposition?
"Creativity" is VERY slippery, perhaps only slightly less slippery than
"intelligence". Profit by Turing's example and keep your personal
definition of the slippery term in mind, but define a new one (e.g.,
Turing-test-intelligence) instead of asking for a definition of the
word in common usage.
Jay Weber
Department of Computer Science
University of Rochester
Rochester, N.Y. 14627
jay@rochester.arpa
------------------------------
End of AIList Digest
********************
∂07-Jul-86 1258 LAWS@SRI-AI.ARPA AIList Digest V4 #161
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 7 Jul 86 12:57:50 PDT
Date: Mon 7 Jul 1986 10:27-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #161
To: AIList@SRI-AI
AIList Digest Monday, 7 Jul 1986 Volume 4 : Issue 161
Today's Topics:
Conferences - Test and Evaluation Assoc. AI/Expert System Workshop &
Theoretical Issues in Natural Language Processing &
Database Theory 1986 - Program
----------------------------------------------------------------------
Date: Wed, 2 Jul 86 21:07 EDT
From: HCGRS%clemson.csnet@CSNET-RELAY.ARPA
Subject: Conference - Intl. Test & Evaluation Assoc. AI/Expert System Workshop
INTERNATIONAL TEST AND EVALUATION ASSOCIATION
Artificial Intelligence/Expert System WORKSHOP
AI/ES WORKSHOP PROGRAM
July 8-10, 1986
George Washington University
PHONE REGISTRATION (703) 893-0228
Tuesday, 8 July
Morning Session
0830-0840 - Welcome John Bolino, President ITEA
0840-0850 - Welcome President, GWU
0850-0900 - Admin (if any) Henry Alberts, ITEA Staff
0900-0940 - KEYNOTE Barry Silverman, GWU
0940-0950 - Coffee Break
0950-1030 - TIMS Dr. Peter McWhite, GRC
1030-1110 - TACTICAL AI Dr. Stuart Brodsky, Sperry
1110-1145 - APPLICATION OF Dan McDonough, USAF-AFOTEC
AI TO OT&E
Luncheon at the University Club, Marvin Center
Afternoon Session
1330-1630 - 30-40 person groups "Hands-On" sessions with
morning speakers and their systems.
Wednesday, 9 July
Morning Session
0830-0840 - Announcements Henry Alberts, ITEA Staff
0840-0920 - AUTO SWITCHING Ms. Marquerite Denocourt,
BELLCOM
0920-1000 - TESTPRO Dr. Anthony Mucciardi,
Infomatics
1000-1020 - Coffee Break
1020-1100 - NATC TECHMAN Mr. George Hurlburt, USN/NATC PAX
Dr. Joel Simkol, GWU
Luncheon at the University Club, Marvin Center
Afternoon Session
1330-1630 - 30-40 person groups "Hands-On" sessions with
morning speakers and their systems.
Thursday, 10 July
Morning Session
0830-0840 - Announcements Henry Alberts, ITEA Staff
0840-1145 - Panel Discussion
Panelists: Charles K. Watt Georgia Tech
Richard A. DeMillo Georgia Tech
Barry Silverman GWU
H. Steve Kimmel ODUSDRE(T&E)
0840-0940 - Panel Opening Statements
0940-1010 - Coffee Break
1010-1145 - Open Discussion between Panel and Audience
1140-1200 - Closing Remarks John Bolino, President ITEA
Luncheon at the University Club, Marvin Center
Afternoon Session
1330-1445 - 30-40 person groups "Hands-On" sessions with
speakers and their systems.
Evening Session
1500-1700 - Movie & Tour - Air & Space Museum, Museum Staff
1730-1930 - General Discussion - U.S. Senate Caucus Room
Hon. John Warner, Senate Armed Services
Committee -- Sponsor
-- Dr. Harold C. Grossman
Dept. of Computer Science
Clemson University
Clemson, SC
hcgrs@clemson.csnet
------------------------------
Date: Thu, 3 Jul 86 15:21:59 mdt
From: yorick%nmsu.csnet@CSNET-RELAY.ARPA
Subject: Conference - Theoretical Issues in Natural Language Processing
CC C R R R L C O M P U T I N G
CC R R L R E S E A R C H
C R R L L A B O R A T O R Y
CC R R L
CC R R L Box 3 CRL
CC C R R LLLLLLLL NMSU, Las Cruces 88003
Tinlap3
January 7,8,9, 1987
Tinlap3 will be the third in the series of interdisciplinary workshops
Theoretical Issues in Natural Language Processing.
The format will be as at MIT (1975) and Illinois (1978): invited panels
of distinguished figures in the field will discuss pre-circulated
statements of position. Lively audience participation is anticipated.
The panels are intended to cover the major contentious issues of the
moment.
Tinlap3 is being supported by the Association for Computational Linguistics
and funds are also being sought from NSF, AAAI, and ACM.
Tinlap Grand Committee:
Nick Cercone (Simon Fraser University),
Richard Rosenberg (Dalhousie University),
Roger Schank (Yale University),
David Waltz (Brandeis University),
Bonnie Webber (University of Pennsylvania).
Tinlap3 General Chair: Andrew Ortony (University of Illinois)
Tinlap3 Program Chair: Yorick Wilks (New Mexico State University)
Panels and their Chairs will be:
* Connectionist and other parallel approaches to natural language processing
(Dave Waltz, Thinking Machines & Brandeis)
* Unification and the new grammatism
(Fernando Pereira, SRI)
* Words and world representations
(Don Walker, Bellcore)
* Formal versus commonsense semantics
(Yorick Wilks, NMSU)
* Why has theoretical NLP made so little progress?
(to be confirmed)
* Discourse theory and speech acts.
(Barbara Grosz, SRI)
* Reference: the interaction of language and the world
(Doug Appelt, SRI)
* Metaphor
(Dedre Gentner, U. Illinois)
* Natural language generation
(Aravind Joshi, U. Pennsylvania)
Registration:
Registration covers pre-circulated preprints, mid-session refreshments,
some local transportation, and administration.
Registration fees: Non-student: $50 ($40 if registered before Aug. 20, 1986)
Full-time students: $30 ($25 if registered before Aug. 20, 1986)
Registration Form: [Deleted -- contact author for copy. -- KIL]
Registrants should fill out and print the form, sign it, and send hardcopy
with check made payable to NMSU Foundation to
Tinlap3,
Box3CRL, NMSU, Las Cruces, NM 88003.
Sending a copy of your registration by return netmail will also ensure
that you are quickly added to mailings of further materials.
Where: at New Mexico State University main campus (Las Cruces), Rio Grande
Corridor for Technical Excellence, Computing Research Lab.
(505-646-5466) for further details.
Forming the western corner of a triangle with White Sands and El Paso,
Las Cruces is a city of about 50,000 people in southern NM. Las Cruces is
situated between the spectacular Organ Mountains fifteen miles to the east,
and the historic Rio Grande to the west. Two miles west of Las Cruces,
near the Rio Grande, is La Mesilla, the old Mexican village where the Gadsden
Purchase was signed. The town square is bordered by restaurants and shops,
with Indian arts -- pottery, paintings, jewelry, baskets, and weaving.
Also nearby are the White Sands National Monument (about 55 miles),
the Carlsbad Caverns (about 160 miles), and Sierra Blanca, a 12,000 foot
mountain with fine skiing (about 130 miles).
The weather in early January is usually clear and sunny, with temperatures
usually in the 50's in the daytime, and the 20's at night. Good skiing is
one and a half hours away.
Note:
Full program will be mailed to all registrants in September and
the preprints in December. Detailed accommodation and travel information
will be sent on receipt of completed registration form.
Hotel rates will range from $20 to $50 per night. Since accommodation
may be limited, it is advisable to register early to obtain hotel
information.
------------------------------
Date: Thu, 3 Jul 86 10:01:48 -0200
From: Moshe Vardi <vardi%wisdom.bitnet@WISCVM.ARPA>
Subject: Conference - Database Theory 1986 - Program
International Conference on Database Theory
PROGRAM
MONDAY, SEPTEMBER 8
Registration and coffee: 8:00am-10:30am
Session 1. 10:30am-1:00pm. Chairperson: Giorgio Ausiello
Database Queries and Programming Constructs (Invited Lecture), Ashok
K. Chandra (IBM T.J. Watson Research Center, USA)
Presentation of the Witold Lipski Award to V.S. Lakshmanan.
Split-Freedom and MVD-Intersection: A New Characterization of
Multivalued Dependencies Having Conflict-Free Covers, V. S. Lakshmanan
(Indian Institute of Science, India)
A Polynomial-time Join Dependency Implication Algorithm for Unary
Multi-valued Dependencies, George Loizou (Birkbeck College, Univ. of
London, UK), P. Thanisch (Lattice Logic, UK)
Horizontal Decompositions Based on Functional-Dependency-Set-
Implications, Paul De Bra (University of Antwerp UIA, Belgium)
Luncheon: 1:00pm-2:30pm
Session 2. 2:30pm-4:00pm. Chairperson: TBA
Introduction to the Theory of Nested Transactions, Nancy A. Lynch
(MIT, USA), Michael Merritt (AT&T Bell Laboratories, USA)
The Cost of Locking, Peter K. Rathmann (Stanford University, USA)
Update Serializability in Locking, R. C. Hansdah, L. M. Patnaik
(Indian Institute of Science, India)
Coffee Break: 4:00pm-4:30pm.
Session 3. 4:30pm-6:00pm. Chairperson: John Mylopoulos.
Restructuring of Semantic Database Objects and Office Forms, Serge
Abiteboul (INRIA, France), Richard B. Hull (University of Southern
California, USA)
Entity-Relationship Consistency for Relational Schemas, Johann A.
Makowsky, Victor M. Markowitz, N. Rotics (Technion, Israel)
Unsolvable Problems Related to the View Integration Approach,
Bernhard Convent (Universitaet Dortmund, Fed. Rep. of Germany)
TUESDAY, SEPTEMBER 9
Session 4. 9:00am-10:45am. Chairperson: Domenico Sacca`
Logic Programming and Parallel Complexity (Invited Lecture), Paris
Kanellakis (Brown University, USA)
Updating Logical Databases Containing Null Values, Marianne Winslett
Wilkins (Stanford University, USA)
Update Semantics under the Domain Closure Assumption, Laurence Cholvy
(ONERA-CERT-DERI, France)
Coffee Break: 10:45am-11:15am
Session 5. 11:15am-12:45pm. Chairperson: Jan Paredaens
On the Desirability of Gamma-Acyclic BCNF Database Schemes, Edward
P.F. Chan, Hector J. Hernandez (University of Alberta, Canada)
Set Containment Inference, Paolo Atzeni (IASI-CNR, Italy), D. Stott
Parker (UCLA, USA)
Interaction-Free Multivalued Dependency Sets, Dirk Van Gucht (Indiana
University, USA)
Luncheon: 12:45pm-2:30pm
Session 6. 2:30pm-4:00pm. Chairperson: TBA
Efficient Multidimensional Dynamic Hashing for Uniform and Non-Uniform
Record Distributions, Hans-Peter Kriegel, Bernhard Seeger
(Universitaet Wuerzburg, Fed. Rep. of Germany)
List Organizing Strategies Using Stochastic Move-to-Front and
Stochastic Move-to-Rear Operations, B. John Oommen (Carleton
University, Canada), E. R. Hansen (Lockheed Missiles and Space Co.,
USA)
Coffee Break: 3:30pm-4:00pm.
Session 7. 4:00pm-5:30pm. Chairperson: TBA
A Domain Theoretic Approach to Higher-Order Relations, Peter Buneman
(University of Pennsylvania, USA)
Theoretical Foundation of Algebraic Optimization Utilizing
Unnormalized Relations, Marc H. Scholl (Technische Hochschule
Darmstadt, Fed. Rep. of Germany)
Modelling Large Bases of Categorized Data with Acyclic Schemes, F. M.
Malvestuto (ENEA, Italy)
Banquet: 8:00pm-11:00pm
WEDNESDAY, SEPTEMBER 10
Session 8. 9:00am-10:45am. Chairperson: TBA
The Generalized Counting Method for Recursive Logic Queries (Invited
Lecture), Carlo Zaniolo (MCC, USA)
Some Extensions to the Closed World Assumption in Databases, Shamim
A. Naqvi (MCC, USA)
Query Processing in Incomplete Logical Databases, Nadine Lerat
(Universite` de Paris-Sud, France)
Filtering Data Flow in Deductive Databases, Michael Kifer (SUNY at
Stony Brook, USA), Eliezer L. Lozinskii (Hebrew University, Israel)
Coffee Break: 11:15am-11:45am
Session 9. 11:45am-12:45pm. Chairperson: TBA.
A New Characterization of Distributed Deadlock in Databases, Ouri
Wolfson (Technion, Israel)
Towards Online Schedulers Based on Pre-Analysis Locking, Georg Lausen
(Technische Hochschule Darmstadt, Fed. Rep. of Germany), Eljas
Soisalon-Soininen (University of Helsinki, Finland), Peter Widmayer
(Universitaet Karlsruhe, Fed. Rep. of Germany)
PROGRAM COMMITTEE
S.Abiteboul (France); G.Ausiello (Italy), chairman; F.Bancilhon
(France, USA); A.D'Atri (Italy); M.Moscarini (Italy); J.Mylopoulos
(Canada); J-M.Nicolas (France, West Germany); J.Nievergelt
(Switzerland); C.H.Papadimitriou (Greece, USA); J.Paredaens (Belgium);
D.Sacca` (Italy); N.Spyratos (France); J.D.Ullman (USA); M.Y.Vardi
(USA).
REGISTRATION
Registration, except for students, includes technical sessions, one
copy of the preprints of the proceedings, luncheons (Monday and
Tuesday), banquet (Tuesday), and refreshments during the coffee breaks.
Student registration is available to full-time students only, and must
be documented by a faculty member certification or photocopy of student
card, and includes the technical sessions, preprints and refreshments.
Registration fee:
Before Aug.15 After Aug.15
Member of IEEE or EATCS: Lit. 180000 [ ] 250000 [ ]
US $ 120 [ ] 165 [ ]
Nonmember: Lit. 200000 [ ] 270000 [ ]
US $ 135 [ ] 180 [ ]
Student: Lit. 75000 [ ] 100000 [ ]
US $ 50 [ ] 65 [ ]
[...]
GENERAL INFORMATION
LOCATION: Conference activities will take place in the headquarters of the
Italian Research Council, in front of the main campus of the University
of Rome "La Sapienza":
CNR: Consiglio Nazionale delle Ricerche
Piazzale Aldo Moro 7
MAIL AND MESSAGES: The official mailing address of ICDT'86 is:
ICDT'86 c/o Paolo Atzeni
IASI-CNR
Viale Manzoni 30
00185 Roma Italy
Telephone (before the conference) +39 (6) 770031
(during the conference) +39 (6) 4993379
Telex: 610076 CNRRM I (Attention: Dr. Atzeni IASI)
During the conference, participants can receive mail at the above
address, but should have telephone messages directed to their
respective hotels.
TRANSPORTATION: Aeroporto Leonardo Da Vinci, Fiumicino, is Rome
International Airport. ACOTRAL buses leave the airport every 20 or 30
minutes for the downtown air terminal, located in Via Giolitti, at the
main railway station (Stazione Termini). The hotels are within walking
distance of the terminal (300 m). ACOTRAL costs Lit.6000 (about US $
4.00), and tickets must be bought within the airport, before boarding
the bus. Taxi fare from the airport to downtown is about Lit.45000
(about US $ 30) (authorized taxi cabs are yellow and have a license
number; use only yellow taxis and ask for a receipt).
Detailed information on how to get to the conference site (1500 m
from the hotels) will be available at the hotels.
BANQUET: The conference banquet will be held at Hotel Columbus, (Via della
Conciliazione 33, near the Vatican). Vegetarian meals will be available
only to preregistrants requesting them. Additional tickets for the
banquet will be available at the registration desk for Lit.50000.
TRAVEL INFORMATION: American Express offers various half-day tours of Rome
every day, in the morning and in the afternoon, for about Lit.
30000-35000 (US $ 20 - 23), and one or two days tours to other
interesting locations. Information requests to American Express can be
sent together with hotel reservations.
CLIMATE: Weather in Rome in September is quite warm, with temperatures
between 25 and 30 degrees C (77 - 86 degrees F).
THINGS TO SEE AND TO DO: Anything you like; the decision problem may be
unsolvable.
The organizers of ICDT'86 would like to thank the following financial
supporters.
- Banca Nazionale del Lavoro
- Consiglio Nazionale delle Ricerche
- Enidata S.p.A.
- Selenia S.p.A.
- Universita` di Roma "La Sapienza"
------------------------------
End of AIList Digest
********************
∂07-Jul-86 1531 LAWS@SRI-AI.ARPA AIList Digest V4 #162
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 7 Jul 86 15:31:09 PDT
Date: Mon 7 Jul 1986 10:36-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #162
To: AIList@SRI-AI
AIList Digest Monday, 7 Jul 1986 Volume 4 : Issue 162
Today's Topics:
Queries - Teaching CommonLisp & CPROLOG on VAX/VMS &
Architectures for Interactive Systems,
AI Tools - Scheme and CommonLisp
Philosophy & Brain Theory - Representationalist Perception,
Natural Language - References,
Journals - AI Expert
----------------------------------------------------------------------
Date: Wed 2 Jul 86 09:40:53-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: Teaching CommonLisp
Lisp Teachers (or previous learners),
I am interested in collecting comments regarding your experiences and
preferences with texts for teaching/learning Lisp. The implementation
available for the specific course I expect to be teaching is Golden
CommonLisp on an IBM-PC, in case you want to factor that into your
comments. You might want to think of this in the traditional way,
which book(s) would you make required, highly recommended, or
optional. Any other comments on teaching Lisp would be of interest.
The students in this class will range from undergraduates who are
novice programmers majoring in fields outside of CS, Math, or the
sciences to CS majors and possibly graduate students. This course is
not being taught in a Computer Science Department, and few
constraints have been placed on me. Hands-on lab sessions are
possible as well as lectures.
Mark
------------------------------
Date: 3 Jul 86 13:37:00 EST
From: "CPT.GREG.ELDER" <elder@wpafb-info1.ARPA>
Reply-to: "CPT.GREG.ELDER" <elder@wpafb-info1.ARPA>
Subject: Help with CPROLOG on VAX/VMS
Please excuse me if this is not the appropriate list for this message.
I am looking for anyone running CPROLOG on a VAX under VMS 4.2. We
have a problem when typing CONTROL-C under CPROLOG to enter the debug
mode so as to be able to turn on tracing. If anyone has CPROLOG
running successfully on a VAX under VMS 4.2, I would appreciate
hearing from you.
Thanks.
Greg Elder
ARPA: elder@wpafb-info1
CSNET: gelder@wright
------------------------------
Date: Thu, 3 Jul 86 18:03:11 edt
From: brant%linc.cis.upenn.edu@CIS.UPENN.EDU
Subject: Architectures for interactive systems?
There seems to have been a great deal of work done in
natural language processing, yet so far I am unaware of
any attempt to build a practical yet theoretically well-
founded interactive system or an architecture for one.
When I use the phrase "practical yet theoretically well-
founded interactive system," I mean a system that a user
can interact with in natural language, that is capable of
some useful subset of intelligent interactive (question-
answering) behaviors, and that is not merely a clever hack.
Many of the sub-problems have been studied at least once.
Work has been done on various types of necessary response
behavior, such as clarification and misconception correction.
Work has been done on parsing, semantic interpretation, and
text generation, and other problems as well. But has any
work been done on putting all these ideas together in a
"real" system? I see a lot of research that concludes with
an implementation that solves only the stated problem, and
nothing else. Presumably, a "real user" will not want to
have to run system A to correct invalid plans, system B to
answer direct questions, system C to handle questions with
misconceptions, and so forth.
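The scenario above (separate systems A, B, and C for plan correction,
direct questions, and misconception handling) suggests a single
dispatching front end. A toy sketch follows; the module names and the
keyword-based router are purely hypothetical stand-ins for the parsing
and discourse machinery a real integrated system would need:

```python
# Toy integrated front end: one entry point routes each utterance to a
# specialist module instead of requiring separate systems A, B, C.
# All modules and routing cues here are hypothetical illustrations.

def correct_plan(utterance):
    return "plan module: " + utterance

def answer_question(utterance):
    return "qa module: " + utterance

def handle_misconception(utterance):
    return "misconception module: " + utterance

# Ordered (cue, handler) pairs; first matching cue wins.
ROUTES = [
    ("plan", correct_plan),
    ("why not", handle_misconception),
    ("?", answer_question),
]

def dispatch(utterance):
    """Route an utterance to one module; a real system would use
    parsing, semantic interpretation, and discourse state instead
    of keyword matching."""
    for cue, module in ROUTES:
        if cue in utterance.lower():
            return module(utterance)
    return "no module available"

print(dispatch("Why not use a stack here?"))
```

The point of the sketch is architectural: the user sees one system, and
the specialist behaviors live behind a shared interface.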
I would be interested to get any references to work on such
integrated systems. Also, what are people's opinions on this
subject: are practical NLP systems too hard to build now? Should we
leave the construction of practical systems to private enterprise
and restrict ourselves to the basic research problems?
If we do so, how can we be sure we're actually making any
contribution at all?
Brant
====================
Brant Cheikes
Department of Computer and Information Science
University of Pennsylvania
ARPA: brant@linc.cis.upenn.edu
CSNET: brant%upenn-linc@upenn
------------------------------
Date: Wed 2 Jul 86 09:26:53-PDT
From: Mark Richer <RICHER@SUMEX-AIM.ARPA>
Subject: Scheme and CommonLisp
SCHEME and COMMONLISP
*********************
On June 10th, 1986 I sent out a request for feedback on the language Scheme.
In particular, I was interested in how appropriate the language would be
for a large-scale development effort in ICAI versus Commonlisp. Implicit
in this question are concerns about available implementations including
development environments, efficiency, compactness, ease of learning,
portability, etc. Below is a summary of comments. If you want to see the
whole file of messages (13) I will send it to you upon request.
Advantages of Scheme (compared to other Lisps including Commonlisp):
********************************************************************
Simple
Consistent
Small (easy to learn and can be implemented well on small, standard machines)
Elegant
Semantics of language are clean
Closures and lexical scoping are handled well
Migration to (i.e., learning) other dialects of Lisp should not be a problem
Portable (but someone has to have implemented it on the target machine)
Supports object-oriented programming and multiple processes
For above reasons, it is very appropriate for learners, especially if the
goal is to teach basic principles in computer science
A net address to reach experts: SCHEME-TEAM%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Advantages of Commonlisp
************************
Widely accepted standard
Large, growing user community
Lisp language development is concentrated on Commonlisp now
Best programming environments do/will support Commonlisp
Many built-in functions and features (may overwhelm beginners, but very useful)
Portable (obvious reasons to expect good implementations, compilers, etc.)
Commonlisp does require more memory than Scheme, but given the
increasing availability of inexpensive large memories that issue might
vanish.
There is a Commonlisp mailing list, Common-Lisp@SU-AI.ARPA. I assume
you need to contact Common-Lisp-Request@SU-AI.ARPA to get on the list,
unless you have access to it through a local bboard.
Other comments
--------------
Scheme IS a dialect of Lisp, an UnCommonLisp though.
Proust, an ICAI program, is implemented in T, a dialect of Scheme.
Abelson and Sussman's "Structure and Interpretation of Computer
Programs" (MIT Press, 1985) is highly recommended for everyone to read
and is also suggested as a text to teach computer science (Scheme is
the language used throughout the book).
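One of the Scheme advantages listed above, closures with lexical
scoping, can be illustrated in any lexically scoped language. Here is a
minimal Python analogue of the classic Scheme counter; the make_counter
name and the example itself are just an illustration, not from any of
the messages summarized:

```python
# A closure captures its defining environment -- the property the
# Scheme comments above praise. Each call to make_counter creates a
# fresh lexical environment with its own count binding.

def make_counter():
    count = 0
    def counter():
        nonlocal count  # refers to the lexically enclosing binding
        count += 1
        return count
    return counter

c1, c2 = make_counter(), make_counter()
assert (c1(), c1(), c2()) == (1, 2, 1)  # each closure keeps its own state
```

The same structure in Scheme would use a let-over-lambda; the point is
that the counter's state lives in the closure, not in a global variable.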
------------------------------
Date: Wed 2 Jul 86 11:07:50-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Representationalist Perception
Mozes' long review of Kelley's book "The Evidence of the Senses" tells one
a lot about the book. In particular, it sounds as though it makes the same
basic mistake about representations that many other 'anti-computationalist'
philosophers , including Gibson and his followers, make. The
'representatonalist' account of perception does NOT claim that instead of
perceiving the world, we perceive internal representations of the world.
That would indeed be a position with many difficulties. Rather, it says
that the WAY we perceive the world is BY making representations of it.
The data structures are, to put it simply, the output of the perceptual
process, not its input. The question the representational position must
face is how such things (representations) can serve as percepts in the
overall cognitive framework. While there are indeed many problems here,
the position is not as silly as Gibson thought it was.
Pat Hayes
------------------------------
Date: 1 Jul 86 14:08:00 PST
From: sefai@nwc-143b.ARPA
Reply-to: <sefai@nwc-143b.ARPA>
Subject: Natural Language References
As promised, the following is a list of references on Natural
Language. I'd like to thank all who contributed references as well as
suggestions. Before I can definitely commit to my topic, I need to
investigate work done by Harris and Wiley, Sager, and Winograd. Hopefully,
I'll nail this down before summer's end. Will keep you posted.
Gene Guglielmo
SEFAI@NWC-143B
China Lake, Ca.
.rA Ananiashviii, G.G.
.rA Mundzhishvii, Z.I.
.rA Bichashvii, N.N.
.rP Word Identification in a Natural Language in Interactive Systems
.rC Soobshch. Akad. Nauk Gruzin. SSR
.rD 1984
.rA Boguraev, B.K.
.rA Jones, K.S.
.rP A Framework for Inference in Natural Language Front Ends to Databases
.rI University of Cambridge Computer Laboratory
.rC Report No. 64
.rD 1985
.rA Brachman, Ron (ed)
.rA Levesque, Hector (ed)
.rB Readings in Knowledge Representation
.rI Morgan Kaufmann Publishers, Inc.
.rW Palo Alto, California
.rD 1986
.rA Briggs, R.
.rP Transcendental Semantic Primitives for Natural Language Processing
.rI Research Institute for Advanced Computer Science, NASA Ames
Research Center
.rC RIACS Technical Report TR 85.14
.rW Moffett Field, California
.rD 1985
.rA Damerau, F.J.
.rP An Interactive Customization Program for a Natural Language Database
Query System
.rI IBM Research Division
.rC Report No. 10411
.rD 1984
.rA Damerau, F.J.
.rP Problems and Some Solutions in Customization of Natural Language Data
Base Front Ends
.rI IBM Research Division
.rC Report No. 10872
.rD 1984
.rA Dyer, M.G.
.rB In-Depth Understanding: A Computer Model of Integrated Processing
for Narrative Comprehension
.rI MIT Press
.rW Cambridge, Massachusetts
.rD 1986
.rA Enomoto, H.
.rP TELL: a Natural Language Based Software Development System
.rI Institute for New Generation Computer Technology
.rC Report No. 67
.rD 1984
.rA Findler, Nicholas V. (ed)
.rB Associative Networks: Representation and Use of Knowledge by Computers
.rI Academic Press
.rW NY
.rD 1983
.rA Frederking, R.E.
.rP Syntax and Semantics in Natural Language Parsers
.rI Carnegie-Mellon University
.rC Department of Computer Science
.rC Report No. 85-133
.rD 1985
.rA Harris, M.D.
.rB Introduction to Natural Language Processing
.rA Harris, Z.
.rI Wiley
.rB A Grammar of English on Mathematical Principles
.rD 1984
.rA Ibragimov, T.I.
.rB Cybernetics and Natural Languages
.rA Jacobs, P.S.
.rP PHRED: A Generator for Natural Language Interfaces
.rI University of California
.rC Berkeley Computer Science Division
.rC Report No. 85-198
.rD 1985
.rA Johnson, D.E.
.rP Design of a Robust, Portable Natural Language Interface Grammar
.rI IBM Research Division
.rC Report No. 10867
.rD 1984
.rA Johnson, T.
.rB Natural Language Computing: The Commercial Applications
.rI Ovum Limited
.rW London
.rA Kalita, J.K.
.rP Generating Summary Responses to Natural Language Database
.rI University of Saskatchewan
.rC Report No. 84-9
.rD 1984
.rA Kandrirody, A.
.rA Kapur, D.
.rA Narendran, P.
.rB An Ideal-Theoretic Approach to Word Problems and Unification Problems
over Finitely Presented Commutative Algebras
.rA Karpen, J.L.
.rP The Digitized Word: Orality, Literacy, and the Computerization of
Language
.rC Ph.D. thesis
.rI Bowling Green State University
.rW Bowling Green, Ohio
.rD 1984
.rA Marcus, M.P.
.rB A Theory of Syntactic Recognition for Natural Language
.rI MIT Press
.rW Cambridge, Massachusetts
.rD 1985
.rA Mays, E.
.rP A Modal Temporal Logic for Reasoning About Changing Databases with
Applications to Natural Language Question Answering
.rI University of Pennsylvania
.rC Moore School of Electrical Engineering
.rC Department of Computer Science
.rC Report No. 85-01
.rD 1985
.rA Michalski, R.S.
.rA Carbonell, J.G.
.rA Mitchell, T.M.
.rB Machine Learning: An Artificial Intelligence Approach, Volume II
.rI Morgan Kaufmann Publishers, Inc.
.rW Palo Alto, California
.rD 1986
.rA Neumann, B.
.rP Natural Language Descriptions of Time-Varying Scenes
.rI Universitaet Hamburg.
.rC Fachbereich Informatik
.rC Report No. 105
.rD 1984
.rA Orlowska, E.
.rP The Montague Formalization of Natural Language
.rI Polish Academy of Sciences
.rC Institute of Computer Sciences
.rC Report No. 105
.rD 1984
.rA Petrick, S.R.
.rP Natural Language Database Query Systems
.rI IBM Research Division
.rC Report No. 10508
.rD 1984
.rA Rau, L.F.
.rP The Understanding and Generation of Ellipses in a Natural Language
System
.rI University of California Berkeley
.rC Computer Science Division
.rC Report No. 85-227
.rD 1984
.rA Sager, Naomi
.rB Natural Language Information Processing
.rI Addison-Wesley
.rW Reading
.rA Saint-Dizier, P.
.rP An Approach to Natural Language Semantics in Logic Programming
.rI Institut National de Recherche en Informatique et en Automatique
.rC Report No. 389
.rA Salton
.rA McGill
.rB Introduction to Modern Information Retrieval
.rA Schank, R.C (ed)
.rA Colby, K.M. (ed)
.rB Computer Models of Thought and Language
.rI W.H.Freeman and Company
.rW San Francisco
.rD 1973
.rA Schank, R.C.
.rB Conceptual Information Processing
.rI Elsevier Science Publishers B.V.
.rW Amsterdam
.rD 1984
.rA Schank, R.C.
.rA Childers, P.G.
.rB The Cognitive Computer
.rI Addison-Wesley
.rW Reading
.rD 1984
.rA Shieber, Stuart M.
.rP An Introduction to Unification-based Approaches to Grammar
.rI University of Chicago Press
.rC CSLI Lecture Note Series
.rD 1986
.rA Sowa, John F.
.rB Conceptual Structures: Information Processing in Mind and Machine
.rI Addison-Wesley
.rW Reading
.rD 1984
.rA VanRijsbergen
.rB Information Retrieval, 2nd Edition
.rA Winograd, Terry
.rB Language as a Cognitive Process, Volume 1: Syntax
.rI Addison-Wesley
.rW Reading
.rD 1983
.rP Large-Dictionary, On-Line Recognition of Spoken Words
.rI Helsinki University of Technology
.rC PB84-214246/CAO
.rD 1983
.rB Natural Language Processing: A Knowledge Engineering Approach
.rL 0-8476-7358-8
------------------------------
Date: Sat 5 Jul 86 13:07:49-CDT
From: CMP.BARC@R20.UTEXAS.EDU
Subject: AI Expert
Since the new "AI Expert" magazine was given such a glowing review, I thought
the ensuing raft of potential subscribers might be interested to know that
they can do a bit better than the $27 yearly subscription rate (which
includes the premiere issue and 12 others). Recent issues of its sister publication "Computer
Language" include savings certificates that offer the 13-issue package for
$22.
Dallas Webster
CMP.BARC@R20.UTexas.Edu
------------------------------
End of AIList Digest
********************
∂07-Jul-86 1951 LAWS@SRI-AI.ARPA AIList Digest V4 #163
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 7 Jul 86 19:49:32 PDT
Date: Mon 7 Jul 1986 10:41-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #163
To: AIList@SRI-AI
AIList Digest Monday, 7 Jul 1986 Volume 4 : Issue 163
Today's Topics:
AI Tools - The Logix System,
Psychology - Psychnet BBoard,
Games - Hitech Results,
Techniques - Hopfield Networks for Traveling Salesman,
Opinion - Common Sense,
Philosophy - Creativity and Analogy
----------------------------------------------------------------------
Date: Thu, 3 Jul 86 17:03:11 -0200
From: Ehud Shapiro <udi%wisdom.bitnet@WISCVM.ARPA>
Subject: The Logix system
We are pleased to announce the availability of the Logix system, an
experimental Flat Concurrent Prolog program development environment.
Logix can be used to study and experiment with concurrent logic
programming, and to develop applications that can benefit from
combining the expressive power of concurrency with that of the logical
variable.
Logix is not a conventional programming environment; although presently
a single-user, single-processor system, its basic design scales to a
multiprocessor, multiuser system. With its novel approach to parallel
computation control, its concept of active modules and its
object-oriented design of system hierarchies, it is an interesting
system to study in its own right. For the same reason it may be
overdeveloped for the casual user in certain respects (e.g. its
multitasking capabilities), and underdeveloped in others (e.g.
interactive help, "friendliness").
Logix includes an FCP compiler to an abstract machine instruction set
and a C emulator of the abstract machine. With the exception of the
emulator and a few kernels, it is written entirely in Flat Concurrent
Prolog. The system was bootstrapped in Summer 1985, and has seen
extensive use and development since. It was used to develop
applications (including Logix itself) whose total size is over 20,000
lines of FCP source code.
Logix is available on Vax and Sun computers, under the Berkeley Unix
and Ultrix operating systems. It is expected that applications
developed under Logix would run almost directly on a multiprocessor
implementation of Flat Concurrent Prolog; the availability of such a
prototype system for the Intel iPSC hypercube is announced separately.
The handling fee for a non-commercial license to the Logix system
is $250 U.S. To obtain a license form and/or a copy of the Logix user
manual write to:
Mr Yossef Dabby
Department of Computer Science
The Weizmann Institute of Science
Rehovot 76100, Israel
To obtain an electronic copy of the license write to:
CSnet, Bitnet: logix-request@wisdom
ARPAnet: logix-request%wisdom.bitnet@wiscvm.arpa
References
[1] A. Houri and E. Shapiro, "A sequential abstract machine for Flat
Concurrent Prolog", Weizmann Institute Technical Report CS86-20,
1986.
[2] W. Silverman, M. Hirsch, A. Houri, and E. Shapiro, "The Logix system
user manual, Version 1.21", Weizmann Institute Technical Report
CS86-21.
[3] M. Hirsch, W. Silverman, E. Shapiro, "Layers of protection and
control in the Logix system", Weizmann Institute Technical Report
CS86-19, 1986.
------------------------------
Date: Mon, 30 Jun 86 12:34:27 CDT
From: Robert C. Morecock <EPSYNET@UHUPVM1>
Reply-to: EPSYNET@UHUPVM1
Subject: Announcement of new bboard named psychnet
[Forwarded from Arpanet-BBoards by Laws@SRI-AI.]
PSYCHNET (tm) Psychology Newsletter and Mailing List EPSYNET@UHUPVM1
The Psychnet mailing list and Newsletter sends out information and
news to those who sign up. Within Bitnet, Psychnet is also a 24-hour
server machine which mails out files to users who first send the
PSYCHNET HELP command to userid UH-INFO at node UHUPVM1. Outside
Bitnet, Psychnet is a mailing list and Newsletter only. Once per week
ALL members receive the latest Psychnet Newsletter and Index of files
available on the server machine. Outside Bitnet, if a file looks
interesting send an E-mail request to userid EPSYNET (NOT uh-info) at
node UHUPVM1 and the file will be shipped out to you. Persons within
Bitnet may also sign up for the mail list and will get the Newsletter and
Index along with other news. Users within Bitnet should get their
files directly from the server machine. An Exec file is available for
CMS users and COM files are available for VAX users within Bitnet.
If you have a file or idea you wish distributed to members of the
list you may send it to userid EPSYNET at node UHUPVM1 and it will be
sent out for you, usually with the week's Psychnet Newsletter. An
initial formal purpose of Psychnet is distribution of academic papers
in advance of this year's (1986) APA convention. Other purposes will
develop according to the needs and interests of the profession and
Psychnet users.
All requests to be added to or deleted from the mailing list, or to
have files distributed should be sent to:
Coordinator: Robert C. Morecock, Psychnet Editor, EPSYNET@UHUPVM1
------------------------------
Date: 6 Jul 86 22:37:15 EDT
From: Murray.Campbell@k.cs.cmu.edu
Subject: Hitech results
[Forwarded from the CMU bboard by Laws@SRI-AI.]
Hitech had a tough day, but set a new milestone for computer chess.
In round 8, Hitech drew International Master Michael Rohde, rated
2602, for what we believe is the first draw by a computer against
a titled player in regular tournament play. In round 9 Hitech
lost to Hungarian Grandmaster Gyula Sax, rated 2769.
Overall Hitech finished with 5.5/9, a respectable score given the
quality of the competition. The performance rating was approximately
2440.
------------------------------
Date: Sat, 5 Jul 86 21:53:36 EDT
From: ambar@EDDIE.MIT.EDU (Jean Marie Diaz)
Reply-to: ambar@mit-eddie.UUCP (Jean Marie Diaz)
Subject: Re: connectionism/complexity theory
(an article published in the April 1, 1985 edition of Fortune--posted
w/out permission)
WHAT BELL LABORATORIES IS LEARNING FROM SLUGS
[...] Inspired by the discoveries of physicist John Hopfield, a team
of Bell Labs scientists has been using research on slugs' brains to
develop a radically new type of computer. [...] The Bell computer
does not always select the single [best traveling salesman] route,
but--much like a human--it comes up with one of the better routes, and
never picks anything obviously loony.
New techniques for recording neurological activity in rats and in
three types of slugs--favored because of their large and accessible
nerve cells--are providing Bell's team with reams of information about
how neurons work. But the conceptual focus of the Bell project is the
model of the new neural-network computer created by Hopfield, 51, who
splits his time between Bell Labs and the California Institute of
Technology. Neural networks operate in the analog mode--when
information enters the brain, the neurons start firing and their
values, or "charges," rise and fall like electric voltage in analog
computers. When information is digested, the network settles down
into a so-called steady state, with each of its many neurons resting
close to their highest or lowest values--effectively, then, either on
or off. A computer designed to mimic a neural network would solve
problems speedily by manipulating data in analog fashion, but it
would report its findings when each neuron is either in the on or off
state, operating like a digital computer speaking a binary language.
The simulated computer designed by Hopfield and his AT&T colleagues
uses microprocessors to do the work of neurons. Each microprocessor
is connected to all others--as many neurons are interconnected--which
would make the machine costly and complex to build. Another major
difference between this computer and traditional ones is that memory
is not localized in any one processor or set of processors. Instead,
memory is in the patterns formed by all the neurons, whether on or
off, when they are in steady states. As a result, the computer can
deal with fragmentary or imprecise information. When given a
misspelled name, for example, it can retrieve the full name and data
about the person by settling on the closest name in the network.
Though analog computation is astonishingly fast, it sacrifices
precision. Neural-network computers work best on problems that have
more than one reasonable solution. Examples include airline
scheduling, superfast processing for robots or weapons, and, more in
AT&T's line, routing long-distance telephone traffic.
-John Paul Newport
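The settling behavior the article describes (memory stored in network-wide patterns, and a corrupted cue relaxing to the nearest stored pattern, as in the misspelled-name example) can be sketched with a toy binary Hopfield associative memory. The patterns and the cue below are invented for illustration; this is not the Bell Labs simulation itself.

```python
import numpy as np

# Two stored patterns ("names"); +1/-1 play the role of neurons that are
# on or off in a steady state.
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1],   # pattern A
    [-1, -1, -1,  1,  1,  1],   # pattern B
])

# Hebbian weights: sum of outer products, zero self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, sweeps=20):
    """Asynchronous updates until the network reaches a steady state."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# A "misspelled" cue: pattern A with one flipped bit.
cue = np.array([1, 1, -1, -1, -1, -1])
print(settle(cue))   # settles back to pattern A
```

The fragmentary-input behavior falls out of the dynamics: the corrupted cue is pulled into the basin of attraction of the closest stored pattern.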
--
AMBAR
"I need something to change your mind...."
------------------------------
Date: 02 July 86 20:18 EDT
From: KVQJ%CORNELLA.BITNET@ucbvax.Berkeley.EDU
Subject: common sense
I have been thinking a lot about the notion of common sense and
its possible implementation in expert systems. Here are my ideas;
I would appreciate your thoughts.
Webster's Dictionary defines common sense as 'practical knowledge'.
I contend that all knowledge both informal and formal comes from
this 'practical knowledge'.
After all, if one thinks about Physics, Logic, or Chemistry, much of it
makes practical sense in the real world. For example, a truck colliding
with a Honda Civic will cause more destruction than two Hondas colliding
together. I think that people took this practical knowledge of the world
and developed formal principles.
It is common sense which distinguishes man from machine. If a bum on
the street were to tell you that if you give him $5.00 he will make you
a million dollars in a week, you would generally walk away and ignore him.
If the same man were to input this claim into a so-called intelligent machine,
the machine would not know whether he was Rockefeller or an indigent.
My point is this: I think it is intrinsically impossible to program
common sense because a computer is not a man. A computer cannot
experience what man can; it cannot see or make the ubiquitous judgments
that man can. We may be able to program common-sense-like rules into
it, but this is not tantamount to real-world common sense, because
real-world common sense is drawn from a 'database' that could never be
matched by a simulated one.
Thank you for listening.
sherry marcus kvqj@cornella
------------------------------
Date: Thu, 3 Jul 86 17:07 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Creativity and Analogy
Jay Weber makes some interesting observations:
> Consider the following view of analogy, consistent with its formal
> treatment in many sources. A particular analogy, e.g. that which
> exists between a battery and a reservoir, is a function that maps
> from one category (set of instances) to another. Equivalently we
> can view this function as a relation R between categories, in this
> case we have a particular kind of "storage capability". This relation
> is certainly
>
> 1) reflexive. "A battery is like a battery" (under any relation)
>
> 2) symmetric. "A battery is like a reservoir" implies
> "A reservoir is like a battery" under the same relation R
>
> 3) transitive. "A battery is like a reservoir" and
> "A reservoir is like a ketchup bottle" imply
> "A battery is like a ketchup bottle" WHEN THE SAME
> ANALOGY HOLDS BETWEEN THEM (same R).
>
> Then any analogy R is an equivalence relation, partitioning the space
> of categories. Each analogy corresponds to a node in an abstraction
> hierarchy which relates all of the sub-categories, SO THE SPACE OF
> ANALOGIES MAPS ONTO THE SPACE OF ABSTRACTIONS, and so under these
> definitions analogy and abstraction are equivalent.
I agree with your reasoning and the conclusion that analogies map ONTO
abstractions--in fact, I think they map ONTO and ONE-TO-ONE (in other words
there is a one-to-one correspondence). Also, EACH analogy (and abstraction)
partitions the space of categories into two subspaces. However, the SPACE
of analogies does not partition the space of categories because the world
can concurrently be modeled by multiple abstraction lattices (not necessarily
hierarchies) in which the transitivity property may not hold. Consider the
following:
a) "A battery is like a reservoir" (storage capability)
AND b) "A reservoir is like a pond" (body of water)
DO NOT IMPLY:
c) "A battery is like a pond"
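The distinction can be made concrete in a few lines by modeling each *particular* analogy R as shared membership in one abstraction class: transitivity then holds within a single R but fails across different R's. The classes below are illustrative only.

```python
# R1: "storage capability"; R2: "body of water". Each set is one
# abstraction class; membership in the same set models "is like under R".
storage = {"battery", "reservoir", "ketchup bottle"}
water = {"reservoir", "pond"}

def alike(x, y, rel):
    """x is like y under the analogy modeled by class membership in rel."""
    return x in rel and y in rel

# Same R: the chain of likenesses stays within one class (transitivity).
assert alike("battery", "reservoir", storage)
assert alike("reservoir", "ketchup bottle", storage)
assert alike("battery", "ketchup bottle", storage)

# Mixed R's: a) and b) hold under different relations, yet c) fails
# under both, so no contradiction with transitivity of a single R.
assert alike("battery", "reservoir", storage)   # a)
assert alike("reservoir", "pond", water)        # b)
assert not alike("battery", "pond", storage)    # c) fails
assert not alike("battery", "pond", water)      # c) fails
```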
> ...
> no one could agree to a particular formal definition of the term "analogy",
> since we all have individual formal definitions by virtue of the fact that
> we will answer yes or no when given a potential analogy instance, so we
> are formal language acceptors with our senses as input. This is what I
> mean by a "slippery" term, i.e. one that has drastically different
> meanings depending on its user. This is why I say a formal definition
> of analogy would satisfy very few people.
I am glad that scientists, by and large, have not let "slipperiness" in
some linguistic sense (as you define it) discourage them from carrying on
their research. Of course, all research issues are "slippery" in a conceptual
sense, by definition. (I would also expect a high degree of correlation
between linguistic and conceptual "slipperiness").
There has been some discussion now (in AIList) on the relationship
between "creativity" and "making-interesting-analogies". Is it mere
empirical association or are there stronger causal links? One extreme
view is that the definition of creativity is "making interesting analogies".
Some recent illuminating discussions in this forum suggest that the ability
to synthesize concepts from partial concepts in other domains is a key
ingredient of a great number of creative activities.
Is there some creative task that could not be performed by a machine
capable of making complex analogies in an interesting manner--a complex
analogy being defined as a many-to-one transformation between domains (as
opposed to a simple analogy which is a one-to-one mapping)?
Uttam Mukhopadhyay
Computer Science Dept.
General Motors Research Labs
------------------------------
End of AIList Digest
********************
∂10-Jul-86 0152 LAWS@SRI-AI.ARPA AIList Digest V4 #165
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 10 Jul 86 01:52:44 PDT
Date: Wed 9 Jul 1986 21:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #165
To: AIList@SRI-AI
AIList Digest Thursday, 10 Jul 1986 Volume 4 : Issue 165
Today's Topics:
Books - Lisp Texts,
Natural Language - Integrated Systems,
AI Tools - VAX LISP Sources,
Theory - Intelligence Tests & Analogy & Common Sense &
Representationalist Perception
----------------------------------------------------------------------
Date: Tue, 8 Jul 86 13:08 EST
From: HAFNER%northeastern.edu@CSNET-RELAY.ARPA
Subject: Lisp texts
Replying to Mark Richer's query about texts for teaching Lisp:
There are a number of good textbooks on Lisp. I prefer Winston & Horn
because of the emphasis on applications of Lisp, especially to AI.
However, whatever text you choose, you should supplement it with
"The Little Lisper" 2nd edition by Dan Friedman and Matthias Felleisen.
TLL is a wonderful teaching tool - it is skill-oriented, thorough,
and entertaining. I expect it will be especially useful for the
students who are not math or CS majors. Good luck!!
Carole Hafner
hafner@northeastern
P.S. Regarding the appropriateness of comments on Lisp programming
on the AILIST: I find this material interesting, relevant, and highly
appropriate. Lisp is the medium for most AI research, and effective
use of that medium is of great concern to many. Ditto for other programming
methods (logic programming, object oriented programming, etc.)
------------------------------
Date: Tue, 8 Jul 86 09:09:13 edt
From: Eric Nyberg <ehn0%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Re: Architectures for interactive systems?
> There seems to have been a great deal of work done in
> natural language processing, yet so far I am unaware of
> any attempt to build a practical yet theoretically well-
> founded interactive system or an architecture for one.
...
> Many of the sub-problems have been studied at least once.
> Work has been done on various types of necessary response
> behavior, such as clarification and misconception correction.
> Work has been done on parsing, semantic interpretation, and
> text generation, and other problems as well. But has any
> work been done on putting all these ideas together in a
> "real" system?
...
> I would be interested to get any references to work on such
> integrated systems. Also, what are people's opinions on this
> subject: are practical NLP systems too hard to build now?
> Brant Cheikes
I am part of a research project that has been investigating
integrated architectures for intelligent interfaces at
GTE Laboratories. A good overview of our recent work can be
found in the Summer issue of IEEE Expert, in a paper entitled
"An Intelligent Database Assistant" [Jakobson 86].
The phrase "practical yet theoretically well-founded" strikes
at one of the basic difficulties in building a natural language
interface as part of a working system - it should work in a
reasonable amount of time, yet be as flexible as possible in
the different kinds of utterances it can understand. The two
extremes are seen in a keyword-based system, where parsing is done by
a hand-coded program, versus a formally complete English grammar system,
where parsing is done by a large, complex data structure (e.g., an ATN).
The simplifying requirement we have placed on our applications is
quite similar to the desire for a narrow, well-defined domain
in building expert systems. If the domain of application for the
intelligent interface is well-defined, and fairly narrow,
a semantic grammar approach can be used quite successfully to
provide good performance with reasonably complete coverage.
The semantic grammar approach that we use is based on case
theory, a linguistic paradigm that was investigated in the
late sixties and early seventies (for a good summary of case-
based approaches, see [Bruce 75]). The case-frame approach to
parsing natural language has also been researched by Jaime
Carbonell, Phil Hayes [Hayes 85], and others at CMU. Case frame
parsing forms the basis for the Language Craft product offered
by Carnegie Group.
Of course, there are some drawbacks to the approach, most notably
a somewhat informal, arbitrary definition of syntax, which makes
the case-frame approach less satisfying from a theoretical
linguistic viewpoint. However, some of the more complex syntactic
constructions (like relative clauses) seem to be less important in
this kind of system than discourse phenomena like ellipsis and
anaphora. The dialog our system has with a user is very
task-oriented, and generally does not require the more complex
constructions of unrestricted English prose.
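A minimal, made-up miniature of the semantic-grammar / case-frame idea described above: a domain verb owns a frame of cases, and marker words route the following words into case slots, with no full syntactic analysis. The "send" frame and its markers are invented for illustration; this is not the GTE or Language Craft design.

```python
# Each verb's frame maps a case name to the marker word that introduces it;
# unmarked words immediately after the verb fill the object case.
FRAMES = {"send": {"recipient": "to", "instrument": "by"}}

def parse(utterance):
    words = utterance.lower().rstrip(".").split()
    verb = next(w for w in words if w in FRAMES)
    markers = {m: c for c, m in FRAMES[verb].items()}   # marker -> case
    frame, case = {"verb": verb}, "object"
    for w in words[words.index(verb) + 1:]:
        if w in markers:
            case = markers[w]        # switch to the marked case slot
        else:
            frame[case] = (frame.get(case, "") + " " + w).strip()
    return frame

print(parse("Send the report to accounting by mail"))
# → {'verb': 'send', 'object': 'the report',
#    'recipient': 'accounting', 'instrument': 'mail'}
```

The sketch also shows the drawback noted above: syntax is handled by ad hoc marker conventions, which is efficient in a narrow domain but informal from a linguistic standpoint.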
In my opinion, "practical" and "theoretically well-founded" are two
qualities that a natural language system can have, and for each
potential application, the proper mix of efficiency and coverage
must be found.
-- Eric Nyberg
References
----------
[Bruce 75]
Bruce, B., "Case Systems for Natural Language," Artificial
Intelligence, Vol. 6, No. 4, April 1975, pp. 327-360.
[Hayes 85]
Hayes, P., et al., "Semantic Caseframe Parsing and Syntactic
Generality," Proc. 23rd ACL, Jul. 1985, pp. 153-160.
[Jakobson 86]
Jakobson, G., et al., "An Intelligent Database Assistant,"
IEEE Expert, Vol. 1, No. 2, Summer 1986, pp. 65-78.
{other references to intelligent interfaces can be found in the
bibliography of [Jakobson 86]}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CSNET: ehn0@gte-labs Eric H. Nyberg, 3rd
UUCP: ..harvard!bunny!ehn0 GTE Laboratories, Dept. 317
ARPA: ehn0%gte-labs@csnet-relay 40 Sylvan Rd.
Waltham, MA 02254
(617) 466-2518
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
------------------------------
Date: Wed, 9 Jul 86 16:55:44 pdt
From: saber!matt@SUN.COM (Matt Perez)
Subject: Re: Architectures for interactive systems?
> I would be interested to get any references to work on such
> integrated systems.
Sorry, I have only a vague reference (see below), but I
do have a couple of comments.
> Also, what are people's opinions on this
> subject: are practical NLP systems too hard to build now?
I don't think it is impossible to integrate such a system.
For example, the *Unix Consultant* at UCB is such an
integrated system, albeit for research rather than
commercial purposes. But its application is practical
enough: to provide an on-line Unix expert which can
communicate with the user in natural language, for
input as well as in its responses.
> Should we
> leave the construction of practical systems to private enter-
> prise and restrict ourselves to the basic research problems?
Lord, NOOOOOOOOOOOO. The integration work is just
beginning and I suspect it is a lot more complicated than
taking care of the individual subproblems. I'd say that
"the construction of practical systems" IS a basic
research problem. All that private enterprise can
afford to do is implement what works, and as you well
pointed out, ain't much that works so far.
As an alternative, I offer that Natural Language
by itself is not that natural a way to communicate
anyways. In many instances a Graphical Interface is
much more appropriate. Of course, by Natural Language I
mean written language or even speech; by Graphical
Interface I mean Graphics (generative and otherwise)
display and feedback and input devices that exploit our
kinetic abilities. Thus I would rather point at a feature in
a good display than describe the same feature verbally.
If you don't agree with me on that, try to describe a
scene to someone over the phone.
In other instances, formulae are the communication tool
of choice. It depends. Ideally, I say, the user
interface should support all of the above!
Basically, however, I agree with you in the following
way: let's first learn to build systems (and enumerate
architectures) that support (solely) a Natural Language
interface. Ditto for graphics. Ditto for formulae.
Then, let's see if we can take the best of each and put
them together reliably and appropriately. And if that
ain't basic research ...
* Matt Perez * DISCLAIMER: beis-ball has bean bery, bery guud too me
matt@saber.uucp sun!saber!matt@decwrl.dec.com ...{ihnp4,sun}!saber!matt
Saber Technology Corp / 2381 Bering Drive / San Jose, CA 95131 (408) 435-8600
------------------------------
Date: Mon, 7 Jul 86 22:51:19 edt
From: beer%case.csnet@CSNET-RELAY.ARPA
Subject: VAX LISP Sources
In a previous AIList (Vol. 4, Issue 127), I posted a message concerning
the availability of a set of tools and utilities for VAX LISP. At that
time, only the object code was in the public domain. However, by
popular request, we have arranged to make the source code for these
facilities available. Anyone who requested a tape of the object code will
be sent the source. The description of the facilities is repeated below.
Here at the Center for Automation and Intelligent Systems Research at
Case Western Reserve University, we have developed a number of tools and
utilities for VAX LISP. They include extensions to the control and string
manipulation primitives, a simple pattern matcher, a pattern-based APROPOS
facility, a pattern-based top-level history mechanism, an extensible top-level
command facility, an extensible DESCRIBE facility, and an implementation of
Flavors. These facilities are described in detail in a technical report,
"CAISR VAX LISP Tools and Utilities" (TR-106-86).
A tape containing the VAX LISP source for these facilities is available for
a $35 shipping and handling fee.
Randall D. Beer
(beer%case@CSNet-Relay.ARPA)
Center for Automation and Intelligent Systems Research
Case Western Reserve University
Glennan Bldg., Room 312
Cleveland, OH 44106
------------------------------
Date: 7 Jul 1986 1059-PDT (Monday)
From: Eugene miya <eugene@ames-aurora.arpa>
Subject: A comment to an interesting posting to net.ai
<"Expert systems" are not AI.>
The following appeared on the USENET's net.ai list (distinct from
the mod.ai list gatewayed to the ARPAnet). My commentary follows:
>From: michaelm@bcsaic.UUCP (michael maxwell)
>Subject: Re: The Turing-Ring Test -- A Limitation Game.
>Message-ID: <589@bcsaic.UUCP>
>Date: 3 Jul 86 17:14:26 GMT
>
>In article <7101.8606281319@maths.qmc.ac.uk> gcj@qmc-ori.UUCP (The Joka):
>>The following test has been proposed. Appoint one (or more)
>>adjudicators to decide on which of the two parties in the
>>test, persons A and B, is talking to a telephone answering
>>machine and which is talking to a human being. This test is
>>not limited to textual information, although person A should
>>relay the same information as person B.
>
>Wonderful idea! An even better idea: You've probably answered the phone,
>only to find that the voice on the other end is a computerized "survey". I
>propose the following test: which of two computerized "survey"
>programs is talking to a telephone answering machine and which is talking to a
>human being...:-)
>--
>Mike Maxwell
>Boeing Artificial Intelligence Center
> ...uw-beaver!uw-june!bcsaic!michaelm
I have been thinking about the characteristics of a real Turing test.
Here are some thoughts and some questions. 1) The Turing test is basically
a psychological test of "discrimination" [a loaded word in our society
today]. 2) given that the task is to create a machine "with intelligence,"
a) how long should such a test be? b) what is the shortest `length'
of such a test? 3) Since the objective is whether a machine is
intelligent or not (as opposed to `how' intelligent, i.e. an `intelligence
test'), how should the test be composed? It seems that it can be made a
signal detection task, and if so, it will have the standard concepts
of false-positives and true-negatives (all that stuff from radar).
It seems that such a test would be composed of rather difficult questions
of the type: "Your wife (husband) and your daughter (son) have fallen into
the water. You are positioned in the middle and can only save one.
Who do you save?"
Single difficult questions are probably insufficient. Are aggregate
questions any better? Humans are bound to `fail' many questions.
Such questions would be great for a conference to be held in, say, 2000,
the 50th anniversary of the publication of Turing's original paper.
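The signal-detection framing suggested above has a standard quantitative form: across many trials a judge labels each conversation partner "machine" or "human", and accuracy is summarized by hit and false-alarm rates and the sensitivity index d'. A minimal sketch, with invented trial counts:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index from the four radar-style outcome counts."""
    hit_rate = hits / (hits + misses)                          # machine trials
    fa_rate = false_alarms / (false_alarms + correct_rejections)  # human trials
    z = NormalDist().inv_cdf   # probability -> z-score
    return z(hit_rate) - z(fa_rate)

# 40 trials with a machine (judge said "machine" 30 times) and 40 with a
# human (judge wrongly said "machine" 10 times):
print(round(d_prime(30, 10, 10, 30), 2))   # → 1.35
```

A d' near zero would mean judges cannot tell the machine from the human, which is one way to make "passing the test" a measurable rather than anecdotal claim.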
From the Rock of Ages Home for Retired Hackers:
--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
{hplabs,hao,dual,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene
[Sounds like the test for androids in Blade Runner. The problem of
discriminating between two classes of intelligence is much easier than
that of discriminating intelligence from all possible forms of
nonintelligence. By the way, the fastest way to identify human
intelligence may be to look for questions that a human will recognize
as nonsense or outside his expected sphere of knowledge ("How long
would you broil a 1-pound docket?" "Is the Des Moines courthouse taller
or shorter than the Wichita city hall?") but that an imitator might try
to bluff through. -- KIL]
------------------------------
Date: Tue, 8 Jul 86 17:47:02 edt
From: Jay Weber <jay@rochester.arpa>
Reply-to: jay@rochester.UUCP (Jay Weber)
Subject: Transitivity of a *particular* analogy, and let's do science!
In response to my claim that particular analogies are transitive,
Uttam Mukhopadhyay writes:
>However, the SPACE of analogies does not partition the space of
>categories because the world can concurrently be modeled by multiple
>abstraction lattices (not necessarily hierarchies) in which the
>transitivity property may not hold. Consider the following:
>
> a) "A battery is like a reservoir" (storage capability)
> AND b) "A reservoir is like a pond" (body of water)
>
>DO NOT IMPLY:
> c) "A battery is like a pond"
But I originally wrote:
>> 3) transitive. "A battery is like a reservoir" and
>> "A reservoir is like a ketchup bottle" imply
>> "A battery is like a ketchup bottle" WHEN THE SAME
>> ANALOGY HOLDS BETWEEN THEM (same R).
Note the use of "SAME ANALOGY" which is not the same as "any analogy"
as is the basis of Uttam's example above. Of course, any two categories
are analogous with respect to some mapping function, so the relation
"is analogous to" is vacuous. This distinction tends to be obscured
by the fact that most linguistic examples of analogy (like those above)
leave the mapping function implicit.
Furthermore, I did not claim that the SPACE of analogies partitions the
space of categories, but that a particular analogy does:
>> Then any analogy R is an equivalence relation, partitioning the space
>> of categories.
I also questioned the value of asking whether "creativity" is equivalent
to "making interesting analogies" to which Uttam replied:
> I am glad that scientists, by and large, have not let "slipperiness" in
>some linguistic sense (as you define it) discourage them from carrying on
>their research.
Proper scientists (by definition) do not construct theories about things
that cannot be empirically examined, e.g. using structure mapping functions
to model the communal descriptive definition of the English word
"creativity". Scientists pick testable domains such as problem solving
where you can test predictions of a particular theory with respect to
correct problem solving. In the past, scientists have left debate over
such concepts as "truth" and "beauty" to philosophers, and I think we
should do the same with "creativity" and "intelligence". In Cognitive
Science, researchers have too often exaggerated the impact of their work
through the careless and unscientific use of such terms.
Jay Weber
Computer Science Department
University of Rochester
Rochester, NY 14627
jay@rochester
------------------------------
Date: 8 Jul 86 17:30 PDT
From: Newman.pasa@Xerox.COM
Subject: Re: Common Sense
Philosophically, Sherry Marcus' ideas about common sense are poor in the
same sense that I think Searle and Dreyfus' ideas about why AI won't
ever happen are poor. As near as I can tell, all three end up with some
feature of human intelligence which cannot be automated for basically
unexplained reasons. Marcus' problem is simpler than the others (why
can't a computer have a real world common sense database like a
human's?), but it is the same basic philosophical trap. All three appear
to believe that there is some magical property of human intelligence
(Searle and Dreyfus appear to believe that there is something special
about the biological nature of human intelligence) which cannot be
automated, but none can come up with a reason for why this is so.
Comments?? I would particularly like to hear what you think Searle or
Dreyfus would say to this.
>>Dave
------------------------------
Date: Wed, 9 Jul 86 09:18:47 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Re: Representationalist Perception
> The
> 'representatonalist' account of perception does NOT claim that instead of
> perceiving the world, we perceive internal representations of the world.
> That would indeed be a position with many difficulties. Rather, it says
> that the WAY we perceive the world is BY making representations of it.
> The data structures are, to put it simply, the output of the perceptual
> process, not its input.
I would agree with Gibson (and with Kelley) that this boils down to the
same thing.
The "output" of perception (if such a term is appropriate) is our
awareness. Realists claim that this awareness is directly of external
objects. Representationalists, on the other hand, claim that we are
directly aware only of internal representations, created by a process
whose input are external objects; this means that we are aware of
external objects only INDIRECTLY. That is the position Gibson and
Kelley argue against, and I think they do understand it accurately.
Note that the above applies only to PERCEPTUAL representationalists.
It does not apply to COGNITIVE representationalists, who may agree that
perception is direct, but claim that internal representations are then
formed for the purpose of conceptual thinking. Gibson claimed that
concept-formation is direct as well; but on this point, Kelley
disagrees with him (this is indicated by his discussion of the issue in
chapter 7 of "The Evidence of the Senses"; by his paper "A Theory of
Abstraction", published in "Cognition and Brain Theory", vol. 7, no. 3
and 4, Summer/Fall 1984; and by his references to Ayn Rand's
"Introduction to Objectivist Epistemology").
Eyal Mozes
BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ..!ucbvax!eyal%wisdom.bitnet
------------------------------
End of AIList Digest
********************
∂10-Jul-86 0249 LAWS@SRI-AI.ARPA AIList Digest V4 #164
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 10 Jul 86 02:49:29 PDT
Date: Wed 9 Jul 1986 21:22-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #164
To: AIList@SRI-AI
AIList Digest Thursday, 10 Jul 1986 Volume 4 : Issue 164
Today's Topics:
Seminars - Mathematical Games (SU) &
Discovery of Algorithms from Weak Methods (Rutgers) &
The Koko Connection: Interspecies Communication (PARC) &
Default Theories and Autoepistemic Logic (SRI),
Conference - Expert Systems In Government
----------------------------------------------------------------------
Date: Tue 1 Jul 86 13:14:30-PDT
From: Ilan Vardi <ZURDI@SU-SCORE.ARPA>
Subject: Seminar - Mathematical Games (SU)
[Forwarded from the Stanford bboard by Laws@SRI-AI.]
The first meeting of the games seminar was quite a success
with more than 20 people showing up. I'm hoping this will go on,
so I've decided to lure people with FOOD to compete with various
departmental teas.
The subject this time around will be partizan games, which
are games where the opponents have different colours and have
different moves available to them (e.g., Go, chess).
For people who weren't around last time: the subject was
IMPARTIAL games, where both players have the same alternatives.
I showed that all such games can be reduced to one game called
NIM, which has a simple strategy explainable in five minutes.
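The strategy alluded to here is the standard Bouton nim-sum rule: XOR the heap sizes together; the player to move can win exactly when the XOR is nonzero, by moving so as to restore a zero nim-sum. A minimal sketch (the function names are illustrative, not from the talk):

```python
def nim_sum(heaps):
    # Bouton's theorem: the player to move wins iff the XOR
    # of the heap sizes (the "nim-sum") is nonzero.
    s = 0
    for h in heaps:
        s ^= h
    return s

def winning_move(heaps):
    # Return (heap_index, amount_to_remove) restoring a zero nim-sum,
    # or None if the position is already lost against best play.
    s = nim_sum(heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        if h ^ s < h:          # this heap can be reduced to size h ^ s
            return (i, h - (h ^ s))
```

For heaps [3, 4, 5] the nim-sum is 2, and removing 2 stones from the first heap restores a balanced position that is lost for the opponent.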
If you want to read up on Thursday's talk, just pick up
the copy of Knuth's "Surreal Numbers" that's on reserve at the
Math Library.
Remember that this meeting is at
3:00 p.m. in room 381T, Math Department,
which is a CHANGE OF TIME from last week's 2:15 p.m.
If you have any comments, or want to get directly on a mailing
list, just mail your answer here at zurdi@score.
Have a nice day!
Ilan Vardi
------------------------------
Date: 2 Jul 86 15:28:41 EDT
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Discovery of Algorithms from Weak Methods (Rutgers)
DISCOVERY OF ALGORITHMS
FROM WEAK METHODS
Armand E. Prieditis
Weak problem-solving methods (e.g. means-ends analysis, breadth-
first search, best-first search) all involve a search for some sequence
of operators that will lead from an initial state to a goal state.
This paper shows how it is possible to learn operators whose bodies
contain algorithmic control constructs (e.g. loops, sequences,
conditionals) such that the control construct itself applies the
sequence needed to lead from the initial state to a goal state without
a search for the sequence. By using explanation-based generalization
[EBG] and an explicit theory of algorithms, the method learns
operators (whose bodies contain algorithmic control constructs) that
represent logically valid generalizations of the solution sequence.
Where: Hill Center, Room 423
When: Tuesday, July 15th
Speaker's EMail address: PRIEDITIS@RED.RUTGERS.EDU
------------------------------
Date: Mon, 7 Jul 86 10:36:11 PDT
From: Hibbert.pa@Xerox.COM
Reply-to: hibbert.pa@Xerox.COM
Subject: Seminar - The Koko Connection: Interspecies Communication (PARC)
PARC Forum
Thursday, July 10, 1986
3:45PM, PARC Auditorium
Mitzi Phillips
Research Assistant and Lecturer,
The Gorilla Foundation
For 13 years the Gorilla Foundation has been dedicated to teaching
American Sign Language to Koko, a 250-lb Lowland Gorilla. This talk
shares the advances made in the field of interspecies communication.
Through sharing personal experiences with Koko we will explore the
valuable information learned about animal intelligence.
This Forum is OPEN. All are invited.
Host: Chris Hibbert (System Concepts Lab, 494-4382)
Refreshments will be served at 3:30 pm
Requests for videotaping should be sent to Susie Mulhern
<Mulhern:PA:Xerox or Mulhern.pa@Xerox.Com> before Tuesday noon.
Directions to PARC:
The PARC Auditorium is located at 3333 Coyote Hill Rd. in Palo Alto. We
are between Page Mill Road (west of Foothill Expressway) and Hillview
Avenue, in the Stanford Research Park. The easiest way here is to get
onto Page Mill Road, and turn onto Coyote Hill Road. As you drive up
Coyote Hill, PARC is the only building on the left after you crest the
hill. Park in the large parking lot, and enter the auditorium at the
upper level of the building. (The auditorium entrance is located down
the stairs and to the left of the main doors.)
------------------------------
Date: Wed 9 Jul 86 13:08:44-PDT
From: Margaret Olender <OLENDER@SRI-WARBUCKS.ARPA>
Subject: Seminar - Default Theories and Autoepistemic Logic (SRI)
ON THE RELATION BETWEEN DEFAULT THEORIES AND AUTOEPISTEMIC LOGIC
Kurt Konolige (KONOLIGE@SRI-AI)
Artificial Intelligence Center
SRI International
and
CSLI, Stanford University
11:00 AM, MONDAY, July 14
SRI International, Building E, Room EK228
Default theories are a formal means of reasoning about defaults: what
normally is the case, in the absence of contradicting information.
Autoepistemic theories, on the other hand, are meant to describe the
consequences of reasoning about ignorance: what must be true if a
certain fact is not known. Although the motivation and formal
character of these systems are different, a closer analysis shows that
they bear a common trait, which is the indexical nature of certain
elements in the theory. In this paper we treat both autoepistemic and
default theories as special cases of a more general indexical theory.
The benefits of this analysis are that it gives a clear (and clearly
intuitive) semantics to default theories, and combines the expressive
power of default and autoepistemic logics in a single framework.
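As a rough illustration of the default-reasoning side of the abstract (a toy sketch only, not Konolige's indexical formalism): a Reiter-style default "p : q / q" fires when its prerequisite holds and its justification is consistent with what has been derived.

```python
def apply_default(facts, prereq, justification, conclusion):
    # Fire a Reiter-style default  prereq : justification / conclusion
    # when the prerequisite is among the facts and the negation of the
    # justification has NOT been derived (i.e., it is still consistent).
    if prereq in facts and ("not " + justification) not in facts:
        return facts | {conclusion}
    return facts

# "Birds normally fly": with no contradicting information, conclude "flies".
facts = apply_default({"bird"}, "bird", "flies", "flies")

# If "not flies" is already known (say, it is a penguin), the default is blocked.
blocked = apply_default({"bird", "not flies"}, "bird", "flies", "flies")
```

The autoepistemic reading would instead condition on what the agent does not *know*, rather than on what is consistent; the abstract's point is that both hinge on such indexical reference to the theory's own contents.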
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: Wed, 2 Jul 86 10:30:06 edt
From: camis..duke@mitre.ARPA
Subject: Conference - Expert Systems In Government
The Second Annual Expert Systems in Government Conference, sponsored by
the Mitre Corporation and the IEEE Computer Society in association with
the AIAA National Capital Section will be held October 20-24, 1986 at
the Tyson's Westpark Hotel in McLean, VA. The tentative program, subject
to changes and additions, is as follows:
October 20-21 Tutorials
Monday, October 20
Full Day Tutorial: Advanced Topics in Expert Systems
by Kamran Parsaye, IntelligenceWare, Inc.
Morning Tutorial: Knowledge Base Design for Rule Based Expert Systems
by Casimir Kulikowski, Rutgers University
Afternoon Tutorial: Knowledge Base Acquisition and Refinement
by Casimir Kulikowski, Rutgers University
Tuesday, October 21
Morning Tutorial: Distributed Artificial Intelligence
by Kamal Karna,
Computer Communications & Graphics Associates, Inc.
and
Barry Silverman, George Washington University
Morning Tutorial: Introduction to Common Lisp
by Carl Hewitt, MIT AI Lab
Afternoon Tutorial: Lisp for Advanced Users
by Carl Hewitt, MIT AI Lab
Afternoon Tutorial: The Management of Expert System Development
by Nancy Martin, Softpert Systems
October 22-24 Technical Program
Wednesday, October 22
9 - 10:30
Conference Chairman's Welcome
Keynote Address: Douglas Lenat, MCC
Program Agenda
11am - 12pm
Track A: Military Applications I
Bonasso, Benoit, et al.;
An Experiment in Cooperating Expert Systems for Command and Control
Major R. Bahnij, Major S. Cross;
A Fighter Pilot's Intelligent Aide for Tactical Mission Planning
G. Loberg, G. Powell
Acquiring Expertise in Operational Planning: A Beginning
Track B: Systems Engineering
R. Entner, D. Tosh; Expert Systems Architecture for Battle Management
H. Hertz; An Attribute Referenced Production System
B. Silverman; Facility Advisor: A Distributed Expert System Testbed for
Spacecraft Ground Facilities
12pm - 1pm Lunch, Distinguished Guest Address, The Honorable Charles Rose
1pm - 2:30pm
Track A: Knowledge Acquisition I
J. Boose, J. Bradshaw; NeoETS: Capturing Expert System Knowledge
K. Kitto, J. Boose; Heuristics for Expertise Transfer
M. Chignell; The Use of Ranking and Scaling in Knowledge Acquisition
Track B: Expert Systems in the Nuclear Industry
D. Sebo et al.; An Expert System for USNRC Emergency Response
D. Corsberg; An Object-Oriented Alarm Filtering System
J. Jenkins, W. Nelson; Expert Systems and Accident Management
3pm - 5pm
Track A: Expert Systems Applications I
R. Tong, et al.; An Object-Oriented System for Information Retrieval
D. Niyogi, S. Srihari; A Knowledge-based System for Document Understanding
R. France, E. Fox; Knowledge Representation in Coder
Track B: Diagnosis and Fault Analysis
M. Taie, S. Srihari; Device Modeling for Fault Diagnosis
Z. Xiang, S. Srihari; Diagnosis Using Multi-level Reasoning
B. Dixon; A Lisp-Based Fault Tree Development Environment
Panel Track:
1pm - 5pm Management of Uncertainty in Expert Systems
Chair: Ronald Yager, IONA College
Participants: Lotfi Zadeh, UC Berkeley
Piero Bonissone, G.E.
Laveen Kanal, University of Maryland
Thursday, October 23
9am - 10:30am
Track A: Parallel Architectures
L. Sokol, D. Briscoe; Object-Oriented Simulation on a
Shared Memory Parallel Architecture
H. Sowizral; A Basis for Distributed Blackboards
J. Gilmer; Parallelism Issues in the CORBAN C2I Representation
Track B: Aerospace Applications of Expert Systems
J. Popolizio, J. Feinstein; Space Station Security: An Expert Systems Approach
D. Zoch; A Real-time Production System for Telemetry Analysis
J. Schuetzle; A Mission Operations Planning Assistant
P. Roach, D. Brauer; Ada Knowledge Based Systems
F. Rook, T. Rubin; An Expert System for Conducting a
Satellite Stationkeeping Maneuver
Panel Track: Star Wars and AI; Chair: John Quilty, Mitre Corp.
11am - 12pm
Plenary Address:
B. Chandrasekaran; The Future of Knowledge Acquisition
12pm - 1pm Lunch
1pm - 2:30pm
Track A: Inexact and Statistical Measures
K. Lecot; Logic Programs with Uncertainties
N. Lee; Fuzzy Inference Engines in Prolog/P-Shell
J. Blumberg; Statistical Entropy as a Measure of Diagnostic Uncertainty
Track B: High Level Tools for Expert Systems
S. Shum, J. Davis; Use of CSRL for Diagnostic Expert Systems
E. Dudzinski, J. Brink; CSRL: From Laboratory to Industry
D. Herman, J. Josephson, R. Hartung; Use of the DSPL
for the Design of a Mission Planning Assistant
J. Josephson, B. Punch, M. Tanner; PEIRCE: Design Considerations
for a Tool for Abductive Assembly for Best Explanation
Panel Track: Application of AI in Telecommunications
Chair: Shri Goyal, GTE Labs
Participants: Susan Conary, Clarkson University
Richard Gilbert, IBM Watson Research Center
Raymond Hanson, Telenet Communications
Edward Walker, BBN
Richard Wolfe, ATT Bell Labs
3pm - 5pm
Track A: Expert System Implementations
S. Post; Simultaneous Evaluation of Rules to Find Most Likely Solutions
L. Fu; An Implementation of an Expert System that Learns
R. Frail, R. Freedman; OPGEN Revisited
Track B: Expert System Applications II
R. Holt; An Expert System for Finite Element Modeling
A. Courtemanche; A Rule-based System for Sonar Data Analysis
F. Merrem; A Weather Forecasting Expert System
R. Ahad, A. Basu; Explanation in an Expert System
W. Vera, R. Bozolcz; AI Techniques Applied to Claims Processing
Panel Track: Command and Control Expert Systems
Chair: Andrew Sage, George Mason University
Participants: Peter Bonasso, Mitre
Stephen Andriole, International Information Systems
Paul Lehner, PAR
Leonard Adelman, PAR
Walter Beam, George Mason University
Jude Franklin, PRC
Friday, October 24
9am - 12pm: Classified Track
Classified Working Session: The community building expert systems for
classified applications is unsure of the value and feasibility of some
form of communication within the community. This will be a session
consisting of discussions and working sessions, as appropriate, to
explore these issues in some depth for the first time, and to make
recommendations for future directions for the classified community.
9am - 10:30am
Track A: Military Applications
K. Michels, J. Burger; Missile and Space Mission Determination
J. Baylog; An Intelligent System for Underwater Tracking
J. Neal et al.; An Expert Advisor on Tactical Support Jammer Configuration
Track B: Expert Systems in the Software Lifecycle
D. Rolston; An Expert System for Reducing Software Maintenance Costs
M. Rousseau, M. Kutzik; A Software Acquisition Consultant
R. Hobbs, P. Gorman; Extraction of Data System Requirements
Panel Track: Next Generation Expert System Shells
Chair: Art Murray, George Washington University
Participants: Joseph Fox, Software A&E
Barry Silverman, George Washington University
Lee Erman, Teknowledge
Chuck Williams, Inference
John Lewis, Martin Marietta Research Labs
11am - 12pm
Track A: Spacecraft Applications
D. Rosenthal; Transformation of Scientific Objectives
into Spacecraft Activities
M. Hamilton et al.; A Spacecraft Control Anomaly Resolution Expert System
Track B: Knowledge Acquisition and Applications
E. Tello; DIPOLE - An Integrated AI Architecture
H. Chung; Experimental Evaluation of Knowledge Acquisition Methods
Panel Track: Government Funding of Expert Systems
Chair: Commander Allen Sears, DARPA
Conference Chairman: Kamal Karna
Unclassified Program Chairman: Kamran Parsaye
Classified Program Chairman: Richard Martin
Panels Chairman: Barry Silverman
Tutorials Chairman: Steven Oxman
Registration information can be requested from
IEEE Computer Society
Administrative Office
1730 Massachusetts Ave. N.W.
Washington, D.C. 20036-1903
(202) 371-0101
------------------------------
End of AIList Digest
********************
∂14-Jul-86 1428 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #166
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 14 Jul 86 14:28:25 PDT
Date: Mon 14 Jul 1986 10:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #166
To: AIList@SRI-AI
AIList Digest Monday, 14 Jul 1986 Volume 4 : Issue 166
Today's Topics:
Philosophy - Representationalist Perception & Searle's Chinese Room
----------------------------------------------------------------------
Date: Fri, 11 Jul 86 17:03:37 edt
From: David Sher <sher@rochester.arpa>
Reply-to: sher@rochester.UUCP (David Sher)
Subject: Re: Representationalist Perception
In article <8607100457.AA12123@ucbvax.Berkeley.EDU> eyal@wisdom.BITNET
(Eyal mozes) writes:
>The "output" of perception (if such a term is appropriate) is our
>awareness. Realists claim that this awareness is directly of external
>objects. Representationalists, on the other hand, claim that we are
>directly aware only of internal representations, created by a process
>whose input are external objects; this means that we are aware of
>external objects only INDIRECTLY. That is the position Gibson and
>Kelley argue against, and I think they do understand it accurately.
I may be confused by this argument but as far as visual perception is
concerned we are certainly not aware of the firing rates of our individual
neurons. We are not even aware of the true wavelengths of the light that
hits our eyes. We have special machinery built into our visual hardware
that implements an algorithm that decides, based on global phenomena, the
color of the light in the room and automatically adjusts the colors of
perceived objects to compensate (this is called color constancy). However,
this mechanism can be fooled. Given that we don't directly perceive
the lightwaves hitting our eyes, how can we be directly perceiving objects
in the world? Does "perceive" in this sense mean something different from
the way I am using it? I know that for ordinary people the only images
consciously accessible are quite heavily processed to compensate for
noise and light intensity and to take into account known facts about
the tendencies of objects to be continuous and to fit into known shapes.
I don't know how under such circumstances we can be said to be directly
aware of any form of visual input except internal representations.
My guess is that you are using words in a technical way that has
confused me. But perhaps you can clear this up.
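The color-constancy adjustment described above can be caricatured by the "gray-world" heuristic: estimate the illuminant as the per-channel mean of the image and rescale so the scene average comes out neutral. This is only a toy sketch of the idea, not a claim about the visual system's actual algorithm:

```python
def gray_world_balance(pixels):
    # pixels: list of (r, g, b) values. Estimate the illuminant as the
    # per-channel mean, then rescale each channel so the scene average
    # comes out neutral gray -- a crude analogue of "discounting the
    # illuminant" from global image statistics.
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [tuple(p[c] * gray / means[c] for c in range(3)) for p in pixels]
```

Under a reddish illuminant a gray surface might reflect (2, 1, 1); after balancing, its channels come out equal. And, like the mechanism described above, the heuristic is easily fooled, e.g. by a scene that really is mostly red.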
------------------------------
Date: Mon 14 Jul 86 10:09:34-PDT
From: Stephen Barnard <BARNARD@SRI-AI.ARPA>
Subject: perception (realist vs. representationalist position)
Maybe I've never really understood the arguments of the so-called
"perceptual realists" (Gibson, etc.), because their position that we
do not build internal representations of the objects of perception,
but rather perceive the world directly (whatever that means), seems
obviously wrong. Consider what happens when we look at a realistic
painting. We can, at one level, see it as a painting, or we can see
it as a scene with no objective existence whatsoever. How could this
perception possibly be interpreted as anything but an internal
representation?
In many or perhaps even all situations, the stimuli available to our
sense organs are insufficient to specify unique external objects. The
job of perception, as opposed to mere sensation, is to complement the
stimulus information to create a fleshed-out interpretation that is
consistent both with the stimulus and with our knowledge and
expectations. Gibson emphasized the richness of the visual stimulus,
arguing that much more information was available from it than was
generally realized. But to go from this observation to the conclusion
that the stimulus is in all cases sufficient for perception is clearly
not justified.
------------------------------
Date: Fri, 11 Jul 86 15:33:04 edt
From: Tom Scott <scott%bgsu.csnet@CSNET-RELAY.ARPA>
Subject: Knowledge is structured in consciousness
Two recent postings to this newsgroup by Eyal Mozes and Pat
Hayes on the (re)presentation of perception and knowledge in
integrated sensory/knowledge systems indicate the validity of
philosophy in the theoretical foundations of knowledge science, which
includes AI and knowledge engineering. I'd prefer not to make a
public choice between Mozes' vs. Hayes' position, but I'm impressed
by the sincerity of their arguments and the way each connects
philosophy and technology.
Hayes remarks that "The question the representational position
must face is how such things (representations) can serve as percepts
in the overall cognitive framework." This is indeed a serious problem
facing the designers of fifth- and sixth-generation intelligent
systems. Here is a two-hundred-year-old approach to the problem, an
approach that not only can help the representationalists but can also
be of value to realist and idealist (re)constructions of knowledge
within the simulated consciousness of a knowledge system:
                              REPRESENTATION
                                    |
                    +---------------+-------------+
                    |                             |
               UNCONSCIOUS                    CONSCIOUS
             REPRESENTATION                REPRESENTATION
                (AI/KE)                     (Perception)
                    |                             |
          +---------+--------+             +------+--------+
          |         |        |             |               |
        RULE      FRAME    LOGIC       OBJECTIVE       SUBJECTIVE
        BASED     BASED    BASED       PERCEPTION      PERCEPTION
                                      (Knowledge)      (Sensation)
                                           |
                                +----------+------+        Refers to the
           Relates              |                 |        object by means
       immediately to <-- INTUITION           CONCEPT  --> of a feature
         the object                               |        which several
                                        +---------+----+   things have in
                                        |              |   common
    Has its origin in                 PURE         EMPIRICAL
    the understanding alone <--     CONCEPT         CONCEPT
    (not in sensibility)            (Notion)
                                        |
    A concept of reason      <--      IDEA
    formed from notions
    and therefore transcending
    the possibility of experience
This taxonomy tree of mental (re)presentations in a knowledge
system was drawn by Jon Cunnyngham of Genan Intelligent Systems
(Columbus, Ohio) after a group discussion on the following passage
from Kant's "Critique of Pure Reason" (B376-77):
The genus is representation in general (repraesentatio).
Subordinate to it stands representation with consciousness
(perceptio). A perception which relates solely to the subject
as the modification of its state is sensation (sensatio), an
objective perception is knowledge (cognitio). This is either
intuition or concept (intuitus vel conceptus). The former
relates immediately to the object and is single, the latter
refers to it mediately by means of a feature which several
things may have in common. The concept is either an empirical
or a pure concept. The pure concept, in so far as it has its
origin in the understanding alone (not in the pure image of
sensibility), is called a notion. A concept formed from
notions and transcending the possibility of experience is an
idea or concept of reason. Anyone who has familiarised
himself with these distinctions must find it intolerable to
hear the representation of the colour, red, called an idea.
It ought not even to be called a concept of understanding, a
notion.
A word of caution about the translation: First, the German
"Anschauung" is translated into English as "intuition." Contrary to
what my wife would have you think, this word should not be taken in
the sense of "woman's intuition" but rather in the sense of "raw
intake" or "input." Second, although "Einbildung" comes over to
English naturally as "image," the imaging faculty ("Einbildungskraft")
should only with caution be designated in English by "imagination,"
especially when we consider that the transcendental role of this
faculty is the central organizing factor in Kant's theory of the
human(oid) knowledge system. Third, the Norman Kemp Smith edition,
available through St. Martin's Press in paperback for somewhere in
the neighborhood of $15.00, is the best English translation, despite
the little problems I've just pointed out regarding "Anschauung" and
"Einbildung." The other translations pale in comparison to Smith's.
In view of all this, I'd like to add to Hayes's challenge:
Yes, there is a problem in the integration of perceptual (or should we
say "sense-based") and intellectual systems. But the solution is
already indicated in Kant's reconstruction of the human(oid) knowledge
system by the equating of "objective perception," "knowledge," and
"cognitio" (which, by the way, may or may not be equivalent to the
English use of "cognition"). The problem can be pinpointed more
exactly in this way: How can we force the system's objects to obey the
apriori structures of consciousness that are necessary for empirical
consciousness (awareness) of intelligible objects in a world, given to
a self. (The construct of a self in a sense-based system of objective
knowledge may seem to be a luxury, but without a self there can be no
object, hence no objective perception, hence no knowledge.)
What do we have now? Do we have intelligent systems?
Perhaps. Do we have knowledgeable systems? Maybe. Are they
conscious? No. The Hauptsatz for knowledge science is this:
"Knowledge is structured in consciousness." So investigate
consciousness and the self in the human, and then you'll have a basis
for (re)constructing it in a computerized knowledge system.
One more diagram that may be of help in unravelling all this:
                        Understanding                Sensibility
                              |
   Empirical         Knowledge of      Images
                     objects         -------->    Objects
                              |
   ---------------------------+---------------------------
                              |
   Transcendental    Pure concepts     Schemas    Pure forms of
                     (categories)    -------->    intuition
                     and principles               (space and time)
As was mentioned in an earlier posting to this newsgroup (V4 #157),
this diagram springs from a single sentence in the Critique (B74):
"Beide sind entweder rein, oder empirisch" (Both may be either pure
[transcendental] or empirical).
May I suggest that knowledge-system designers consider the
diagram in conjunction with the taxonomy tree of mental
representations. With these two diagrams in mind, two seminal
passages from the Critique (namely, B33-36 and B74-79) can now be
recognized for what they are: the basis for the design of integrated
sense/knowledge systems in the fifth and sixth generations. To be
sure, there is a lot of work to be done, but it can be done in a more
holistic way if the Critique is read as a design manual.
Tom Scott CSNET: scott@bgsu
Dept. of Math. & Stat. ARPANET: scott%bgsu@csnet-relay
Bowling Green State Univ. UUCP: cbosgd!osu-eddie!bgsuvax!scott
Bowling Green OH 43403-0221 ATT: 419-372-2636 (work)
------------------------------
Date: Sun, 13 Jul 86 23:16:27 PDT
From: kube%cogsci@berkeley.edu (Paul Kube)
Subject: Re: common sense
From Newman.pasa@Xerox.COM, AIList Digest V4 #165:
>...All three appear
>to believe that there is some magical property of human intelligence
>(Searle and Dreyfus appear to believe that there is something special
>about the biological nature of human intelligence) which cannot be
>automated, but none can come up with a reason for why this is so.
>
>Comments?? I would particularly like to hear what you think Searle or
>Dreyfus would say to this.
Searle and Dreyfus agree that human intelligence is biological (and so
*not* magical), and in fact believe that artificial intelligences
probably can be created. What they doubt is that a class of currently
popular techniques for attempting to produce artificial intelligence
will succeed. Beyond this, the scope of their conclusions, and their
arguments for them, are pretty different. They have given reasons for
their views at length in various publications, so I hesitate to post
such a short summary, but here goes:
Dreyfus has been heavily influenced by the existential
phenomenologists Heidegger and Merleau-Ponty. This stuff is extremely
dense going, but the main idea seems to be a reaction against the
Platonic or Cartesian picture of intelligent behavior as being
necessarily rational, reasoned, and rule-described. Instead,
attention is called to the vast bulk of unreflective, fluent, adaptive
coping that constitutes most of human interaction with the world.
That the phenomenology of this kind of intelligent behavior shows it
to not be produced by reasoning about facts, or applying rules to
propositional representations, etc., and that every system designed to
produce such behavior by these means has been brittle and not
extensible, are reasons to suppose that (1) it's not done that way and
(2) it can't be done that way. (These considerations are not intended
to apply to systems which are only rule-described at a sufficiently
subpersonal level, say at the level of weights of neuronal
interconnections. Last I heard, Dreyfus thinks that some flavors of
connectionism might be on the right track.)
Searle, on the other hand, talks about intentional mental states
(states which have semantic content, i.e., which are `about'
something), not behavior. His (I guess by now kind of classic)
Chinese Room argument is intended to show that no formal structure of
states of the sort required to satisfy a computational description of
a system will guarantee that any of the system's states are
intentional. And if it's not the structure of the states that does
the trick, it's probably what the states are instanced in, viz.
neurochemistry and neurophysiology, that lends them intentionality.
So, for Searle, if you want to build an artificial agent that will not
only behave intelligently but also really have beliefs, etc., you will
probably have to wire it up out of neurons, not transistors. (Anyway,
brains are the only kind of substance that we know of that produce
intentional states; Searle regards it as an open empirical question
whether it's possible to do it with silicon.)
Now you can think that these reasons are more or less awful, but it's
just not right to say that these guys have come up with no reasons at all.
Paul Kube
kube@berkeley.edu
...ucbvax!kube
------------------------------
Date: 14 Jul 86 09:42 PDT
From: Newman.pasa@Xerox.COM
Subject: Re: common sense
Thanks for the reply.
Dreyfus' view seems to have changed a bit since I last read anything of
his, so I will let that go. However, I suspect that what I am about to
say applies to him too.
I like your description of Searle's argument. It puts some things in a
clearer light than Searle's own stuff. However, I think that my point
still stands. Searle's argument seems to assume some "magical" property
(I really should be more careful when I use this term; please understand
that I mean only that the property is unexplained, and that I find its
existence highly unintuitive and unlikely) of biology that allows
neurons (governed by the laws of physics, probably entirely
deterministic) to produce a phenomenon (or epiphenomenon if you prefer -
intelligence) that is not producible by other deterministic systems.
What is this strange feature of neurobiology? What reason do we have to
believe that it exists other than the fact that it must exist if the
Chinese Room argument is correct? I personally think it much more
likely that there is a flaw somewhere in the Chinese Room argument.
>>Dave
------------------------------
Date: Mon 14 Jul 86 09:51:27-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Searle's Chinese Room
There is a lengthy rebuttal to Searle's Chinese Room argument
as the cover story in the latest Abacus. Dr. Rappaport claims
that human understanding (of Chinese or anything else) is different
from machine understanding but that both are implementations of
an abstract concept, "Understanding". I find this weak on three
counts:
1) Any two related concepts share a central core; defining this as the
abstract concept of which each is an implementation is suspect. Try
to define "chair" or "game" by intersecting the definitions of class
members and you will end up with inconsistent or empty abstractions.
2) Saying that machines are capable of "machine understanding", and
hence of "Understanding", takes the heart out of the argument. Anyone
would agree that a computer can "understand" Chinese (or arithmetic)
in a mechanical sense, but that does not advance us toward agreement
on whether computers can be intelligent. The issue now becomes "Can
machines be given 'human' understanding?" The question is difficult
even to state in this framework.
3) Searle's challenge needn't have been ducked in this manner. I
believe the resolution of the Chinese Room paradox is that, although
Searle does not understand Chinese, Searle plus his hypothetical
algorithm for answering Chinese queries would constitute a >>system<<
that does understand Chinese. The Room understands, even though
neither Searle nor his written instruction set understands. By
analogy, I would say that Searle understands English even though his
brain circuitry (or homunculus or other wetware) does not.
I have not read the literature surrounding Searle's argument, but I
do not believe this Abacus article has the final word.
-- Ken Laws
------------------------------
End of AIList Digest
********************
∂16-Jul-86 1551 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #167
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 16 Jul 86 15:46:34 PDT
Date: Wed 16 Jul 1986 11:42-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #167
To: AIList@SRI-AI
AIList Digest Wednesday, 16 Jul 1986 Volume 4 : Issue 167
Today's Topics:
Seminars - SIDESMAN Silicon Design System (CMU) &
Automata Theory, Nuprl Proof Development System (SRI),
Conference - AAAI Workshop on Parallel Models, Symbolic Processing &
3rd IEEE Conference on AI Applications
----------------------------------------------------------------------
Date: 9 Jul 1986 1227-EDT
From: Laura Forsyth <FORSYTH@C.CS.CMU.EDU>
Subject: Seminar - SIDESMAN Silicon Design System (CMU)
Wednesday, July 9th, 2:00 p.m.
Room 5409 Wean Hall
Professor Hilary J. Kahn
SIDESMAN
A Silicon Design System Which Has Knowledge Based Components
Hilary J. Kahn
Department of Computer Science
University of Manchester
Oxford Road
Manchester M13 9PL
England
The SIDESMAN system is a silicon design system which has the
following properties:
- Facilities to ensure that application processes are
technology adaptable
- Support for Knowledge Based CAD applications where
appropriate
- A range of tools to support a general silicon
compilation system
- Access to a specialist hardware simulation
machine
This talk will discuss the general structure and motivations behind the
SIDESMAN system and will briefly describe some of the SIDESMAN application
processes.
The work detailed is part of a current research project being undertaken
by H.J. Kahn and N.P. Filer.
------------------------------
Date: Mon 14 Jul 86 11:54:31-PDT
From: Richard Waldinger <WALDINGER@SRI-WARBUCKS.ARPA>
Subject: Seminar - Automata Theory, Nuprl Proof Development System (SRI)
Title: Implementing Automata Theory within the Nuprl Proof Development
System
Speaker: Christoph Kreitz, Dept. of Computer Science, Cornell University
Time: Wednesday, 16 July, 4:15pm (Visitors from
outside please come to reception desk a little
early. Coffee at 3:45 in Waldinger office)
Place: EJ228 (New AI Center Conference Room) SRI
International, Building E
IMPLEMENTING AUTOMATA THEORY
with the
Nuprl Proof Development System
by
Christoph Kreitz
Department of Computer Science
Cornell University
Problem solving is a significant part of science and mathematics and
is the most intellectually significant part of programming. Nuprl is
a computer system which provides assistance with solving a problem.
It supports the creation of formulas, proofs and terms in a formal
theory of mathematics; with it one can express concepts associated
with definitions, theorems, theories, books and libraries. Moreover
the formal theory behind it is sensitive to the computational meaning
of terms, assertions and proofs, and the computer system is able to
carry out the corresponding actions. Thus Nuprl includes
computer-aided program development, but in a broader sense it is a
system for proving theorems and implementing mathematics.
The actual implementation of a mathematical theory, such as the theory
of finite automata, with the Nuprl proof development system gives lots
of insights into its strengths and weaknesses and shows that it is
powerful enough to obtain nontrivial results within reasonable amounts
of time.
The talk will give a brief overview of Nuprl, its object language and
inference rules (Type Theory), and of features of the computer system
itself. These features support partial automation of the problem-solving
process and extensions of the object language by a Nuprl user.
Details of the implementation of automata theory will be shown
afterwards. I will describe some of the techniques and extensions to
Nuprl which were necessary to formulate and prove theorems from
automata theory. In particular, these techniques keep Nuprl proofs
small and understandable. I will present a complete Nuprl proof of
the pumping lemma and an evaluation of its computational content as
performed on a computer. Finally an outline for possible future
developments is given.
------------------------------
Date: Mon, 14 Jul 86 16:44:58 edt
From: Beth Adelson <adelson@YALE.ARPA>
Subject: Conference - AAAI Workshop on Parallel Models, Symbolic Processing
WORKSHOP ON PARALLEL MODELS AND SYMBOLIC PROCESSING
Chair: Beth Adelson
The purpose of the workshop is to look at current connectionist models in
light of traditional AI problems. We will ask how the connectionist and the
traditional approaches inform and constrain each other. Several new
connectionist approaches to central AI problems will be presented. These new
approaches address some issues which have previously been important but
difficult in connectionism.
SCHEDULE:
Drew McDermott
Yale University
What AI Needs From Connectionism
Jerome Feldman
University of Rochester
Semantic Networks and Neural Nets
Geoffrey Hinton
Carnegie Mellon University
Connectionists Make Better Bayesians:
Bayesian Inference In A Connectionist Network
David Waltz
Thinking Machines
Challenges and Directions for Connectionism
Organizer: Beth Adelson
adelson@yale
Before July 26:
NSF
Washington, DC 20550
(202) 357-9569
After July 26:
Tufts University
Department of Computer Science
Medford, MA 02155
(617) 381-3214
Length: 3 hours:
Four 20 minute talks with 10 minutes for questions after each
One hour for audience discussion.
Date: August 14
Time: 1-4 PM
Place: Room 213 in the Law School
Attendees: Open to anyone registered at the conference
(but audience size is limited to 100)
------------------------------
Date: Fri 11 Jul 86 18:57:46-CDT
From: Jim Miller <HI.JMILLER@MCC.COM>
Subject: Conference - 3rd IEEE Conference on AI Applications
CALL FOR PAPERS
THE THIRD IEEE CONFERENCE ON
ARTIFICIAL INTELLIGENCE APPLICATIONS
ORLANDO HYATT REGENCY
ORLANDO, FLORIDA
FEBRUARY 22-28, 1987
SPONSORED BY THE IEEE COMPUTER SOCIETY
This conference is devoted to the application of artificial intelligence
techniques to real-world problems. Two kinds of papers are appropriate:
- Papers that focus on knowledge-based techniques that can be applied
effectively to important problems, and
- Papers that focus on particular knowledge-based application programs
that solve significant problems.
AI techniques include: Application areas include:
- Knowledge representation - Science and engineering
- Reasoning - Medicine
- Knowledge acquisition - Business
- Learning - Natural language
- Uncertainty - Intelligent interfaces
- General tools - Vision
- Robotics
Only new, significant, and previously unpublished work will be accepted. Two
kinds of papers may be submitted:
- Full papers: 5000 words maximum, describing significant completed
research.
- Poster session papers: 1000 words, describing interesting ongoing
research.
Both categories of papers will be reviewed by the Program Committee.
CONFERENCE COMMITTEE
General chair: Program committee chairs:
Jan Aikins, Aion James Miller and Elaine Rich, MCC
Program committee:
Jan Aikins, Aion Benjamin Kuipers, University of Texas
Byron Davies, Texas Instruments John McDermott, Carnegie-Mellon
William Clancey, Stanford University Charles Petrie, MCC
Keith Clark, Imperial College John Roach, Virginia Polytechnic
Michael Fehling, Teknowledge J. M. Tenenbaum, Schlumberger
Mark Fox, Carnegie-Mellon University Harry Tennant, Texas Instruments
Bruce Hamill, Johns Hopkins/APL Charles R. Weisbin, Oak Ridge
Peter Hart, Syntelligence Michael Williams, Intellicorp
Elaine Kant, Schlumberger
SUBMISSION INFORMATION
- Full length papers: Submit four copies of the paper by September 9,
1986 to the Program Committee chairs, listed below. The first page of
the paper should contain the author's (or authors') name, affiliation,
and address, a 100 word abstract, and a list of appropriate subject
categories, both AI topics and application areas. Conference sessions
may be organized around either kind of subject category. Authors are
not restricted to only those categories listed above. Accepted papers
will be allocated six manuscript pages in the proceedings.
- Poster session papers: Submit four copies of a 1000 word abstract by
December 1, 1986 to the Program Committee chairs, listed below.
Indicate on the front of the paper all appropriate subject categories.
Accepted abstracts will be reprinted and distributed at the
conference. In addition, authors of accepted poster session papers
will be provided with table space at the conference to display
examples of their work and to discuss their findings with others.
IMPORTANT DATES
- Full-length papers must be received by: September 9, 1986
- Author notifications mailed: October 24, 1986
- Accepted full-length papers returned to IEEE for proceedings:
November 15, 1986
- Poster session papers must be received by: December 1, 1986
- Conference: February 22 - 28, 1987, Orlando, Florida
FOR FURTHER INFORMATION, CONTACT:
Jan Aikins James Miller
General Chair Elaine Rich
Third IEEE Conference on Program Committee Chairs
Artificial Intelligence Third IEEE Conference on
Applications Artificial Intelligence
Aion Corporation Applications
101 University Avenue MCC
Palo Alto, California 94301 9430 Research Blvd.
Austin, Texas 78759
------------------------------
End of AIList Digest
********************
∂18-Jul-86 1531 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #168
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 18 Jul 86 15:31:16 PDT
Date: Fri 18 Jul 1986 11:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #168
To: AIList@SRI-STRIPE
AIList Digest Friday, 18 Jul 1986 Volume 4 : Issue 168
Today's Topics:
Queries - PC Expert-System Shell Demos &
Garbage Collection Side Effects & Online Almanacs,
Policy - Signing with Net Addresses,
Literature - LISP Texts & Natural Language,
AI Tools - Parallel FCP,
Review - July Spang Robinson Report
----------------------------------------------------------------------
Date: 14 Jul 86 06:56 PDT
From: A. Winsor Brown, Douglas Aircraft ET&IS <AWB.MDC@OFFICE-1.ARPA>
Subject: PC Expert-System Shell Demos
I am aware of the following low cost or no cost PC expert system shell demo
and/or full packages. I am going to use all of them in an internal course on
PC based shells/tools, and want each of the students to retain a copy (thus the
concern for low priced demonstrators or full systems). Since the course is to
address evaluation criteria, actually seeing the tool in use is important.
If you know of others would you please let me know, and include a source phone
number (and address if the provider was not listed in the 15 June "AI Software
for MS-DOS (Long List)"). I will summarize the responses and re-post them
here.
M.1 with a date of Oct'85 in the "Demonstration Materials" manual; from
Teknowledge 415/327-6640. Single disk plus report sent on request; freely
copiable for demos. Loading and saving knowledge base disabled in demo; only
five knowledge base additions allowed.
EXSYS Version 3 from EXSYS, Inc (Albuquerque, NM) 505/836-6676. Copying
and distribution encouraged; first copy costs $15. Learning tutorial
included with all demo's; full manual in softcopy form in the older 3
diskette version; hard copy form of manual with the newer 2 disk set which
also allows saving a 25 rule expert system.
Guru initial release (1.00c) from MDBS 317/463-2581. 4 disk set; sent on
request (not copy protected but further distribution not proscribed:
Copyright protection claimed for diskettes and 50 page "Demonstration
Instructions"). On-line help documentation is all that is provided.
Definitely need hard disk: 1.2Meg needed just for demo! Can develop small
(10 rule) single rule-set expert system with Demo; some other restrictions
apply to other parts.
1st Class version 3.0 (3/86) from Programs in Motion 617/653-5093. Demo
disk costs $20; re-distribution not currently desired/allowed. Manual not
included on diskette; some technical details provided on-line.
Personal Consultant version 1.00 from TI 800/527-3500. Available from
TI, at no charge; copy protected. 43 page "Demonstration Guide" and 22 page
"Technical Report" (on-line help; no documentation per se). Needs full
512K; can run off single floppy; crippled so user cannot save developed
system.
ESIE version 1.1 from Lightwave Consultants 813/988-5033. Shareware
(registration fee is $75). 25 page manual included on diskette.
Knowledge Delivery System version of 8/85 from KDS Corporation
312/251-2621. Available from KDS for $25; allowed to be reproduced and
distribution verbally proscribed. No manual; not clear about on-line help.
Development example limited to 20 cases (examples) from the normal 4096;
also has some text size restrictions.
Expert System version ??? from PPE 301/977-1489. A public domain tool;
available for $20 from PPE too. Manual situation not clear ("program is
self documenting" comment from another knowledgeable source). Source code
is included!
Thank you. --Winsor
------------------------------
Date: Tue, 15 Jul 86 23:46:59 CDT
From: David Chase <rbbb@rice.edu>
Subject: Query on compilers, optimization, and garbage collection
I am looking for references on interactions (good and bad, intended and
unintended) between garbage collectors and compilers that (attempt to) do
optimizations. For example, if you know of a good optimization that
reduces the amount of garbage produced, tell me about it. If you know of
an ugly surprise that someone received when they tried to optimize code in
a garbage-collected system, tell me about that.
I realize that this isn't exactly AI, but I think people reading this list
might have some pointers (to other lists, if nothing else).
What I already have (no references for ugly surprises):
"Optimization of Very High Level Languages-I: Value Transmission and
its Corollaries"
Schwartz, in Computer Languages, volume 1, pp 161-194 (1975)
(copy optimizations, heap->stack allocation conversions)
"Experience with the SETL Optimizer"
Freudenberger, Schwartz and Sharir, in TOPLAS 5:1 (January 1983)
(copy optimizations)
"Binding Time Optimization in Programming Languages: Some Thoughts
Toward the Design of an Ideal Language"
Muchnick and Jones, in POPL 3, 1976
(heap->stack allocation conversions)
"Shifting Garbage Collection Overhead to Compile Time"
Barth, in CACM 20:7 (July 1977)
(reference counting at compile time)
"RABBIT: A Compiler for SCHEME"
Steele, 1978
(heap->stack allocation conversions for activation records)
"Fast Arithmetic in MacLISP"
Steele, in 1977 Macsyma Users' Conference
(heap->stack allocation conversions for numbers)
"An Optimizing Compiler for Lexically Scoped Lisp"
Brooks, Gabriel and Steele, in Compiler Construction 1982
(heap->stack allocation conversions for numbers)
"A scheme of storage allocation and garbage collection for ALGOL 68"
Branquart and Levi, in Algol 68 Implementation (North-Holland, 1971)
(compiled marking routines)
"Methods of garbage collection for ALGOL 68"
Wodon, in Algol 68 Implementation (North-Holland, 1971)
(compiled marking routines)
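Several of the references above concern heap-to-stack allocation conversion, which depends on knowing whether an allocated object can outlive the procedure that created it. As a hedged illustration only (the operation names and IR shape here are invented, not taken from any of the cited papers), here is a minimal flow-insensitive escape check over a toy intermediate representation:

```python
# Toy escape analysis over a straight-line function body.
# An allocation "escapes" (and must stay on the heap) if its result is
# returned or stored into the heap; everything else can be stack-allocated.
def escaping_allocs(ops):
    """ops: list of (opcode, dest, args) tuples; returns the set of
    allocation temporaries that escape the function."""
    allocs = set()
    escaped = set()
    for opcode, dest, args in ops:
        if opcode == "alloc":
            allocs.add(dest)
        elif opcode in ("return", "store_global", "store_field"):
            # Any allocated temporary used here is visible after the call.
            escaped.update(a for a in args if a in allocs)
    return escaped

body = [
    ("alloc", "t1", ()),             # scratch object, used only locally
    ("alloc", "t2", ()),             # result object
    ("store_field", None, ("t2",)),  # t2 written into the heap -> escapes
    ("return", None, ("t2",)),
]
print(escaping_allocs(body))         # t1 is absent, so t1 may go on the stack
```

A real compiler would of course need a flow- and field-sensitive version of this, plus interaction with the collector's root-finding; the sketch only shows the shape of the question the cited papers address.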
David Chase
------------------------------
Date: Wed, 16 Jul 86 16:28 EST
From: LEWIS%cs.umass.edu@CSNET-RELAY.ARPA
Subject: almanacs and magnitudes
I would appreciate any information people could give me on the availability
of online almanacs or similar large bodies of broad-ranging statistical data.
Public domain or cheap would be preferable, of course. I also would be
interested in hearing about any work that has been done either on AI programs
for heuristically estimating the information one finds in almanacs, or on
psychological research on human order-of-magnitude estimates. So far the
only places I've seen this subject discussed are the entertaining diatribes
on "number numbness" by Douglas Hofstadter in Metamagical Themas, and by
Jon Bentley in a recent issue of CACM.
Please send replies to me; if there is sufficient interest I will summarize
for the digest.
Thanks, David D. Lewis
Univ. of Massachusetts, Amherst
"well, I used to think it was LEWIS@UMASS-CS
and lately it's been LEWIS%cs.umass.edu@CSNET-RELAY.ARPA
but maybe it's longer now"
------------------------------
Date: Mon, 14 Jul 86 16:21:04 cdt
From: Girish Kumthekar <kumthek%lsu.csnet@CSNET-RELAY.ARPA>
Subject: Replying to AIList messages
I have been reading messages and find them interesting.
However I find that most of the times, the direct address where the
reply can be sent is not given.
It is typically at the top of the message, and is probably mixed with
other details.
This forces you to type cntl-z to stop the viewing and come back and note
the address at the top.
So would you all please give your addresses at the end of messages.
(Note that at the top of the message, one doesn't know if this message is
going to turn out interesting or not!)
My address is kumthek%lsu@csnet-relay.csnet
Thanks in advance
Girish Kumthekar (504)-343-5334
------------------------------
Date: Fri, 11 Jul 86 09:29:42 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: LISP texts (followup article)
Another good LISP text is the new one by my colleague Stuart C. Shapiro:
LISP: An Interactive Approach (Computer Science Press). It is
dialect-independent and intended for self-study. We have used it (in
both manuscript and published versions) for a number of years at SUNY
Buffalo, with great success.
------------------------------
Date: 14 JUL 86 16:34-EST
From: PJURKAT%SITVXA.BITNET@WISCVM.ARPA
Subject: REFERENCES ON NATURAL LANGUAGE
I'm a little slow reading my mail. This is a response to a query in
Vol 4 Issue 151 by Gene Guglielmo (sefai@nwc-143b) asking for
references concerning the representation of natural language on a computer
system. Please pass the following on to him:
Parisi and Antonucci
Essentials of Grammar
It presents a representation of sentences in functional form, that is,
predicate(arg1, arg2, ... )
taking into account a goodly amount of semantics. I have found it valuable,
especially for the analysis of belief systems.
Cheers - Peter Jurkat (pjurkat@sitvxa.bitnet)
------------------------------
Date: Fri, 11 Jul 86 09:35:52 -0200
From: Steve Taylor <steve%wisdom.bitnet@WISCVM.ARPA>
Subject: Parallel FCP
We are pleased to announce the availability of a parallel Flat
Concurrent Prolog (FCP) [1,2] interpreter for the Intel iPSC
Hypercube. The interpreter may be used for initial experiments with
parallel logic programming; it includes most of the kernel predicates
available in the Logix system.
FCP programs may be developed on a uniprocessor under the
Logix system, which is announced separately [3]; this environment operates
on the VAX, SUN or Intel 310 systems. Recompilation allows the
resulting program to execute on the Intel iPSC hypercube. Simple
techniques have been developed to map processes and code to the
physical machine [4]. These techniques allow multiple virtual
machines to execute concurrently; multiple applications may execute
concurrently on a given virtual machine.
PLEASE NOTE: The interpreter is an experimental system which has only
recently been completed; it is being made available on an informal
basis to encourage members of the community to experiment with the
language.
The handling fee for a non-commercial license to the
Parallel FCP Interpreter and the Logix system for the 310 is
$250 U.S. To obtain a license form and/or a copy of the Logix user
manual write to:
Steve Taylor
Department of Computer Science
The Weizmann Institute of Science
Rehovot 76100, Israel
To obtain an electronic copy of the license write to:
CSnet, Bitnet: steve@wisdom
ARPAnet: steve%wisdom.bitnet@wiscvm.arpa
Sincerely,
Steve Taylor
References
[1] C. Mierowsky, S. Taylor, E. Shapiro, J. Levy and M. Safra, "The
Design and Implementation of Flat Concurrent Prolog", Weizmann
Institute Technical Report CS85-09, 1986.
[2] A. Houri and E. Shapiro, "A sequential abstract machine for Flat
Concurrent Prolog", Weizmann Institute Technical Report CS86-20,
1986.
[3] W. Silverman, M. Hirsch, A. Houri, and E. Shapiro, "The Logix
system user manual, Version 1.21", Weizmann Institute Technical
Report CS86-21.
[4] S. Taylor, E. Av-Ron and E. Shapiro, "A Layered Method for
Process and Code Mapping", Weizmann Institute Technical Report
CS86-17.
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.ARPA
Subject: July Spang Robinson Report summary
Summary of Spang Robinson Report July 1986, Volume 2, No. 7
(Emphasis on AI and Parallel Processing)
Advanced Decision Systems is using the Butterfly for AI development.
They are developing a SCHEME using message passing for the Butterfly. They
are developing an expert system to balance work loads and manage faults.
The system works well on about 10 nodes, but past that point, system
performance does not continue to improve as processors are added. It also
takes 20 minutes to reboot the machines.
Oak Ridge National Laboratories has been using the NCUBE for machine
vision research.
NASA is using a FLEX/32 parallel machine to develop an expert system
shell and an expert system to predict sun spot activity. CLIPS will
run on the FLEX/32 and is an OPS5-like system written in C. In the
sun spot system, the expert system part of the application will run on
the Symbolics with the math part running on the FLEX using parallelism.
LUCID is developing an implementation of parallel LISP under subcontract
to Stanford. The work is starting on a newly purchased Alliant Computer
Systems.
Cray Research has some proprietary AI projects in its Applications
department. ELXSI is looking for a client who needs AI on a
mainframe-class machine. It is also looking for a vendor to port an
AI language to ELXSI. Encore Computers and Masscomp have active programs to
produce AI languages.
The Kemp-Carraway Heart Institute is doing image analysis of
echocardiograms using fuzzy logic. It has developed a FLOPS product in
which rules can fire in parallel, eliminating the need for "truth
maintenance" when rules do not have to be executed sequentially. The system
uses fuzzy logic with an OPS-5 syntax.
One Forth researcher claims to have designed a 1 million logical inference
per second expert system on the Novix NC4000 Forth engine (a $150.00 chip).
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
News section
Carnegie Group has signed an agreement with Hewlett-Packard to offer
its Knowledge Craft expert system shell on the HP 9000 Model 320.
Lisp Machines has made 18 changes to their machine to improve reliability.
They will have an AVP processor that will be twice the speed of their
current processor. They are also working on a LISP chip and on improvement
of the development environment for non-LISP machines.
Intellicorp has doubled its direct sales force and has established a VAR
relationship with AMOCO corporation.
TI has sold 1000 Explorer work stations of which 200 are in universities.
A key reason for Burroughs' recent merger with Sperry is Sperry's AI activity.
------------------------------
End of AIList Digest
********************
∂18-Jul-86 2216 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #169
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 18 Jul 86 22:16:43 PDT
Date: Fri 18 Jul 1986 11:35-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #169
To: AIList@SRI-STRIPE
AIList Digest Friday, 18 Jul 1986 Volume 4 : Issue 169
Today's Topics:
Natural Language - Interactive Architectures,
Philosophy - Common Sense & Intelligence Testing & Searle's Chinese Room
----------------------------------------------------------------------
Date: Mon, 14 Jul 86 16:15:01 BST
From: ZNAC450 <mcvax!kcl-cs!fgtbell@seismo.CSS.GOV>
Subject: Interactive Architectures & Common Sense
Subject: Re: Architectures for interactive systems?
In article <8607032203.AA12866@linc.cis.upenn.edu>
brant%linc.cis.upenn.edu@CIS.UPENN.EDU.UUCP writes:
>There seems to have been a great deal of work done in
>natural language processing, yet so far I am unaware of
>any attempt to build a practical yet theoretically well-
>founded interactive system or an architecture for one.
>
>When I use the phrase "practical yet theoretically well-
>founded interactive system," I mean a system that a user
>can interact with in natural language, that is capable of
>some useful subset of intelligent interactive (question-
>answering) behaviors, and that is not merely a clever hack.
>
>Many of the sub-problems have been studied at least once.
>Work has been done on various types of necessary response
>behavior, such as clarification and misconception correction.
>Work has been done on parsing, semantic interpretation, and
>text generation, and other problems as well. But has any
>work been done on putting all these ideas together in a
>"real" system?
I would like to try to build such a system but it's not going to
be easy and will probably take several years. I'm going to have to
build it in small pieces, starting off small and gradually improving
the areas that the system can cope with.
>I see a lot of research that concludes with
>an implementation that solves only the stated problem, and
>nothing else.
That's because the time taken to construct a sufficiently general system is
greater than most people are prepared to put in (measure it in decades), and
such a system is so demanding on resources that with present machines it will
run so slowly that the user gets bored waiting for a response (like UN*X :-)).
>Presumably, a "real user" will not want to
>have to run system A to correct invalid plans, system B to
>answer direct questions, system C to handle questions with
>misconceptions, and so forth.
>
No, what we ideally want is a system which can hold a conversation in real
time, with user models, an idea of `context', and a great deal of information
about the world in general. The last, by the way, is the real stumbling block.
Current models of knowledge representation just aren't up to coping with
large amounts of information. This is why expert systems, for example, tend
to have 3,000 rules or less. It is true that dealing with large amounts of
information will become easier as hardware improves and the LIPS (Logical
Inferences Per Second) rate increases. However, it won't solve the real
problem which is that we just don't know how to organise information in
a sufficiently efficient manner at present.
>I would be interested to get any references to work on such
>integrated systems.
If you want to solve the problem of building integrated NLP systems,
you are aiming to produce truly intelligent behaviour -- if you accept
the definition that AI is about performing tasks by machine which require
intelligence in humans. The problems of building integrated NLP systems
are the problems of AI, period -- i.e., knowledge representation, reasoning
by analogy, reasoning by inference, dealing with large search spaces,
forming user models, etc.
I believe that in order to perform these tasks efficiently, we are going to
have to look at how people perform these tasks. What I mean by this is that
we are going to have to take a long hard look at the way the brain works --
down at the `hardware' level, i.e. neurons. The problem may well be that our
approach to AI so far has been too `high-level'. We have attempted to
simulate high-level activities of the human brain (reasoning by analogy,
symbol perception etc.) by high-level algorithms.
These simulations have not been unsuccessful, but they have not exactly
been very efficient either. It is about time we stopped trying to simulate
and performed some real analysis of what the brain does, at the bottom
level. If this means constructing computer models of the brain, then so
be it.
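The "bottom level" francis has in mind can be illustrated with the simplest neural model of all, the McCulloch-Pitts threshold unit. This sketch is my own illustration, not something from the message; the weights and threshold are chosen by hand to make the unit compute a logical AND:

```python
# A minimal McCulloch-Pitts style unit: it fires (outputs 1) when the
# weighted sum of its inputs reaches the threshold, and is silent otherwise.
def neuron(weights, threshold, inputs):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, the cell fires only when
# both inputs are active -- i.e., it computes AND.
def and_gate(a, b):
    return neuron(weights=[1, 1], threshold=2, inputs=[a, b])
```

Networks of such units were shown in the 1940s to be able to compute any Boolean function, which is part of why modeling at this level seems attractive; the open question the message raises is whether high-level abilities like analogy emerge efficiently from it.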
Two books which argue this point of view much better than I can are :
Godel, Escher, Bach : An Eternal Golden Braid, by Douglas R. Hofstadter,
and Metamagical Themas, also by Douglas R. Hofstadter.
>Also, what are people's opinions on this
>subject: are practical NLP systems too hard to build now?
No, but they are *very* hard to build. An integrated system would take
more resources than anyone is prepared to spend.
>Should we
>leave the construction of practical systems to private enterprise
>and restrict ourselves to the basic research problems?
Not at all. If we can't build something useful at the end of the day
then we haven't justified the cost of all this effort. But a lot
more basic research has to be done before we can even think about
building a practical system.
----francis
mcvax!ukc!kcl-cs!fgtbell
Subject: Re: common sense
References: <8607031718.AA14552@ucbjade.Berkeley.Edu>
In article <8607031718.AA14552@ucbjade.Berkeley.Edu>
KVQJ@CORNELLA.BITNET.UUCP writes:
>My point is this: I think it is intrinsically impossible to program
>common sense because a computer is not a man. A computer cannot
>experience what man can; it cannot see or make ubiquitous judgements
>that man can.
What if you allow a computer to gather data from its environment ?
Wouldn't it be possible to make predictive decisions, based on what
had happened before ? Isn't this what humans do ?
I thought common sense was what allowed one to say what was *likely*
to happen, based on one's previous experiences. Is there some reason
why computers couldn't do this ?
-----francis
mcvax!ukc!kcl-cs!fgtbell
------------------------------
Date: Mon, 14 Jul 86 17:02:52 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Blade Runner and Intelligence Testing (Vol 4 # 165).
The test used in the film is to look for an emotional response to the
questions. They are fired off in quick succession, without giving the
candidate time to think. He might then get angry...
> By the way, the fastest way to identify human
> intelligence may be to look for questions that a human will recognize
> as nonsense or outside his expected sphere of knowledge ("How long
> would you broil a 1-pound docket?" "Is the Des Moines courthouse taller
> or shorter than the Wichita city hall?") but that an imitator might try
> to bluff through. -- KIL
``Bluff''? What's the payoff?
Gordon Joly
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%UK.AC.QMC.MATHS%UK.AC.QMC.CS@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Tue, 15 Jul 86 11:34:27 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Blade Runner and Intelligence Testing (Vol 4 # 165) -- Coda
Interesting point about the imitator not being able to discover
what is a valid question and what is a piece of nonsense. Reminds
me of the theory of automatic integration in computer algebra.
The analogy is a bit thin, but basically the algebra system decides
first whether or not it has the power (i.e., there exists an algorithm)
before trying to proceed with the integration.
In fact, the machine never integrates; it just differentiates in a
clever way to get near to the answer. It then alters the result to
get the correct answer, and uses the inverse nature of differentiation
and integration. I said it was a bit thin; the integrator is working
backwards from the answer to find the correct question. :-)
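The "inverse nature of differentiation and integration" that the integrator exploits is easy to demonstrate on polynomials, where both operations are mechanical. This is only an illustrative sketch (the coefficient-list representation and function names are mine, not from any algebra system mentioned above): a candidate antiderivative is checked by differentiating it back.

```python
# Polynomials as coefficient lists [c0, c1, c2, ...], meaning
# c0 + c1*x + c2*x**2 + ...
def differentiate(p):
    # d/dx of ci*x**i is i*ci*x**(i-1), so shift down and scale.
    return [i * c for i, c in enumerate(p)][1:] or [0]

def integrate(p):
    # Antiderivative with constant of integration 0:
    # ci*x**i integrates to ci/(i+1) * x**(i+1).
    return [0] + [c / (i + 1) for i, c in enumerate(p)]

p = [0, 0, 3]            # 3*x**2
F = integrate(p)          # x**3 (as [0, 0.0, 0.0, 1.0])
assert differentiate(F) == p   # verify the answer by differentiating back
```

For polynomials the guess is exact; the point of the analogy is that for harder integrands the system still works in this direction, differentiating trial forms and adjusting them until the derivative matches the integrand.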
Gordon Joly
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%UK.AC.QMC.MATHS%UK.AC.QMC.CS@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj
------------------------------
Date: Mon, 14 Jul 86 21:17:10 est
From: Perry Wagle <wagle%iuvax.indiana.edu@CSNET-RELAY.ARPA>
Subject: common sense
[this is a response to ucbjade!KVQJ's note on common sense. ]
The flaw in Searle's Chinese Room Experiment is that he gets bogged down
in considering the demon to be doing the "understanding" rather than the
formal rule system itself. And of course it is absurd to claim that the
demon is understanding anything -- just as it is absurd to claim that the
individual neurons in your brain are understanding anything.
Perry Wagle, Indiana University, Bloomington Indiana.
...!ihnp4!inuxc!iuvax!wagle (USENET)
wagle@indiana (CSNET)
wagle%indiana@csnet-relay (ARPA)
------------------------------
Date: Tue, 15 Jul 86 10:57:50 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: common sense
In article <860714-094227-1917@Xerox>, Newman.pasa@XEROX.COM asks:
>
> However, I think that my point still stands. Searle's argument seems to
> assume some "magical" property ... of biology that allows neurons ...
> to produce a phenomenon ... that is not producible by other
> deterministic systems.
>
> What is this strange feature of neurobiology?
I believe that the mysterious factor is not literally "magic" (in your
broad sense), but merely "invisible" to the classical scientific method.
A man's brain is very much an _interactive_ system. It interacts
continually with all of the world that it can sense.
On the other hand, laboratory experiments are designed to be closed
systems. They are designed to be controllable; they rely on artificial
input, at least in the experimental stage. (When they are used in the
field, they may be regarded as intelligent; even a door controlled by
an electric eye meets our intuitive criterion for intelligence.)
Just what do we demand of "artificial intelligence?" Opening doors
for us? Writing music and poems for us? Discoursing on philosophy
for us? --Or doing things for _itself,_ and to Hell with humans?
I don't think that A.I. people agree about this.
------------------------------
Date: 15 Jul 86 08:16:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Searle and Understanding
This is in response to recent discussion about whether AI systems
can/will understand things as humans do. Searle's Chinese room
example suggests the extent to which the implementation of a formal
system may or may not understand something. Here's another,
perhaps simpler, example that's been discussed on the philosophy
list.
Imagine we are visited by ETS - an extra-terrestrial scientist.
He knows all the science we do plus a lot more - quarks,
quantum mechanics, neurobiology, you-name-it. Being smart,
he quickly learns our language and studies our (pitifully
primitive) biology, so he knows about how we perceive as well.
But, like all of his species, he's totally color-blind.
Now, making the common assumption that color-knowledge cannot
be conveyed verbally or symbolically, does ETS "understand"
the concept of yellow?
I think the example shows that there are two related meanings
of "understanding". Certainly, in a formal, scientific sense,
ETS knows (understands-1) as much about yellow as anyone - all
the associated wavelengths, retinal reactions, brain-states,
etc. He can use this concept in formal systems, manipulate it,
etc. But *something* is missing - ETS doesn't know
(understand-2) "what it's like to see yellow", to borrow/bend
Nagel's phrase.
It's this "what it's like to be a subject experiencing X" that
eludes capture (I suppose) by AI systems. And I think the
point of the Chinese room example is the same - the system as
a whole *does* understand-1 Chinese, but doesn't understand-2
Chinese.
To get a bit more poignant, what systems understand-2 pain?
Would you really feel as guilty kicking a very sophisticated
robot as kicking a cat? I think it's the ambiguity between
these senses of understanding that underlies a lot of the debate.
They correspond somewhat to Dennett's "program-receptive" and
"program-resistant" properties of consciousness.
As far as I can see, the lack of understanding-2 in artificial
systems poses no particular barrier to their performance.
Eg, no doubt we could build a machine which in fact would
correctly label colors - but that is not a reason to suppose
that it's *conscious* of colors, as we and some animals are.
Nonetheless, *even if there are no performance implications*,
there is a real something-or-other we have going on inside us
that does not go on inside Chinese rooms, robots, etc., and no
one knows how even to begin to address the replication of this
understanding-2 (if indeed anyone wants to bother).
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: Tue 15 Jul 86 12:31:07-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Re: AIList Digest V4 #166
re: Searle's chinese room
There has been by now an ENORMOUS amount of discussion of this argument, far
more than it deserves. For a start, check out the BBS treatment surrounding
the original paper, with all the commentaries and replies.
Searle's position is quite coherent and rational, and ultimately
whether or not he is right will have to be decided empirically, I
believe. This is not to say that all his arguments are good, but
that's a different question. He thinks that whatever it is about the
brain ( or perhaps the whole organism ) which gives it the power of
intentional thought will be something biological. No mechanical
electronic device will therefore really be able to *think about* the
world in the way we can. An artificial brain might be able to, it's
not a matter of natural vs. artificial, notice: and it's just possible
that some other kind of hardware might support intentional thinking,
although he believes not; but certainly, it can't be done by a
representationalist machine, whose behavior is at best a simulation of
thought ( and which, he believes, will never in fact be a successful
simulation ). Part of this position is that the behavior of a system
is no guide to whether or not it is *really* thinking. If his closest
friend died, and an autopsy revealed, to Searle's great surprise, that
he had been a computational robot all his life, then Searle would say
that the man hadn't been aware of anything all along. The 'Turing test'
is quite unconvincing to Searle.
This intellectual position is quite consistent and impregnable to argument.
It turns ultimately on an almost legal point: if a robot behaves
'intelligently', is that enough reason to attribute 'intelligence'
to it? ( Substitute your favorite psychological predicate. ) Turing and his
successors say yes, Searle says no. I think all we can do is agree to
disagree for the time being. When the robots get to be more convincing, let's
come back and ask him again ( or send one of them to do it ).
Pat Hayes
------------------------------
End of AIList Digest
********************
∂19-Jul-86 0036 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #170
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 19 Jul 86 00:36:45 PDT
Date: Fri 18 Jul 1986 13:52-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #170
To: AIList@SRI-STRIPE
AIList Digest Saturday, 19 Jul 1986 Volume 4 : Issue 170
Today's Topics:
Philosophy - Creativity and Analogy & Life and Intelligence &
Gibson's Theory of Perception & Representationalist perception,
Humor - Circular Reasoning as a Tool
----------------------------------------------------------------------
Date: Tue, 15 Jul 86 09:31 EST
From: MUKHOP%RCSJJ%gmr.com@CSNET-RELAY.ARPA
Subject: Creativity and Analogy
I believe that Jay Weber and I mostly agree on the relation between an
abstraction and an analogy as well as the relation between the respective
spaces of abstractions and analogies (linguistic "slipperiness"
notwithstanding). What I disagreed with is the notion of some absolute
abstraction hierarchy implicit in Jay's comments:
> ... Each analogy corresponds to a node in an abstraction hierarchy which
> relates all of the sub-categories, SO THE SPACE OF ANALOGIES MAPS ONTO THE
> SPACE OF ABSTRACTIONS,.....
The distinction between an absolute abstraction hierarchy and multiple
abstraction lattices (the term I used in an earlier communication) is central
to the discussion of creativity, that is if you accept that creativity is
the art of making INTERESTING analogies (or abstractions).
Implicit in this definition is a choice between candidate analogies--a choice
not available in an abstraction hierarchy. In all fairness, Jay never states
explicitly that the world can only be represented by a single abstraction
hierarchy.
> Proper scientists (by definition) do not construct theories about things
> that cannot be empirically examined, e.g. using structure mapping functions
> to model the communal descriptive definition of the English word
> "creativity". Scientists pick testable domains such as problem solving
> where you can test predictions of a particular theory with respect to
> correct problem solving.
I am surprised by Jay's definition of "proper scientists". As to modeling
the communal descriptive definition of "creativity", how else could one begin
to emulate this elusive property? I am surprised at his choice of a model
problem for "proper scientists"--something as general as
problem solving. If problem solving by induction or by analogy are proper
domains, why isn't problem solving by "creativity" acceptable? The fact that
the word means slightly different things to different people does not justify
its exclusion from the class of "proper domains". It is fairly obvious that
we have similar perceptions about what the word "creativity" means--how
else could we be having this discussion?
> In the past, scientists have left debate over
> such concepts as "truth" and "beauty" to philosophers, and I think we
> should do the same with "creativity" and "intelligence".
Who are the "we" in this sentence? If "we" refers to the AIList, doesn't
that include philosophers interested in AI?
> In Cognitive Science, researchers have too often exaggerated the impact
> of their work through the careless and unscientific use of such terms.
What is the lesson to be learnt here? Do not use words like "creativity"
that sound pompous? If I want to develop a program that has this interesting
property I will need to give this property a name. What would be more natural
than "creativity"?
------------------------------
Date: Wed, 16 Jul 86 21:54:42 PDT
From: larry@Jpl-VLSI.ARPA
Subject: Definitions of Life, Intelligence, and Creativity
Yes, even defining "intelligence" and "creativity" is very difficult, much
less studying their referents scientifically. But I think it's possible.
General systems theory helps, despite some extravagances and errors its
followers have committed. (Stavros McKrakis pointed out a paper to me by
Berliner that discusses some of the worst.) It resolves the difference
between reductionism and mysticism in a useful way, by raising the status of
information to a physical metric as important as space, time, charge, etc.
GST focuses on the fact that when parts are bound together, interaction
effects bring into existence characteristics which none of the parts possess.
Science is organized around this, with physics concentrating on atomic and
subatomic domains, chemistry concentrating on molecular interactions, and
so on. The universe is divided up into layers of virtual machines, and for
the same reason we do it in computer science: intellectual parsimony. The
biologist, for instance, doesn't have to know whether the hydrogen atoms in
a sample of water have one, two, or three neutrons. Water functions much the
same regardless. (There ARE fascinating and subtle differences some
researchers are investigating.)
Definition (and investigation) of intelligence and creativity are bound up
with another "impossible to define" word: life. "Life" is a label I give to
systems which maintain their existence in hostile environments by continuously
remaking themselves. Over a period of time (sometimes quoted as seven years
for humans), each organism exchanges all of its individual atoms with the
environment. Yet it still "lives" and "has the same identity" because its
pattern is (essentially) the same.
Obviously each organism must somehow "know" the pattern it must maintain
and the safe limits for change before corrective action is taken. Biologists
have concluded that genes (and gene-like adjuncts outside them) don't contain
enough information. Studies point to the conclusion that some of this
information is stored in the universe itself, in the form of natural laws
for instance.
Additionally the organism must be able to sense itself, compare itself with
the desired pattern, and take action to correct for deviations. In some cases
it acts on its environment (pushing away a danger, for instance); in others it
acts on itself (say, standing tall and bristling to frighten attackers).
"Intelligence" I would define in very general terms: storing information that
describes an organism's external and internal universe, comparing and other-
wise processing information in the service of its survival and health, and
controlling its action. (Obviously, this definition could be formalised and
made more precise, but it will do as a first cut.)
It may be protested that these terms are too general, that too many things
would thus be classified as alive and/or intelligent. I would say that it's
more important to subclassify intelligence and study the interactions and
limits of different kinds of intelligence, to study the physical bases of
intelligence. I see nothing wrong with saying that a computer program of the
Game of Life is really alive (in a very restricted and limited sense which can
be couched in formal terms) or that a virus has (very limited, specific kinds)
of intelligence. I see it as useful parsimony that intelligence is defined as
a multi-dimensional continuum with protozoa near one end, humans in the middle
on many continua, and who knows what at the upper end(s).
"Creativity" is a particular kind of intelligence. It can be recognized by its
products: ideas, actions, or objects that did not exist before. This is not
an absolute criterion; it's not all that rare for even those we recognize
as geniuses to create the same idea independently (or as independently as
humans working in the same field can be). There are middle and low grades of
creativity as well: the same "Chicken Kiev" jokes conceived by hundreds of
people on the same day, for instance.
Obviously, these new things don't appear from nowhere. There are conservation
laws in thought as well as in physics (though very different ones). These
novelties are made up of percepts/concepts already in memory, selected and
bound to create a system with emergent properties that convince us (or don't)
that we've come across something original. (I've gone into the dynamics of
creativity in a previous message and won't repeat myself.)
Larry @ jpl-vlsi
------------------------------
Date: 15 Jul 1986 11:06 EDT
From: ihnp4!mtuxo!hfavr@ucbvax.berkeley.edu
Subject: Gibson's theory of perception
I have not read Kelley's book, but as a psychologist I am familiar with
Gibson's "environmental" (or "ecological") theory of perception. In the
standard contemporary conceptualization of perception, from which Gibson
dissented, the input to the perceptual process is thought to be the
sensory impression; for example, in visual perception, the pattern of
retinal stimulation. According to the standard theory, the task of the
perceptual system is to derive, from that pattern, a representation
whose features are analogous to those features of the environment which
originally caused the retinal pattern. If the perceptual system is
thought of as physically limited to the eye and the brain, the standard
view is close to being a logical necessity. It is from this
conceptualization that Gibson dissented.
In Gibson's view, the perceptual system is not limited to the confines
of the organism, but extends into the environment. In the course of its
evolution, the organism has assimilated physical mechanisms present in
its natural environment to function as integral parts of its perceptual
system. Thus, the perceptual processes implemented in the eye and the
brain have evolved to function as the back-end of an integral process of
perception that begins at the perceived object. In this view, the
natural light sources present in the environment, the reflective
properties of the surfaces of objects, and the optical characteristics
of the atmosphere are as much a part of the human perceptual system as
the eyes and the brain. Thus, the retinal stimulation pattern is not the
input to perception, but rather an internal stage in the process. The
input to the perceptual process is the object itself; the output is the
organism's awareness of the object. The information contained in this
awareness is the original, and not a re- (or transformed), presentation
of the object to consciousness.
According to Gibson, the experimental psychologist's laboratory use of
two-dimensional representations, tachistoscopic stimuli, illusions, and
other materials that were not part of the ecological environment in
which the human perceptual system evolved, amounts to studying the human
perceptual system with some of its key parts removed. This is rather
like trying to find out how a computer works after pulling out some of
its chips, or deducing normal physiology from the results of the
surgical removal of organs. To yield valid information, the results of
such experiments must be interpreted with special attention to the fact
that one is not studying an intact or properly functioning system.
Adam Reed (ihnp4!npois!adam)
------------------------------
Date: Wed 16 Jul 86 16:56:49-PDT
From: John Myers <JMYERS@SRI-STRIPE.ARPA>
Subject: Re: AIList Digest V4 #166
I do not believe a concept of self is required for perception of objects.
Concepts needed for the perception of objects include temporal consistency,
directional location, and differentiation; semantic labeling (i.e., "meaning"
or "naming") is also useful. None of these require the concept of a self
who is doing the perceiving.
The robots I work with have no concept of self, and yet they are quite
successful at perceiving objects in the world, constructing an internal world
model of the objects, and manipulating these objects using the model. (Note
that a "world model" consists of objects' locations and names--temporal
consistency is assumed, and differentiation is implicit. Superior world
models include spatial extent and perceptual features.) I would argue that
they are moving by "reflex"--without true understanding of the "meaning" of
their motions--but they certainly are able to truly perceive the world around
them. I believe lower-level life-forms (such as amoebas, perhaps ants) work
in the same manner. When such-and-such happens in the world, run program FOO
which makes certain motions with the effectors, which (happens to) result in
"useful things" getting accomplished.
I think this describes what most of consciousness is: (1) being able to
perceive things in the environment, (2) knowing the meaning of these things,
and (3) being able to respond in an appropriate manner. Notice that all of
these concepts are vague; different degrees of 1,2,3 represent different
degrees of consciousness.
Self-consciousness is more than consciousness.
The concept of self is not required for conscious beings, and it certainly
is not required for perception.
John Myers~~
------------------------------
Date: Thu, 17 Jul 86 18:10:29 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Re: Representationalist perception
David Sher writes:
>I may be confused by this argument but as far as visual perception is
>concerned we are certainly not aware of the firing rates of our individual
>neurons. We are not even aware of the true wavelengths of the light that
>hits our eyes. We have special algorithms built into our visual hardware
>that implements an algorithm that decides based on global phenomena the
>color of the light in the room and automatically adjusts the colors of
>percieved objects to compensate (this is called color constancy). However
>this mechanism can be fooled. Given that we don't directly percieve
>the lightwaves hitting our eyes how can we be directly percieving objects
>in the world?
That's exactly the point. We DON'T perceive lightwaves, images or
neuron firing-rates; we directly perceive external objects. The light
waves, our eyes, and the neural mechanisms (which are MECHANISMS, not
algorithms) are not the objects of our perception; they are the MEANS
by which we perceive objects. This will seem implausible only if you
accept the diaphanous model of awareness.
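As a point of reference, the color-constancy compensation Sher describes
is often approximated by a simple "gray-world" correction. This is a rough
sketch of one textbook model only, not a claim about the actual neural
mechanism (which, as noted above, is a mechanism rather than an algorithm):

```python
# Gray-world color constancy: assume the average reflectance of a scene
# is neutral gray, and rescale each channel so the mean color becomes
# gray, discounting the illuminant.  (Illustrative model only.)
def gray_world(pixels):
    """Rescale each channel so the average color becomes neutral gray.

    pixels: list of (r, g, b) tuples under some unknown illuminant.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3  # target neutral level
    return [
        tuple(p[c] * gray / means[c] for c in range(3))
        for p in pixels
    ]

# A reddish illuminant has doubled the red channel of a gray scene;
# after correction the three channels come out equal again.
scene = [(0.8, 0.4, 0.4), (0.4, 0.2, 0.2)]
corrected = gray_world(scene)
print(corrected[0])
```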
Stephen Barnard writes:
>Consider what happens when we look at a realistic
>painting. We can, at one level, see it as a painting, or we can see
>it as a scene with no objective existence whatsoever. How could this
>perception possibly be interpreted as anything but an internal
>representation?
Sorry, I can't follow your argument. Of course, a realistic painting is
a representation; but it is not an INTERNAL representation. Gibson's
books do contain long discussions of paintings; but he specifically
distinguishes between looking at a painting (in which case you are
perceiving a representation of the object) and directly perceiving the
object itself.
>Gibson emphasized the richness of the visual stimulus,
>arguing that much more information was available from it than was
>generally realized. But to go from this observation to the conclusion
>that the stimulus is in all cases sufficient for perception is clearly
>not justified.
Gibson did not deny that there are SOME cases (for example, many
situations created in laboratories) in which the stimulus is
impoverished. His point was that these cases are the exception, rather
than the rule. Even if we agree that in those exceptional cases there
is some inference from background knowledge, this doesn't justify
concluding that in the normal cases, where the stimuli do uniquely
specify the external object, inference also goes on.
Since I can't possibly do justice to these issues in a short electronic
message, let me repeat my recommendation of Kelley's book. It
discusses all these issues in detail, and presents them very clearly.
I'm sure it will be of great value even to those who'll end up
disagreeing with its conclusions.
Eyal Mozes
BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ..!ucbvax!eyal%wisdom.bitnet
------------------------------
Date: Fri 11 Jul 86 10:42:05-CDT
From: David Throop <AI.THROOP@R20.UTEXAS.EDU>
Subject: Circular Reasoning as a Tool
CIRCULAR-REASONER: A Knowledge-Representation Tool for Couches
The classic question "How long will it take my brother-in-law and his
friend Larry to get the couch from the living room, around a tight corner
and into the guest bedroom?" has inspired several advances in AI knowledge
representation. The spatial and temporal aspects of the problem have
proved particularly difficult.
Early work in logic representations was able to show that (Couch X) could
be unified with (Furniture X) and push the intractable aspects back a level
of abstraction. Rule based systems were able to diagnose Larry's wrenched
back after the first attempt, and show that if anybody ever solved the
intractable spatial problems, they should leave the answer in the knowledge
base. Frame based systems showed that intractable problems could be pushed
back a further level through inheritance. Causal reasoning systems can
reason about all of the possible behaviors of the couch as it undergoes the
process of being shoved around the corner, and move the temporal and
spatial questions back into a meta-knowledge-base.
I propose to generalize these methods for pushing back hard problems. In
particular the program CIRCULAR-REASONER represents these four knowledge
representation systems as a linked list. This linked list can be NCONCed
to itself so that at each level, another representation is just around the
corner. Spatial and temporal aspects can be handled by routines that
access this list recursively, so that hard problems can be sent away and
never come back.
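In the spirit of the proposal: NCONCing a list to itself really does put
"another representation just around the corner" indefinitely. A sketch in
Python-flavored cons cells (editor's illustration; in real Lisp, (NCONC L L)
does this destructively and a naive traversal never terminates):

```python
# Lisp-style cons cells, and NCONC of a list onto itself, which turns
# the chain of representations into a cycle.
class Cons:
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def from_list(items):
    """Build a cons chain from a Python list."""
    head = None
    for item in reversed(items):
        head = Cons(item, head)
    return head

def nconc_self(head):
    """Destructively splice the list onto its own tail, as NCONC does."""
    node = head
    while node.cdr is not None:
        node = node.cdr
    node.cdr = head  # the last cdr now points back at the front
    return head

reps = nconc_self(from_list(["logic", "rules", "frames", "causal"]))
# Walk six steps: the representations repeat without ever bottoming out.
node, walk = reps, []
for _ in range(6):
    walk.append(node.car)
    node = node.cdr
print(walk)  # ['logic', 'rules', 'frames', 'causal', 'logic', 'rules']
```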
David Throop
------------------------------
End of AIList Digest
********************
∂22-Jul-86 1340 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #171
Received: from SRI-STRIPE.ARPA by SU-AI.ARPA with TCP; 22 Jul 86 13:40:14 PDT
Date: Tue 22 Jul 1986 10:02-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #171
To: AIList@SRI-STRIPE
AIList Digest Tuesday, 22 Jul 1986 Volume 4 : Issue 171
Today's Topics:
Queries - Geometric Placement & Oceanic or Weather References &
Common Lisp and Prolog on VM/CMS & KB System Verification and Validation,
Philosophy - Conservation Laws for Thought & Interactive Systems,
AI Tools - Catalogue of AI Tools
----------------------------------------------------------------------
Date: 19 Jul 1986 14:36-EDT
From: Carlos.Bhola@spice.cs.cmu.edu
Subject: Query - Geometric Placement
Query: Does anyone know about any expert system (developed
or under development) that relates to the placement
of geometric objects in a plane? Examples of the
problem would be pagination, VLSI layout, etc.
-- Carlos.
------------------------------
Date: 21 Jul 86 14:30:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: request for oceanic or weather references
Does anyone know of AI or expert systems work being done whose
application domain is oceanography or atmospheric science?
I'd appreciate any pointers - please send directly to me.
John Cugini <Cugini@NBS-VMS>
Institute for Computer Sciences and Technology
National Bureau of Standards
(301) 921-2431
[The 1985 IEEE conference on Expert Systems in Government had
a session on Environment and Weather. There was also a short
article on prospects of AI in weather forecasting in Aviation
Week & Space Technology, April 7, 1986, pp. 143-146. RuleMaster
has been used in some experimental atmospheric domains; contact
Radian Corp. There have also been some efforts at combining the
two fields -- expert systems to route ships; contact Dr. James
Mays of Micronautics, Inc., 70 Zoe Street, Suite 200, San Francisco,
CA 94107, (415) 896-6764. -- KIL]
------------------------------
Date: Sun, 20 Jul 86 18:30:10 edt
From: dsn@vorpal.cs.umd.edu (Dana Nau)
Subject: Common Lisp and Prolog on VM/CMS
We are looking for implementations of Common Lisp and Prolog that run under
VM/CMS, to be used for undergraduate/graduate instruction. I'd appreciate
any information people might have about the following:
(1) Does anyone know of a decent Common Lisp that runs under VM/CMS?
(IBM Lisp isn't suitable for our purposes, since it's rather different
from Common Lisp).
(2) Has anyone had experience using IBM Prolog? How natural would it be to
use for one accustomed to C-Prolog (or any other Prolog that uses the
syntax described in Clocksin and Mellish)?
(3) Does anyone know of a Prolog that runs under VM/CMS that's closer to
C-Prolog? Or, for that matter, does anyone know whether C-Prolog can
be made to run under VM/CMS?
------------------------------
Date: 22 Jul 86 12:50 EST
From: AIMAGIC%SCOM08.decnet@ge-crd.arpa
Subject: KB System Verification and Validation
I am an employee with GE Space Systems and am in the process of doing
work on an IR&D in the area of AI development methodologies, or the lack
thereof. As part of this effort, I am trying to determine what already
exists, what is available for purchase, and what can be had for the asking.
I have been doing extensive research in this area but have found little
information specifically directed at Knowledge-Based System Verification and
Validation. As a result, I have had to come up with my own concepts of what
can be done. For example, I have determined that a Knowledge-Based Software
Development Environment (KBSDE) consisting of a number of complementary
tools is essential. The tools themselves should support all phases of the
development life cycle. For KBS systems the questions become: What is its
life cycle? How closely does KBS system development parallel that of "normal"
procedural/algorithmic software? How do we account for rapid prototyping
while at the same time exercising the control necessary to create a reliable
software system?
I have made some progress in answering these questions and would be
willing to share my discoveries with the general community, if I could in
turn find out what is already out there and thus need not be rediscovered.
If it is at all possible, could this request be posted and responses sent
to me? If this could be done, I would gladly summarise my results for
general reading after sufficient responses were received.
Thanks:
Phil Rossomando
GE Space Systems Division
King of Prussia, PA.
AIMAGIC%SCOM08.decnet@ge-crd.arpa
------------------------------
Date: Mon, 21 Jul 86 13:05:55 PDT
From: ANDREWS%EAR@ames-io.ARPA
Subject: Conservation Laws for Thought??? (AIList Digest V4 #170)
In his July 16 missive on Life, Intelligence, and Creativity definitions,
Larry makes a statement (highlighted below in upper case) which is quite
provocative to a mechanical engineer like myself.
> "Creativity" is a particular kind of intelligence. It can be recognized by its
> products: ideas, actions, or objects that did not exist before. ...
> Obviously, these new things don't appear from nowhere. THERE ARE CONSERVATION
> LAWS IN THOUGHT AS WELL AS IN PHYSICS (THOUGH VERY DIFFERENT ONES).
Have I missed something somewhere? If there are "thought conservation laws",
could someone please provide me with some references? And if nothing has been
documented, could someone please fill me in? I understand the concept of
conservation of mass and energy (what goes in - what comes out = increase in
amount stored), and the "bookkeeping" associated with entropy production,
transfer, and storage, but I have never heard of an application of those ideas
to human thought. I'm undecided about whether to be excited or depressed.
Help!
Alison Andrews
NASA Ames Research Center
andrews%ear@ames-io.arpa
------------------------------
Date: Monday, 21 Jul 1986 11:42:07-PDT
From: cherubini%cookie.DEC@decwrl.DEC.COM (RALPH CHERUBINI
CX01-2/N22)
Subject: Interactive Systems
Response to 14 Jul "Architectures for interactive systems?"
For a very provocative couple of hours relating to modes of interaction,
user models, contexts...I suggest people get a copy of the videotape of
the movie "Being There". I have found it very suggestive, based as it
is on a central character who has a very limited repertoire of
responses. I think there is a great deal to be learned from the
models of interactions which are both explicit and implicit in the
film. I'd be interested to hear reactions.
Ralph Cherubini
Digital Equipment Corporation
[For those who haven't seen it, Being There stars Peter Sellers as a
retarded man who is forced into the world by the death of the wealthy
man who had sheltered him. He enters the world full-grown, with no
traceable past, dressed in expensive clothes, and interested in little
except gardening and watching television. His great talent is that
he listens very intently, with no hidden agenda of things he'd like to
say or places he'd rather be -- hence the title. People mistake his
laconic replies, particularly his references to gardening, as deep
philosophical thought -- as with the Eliza/Doctor program. He finds
shelter with a millionaire, a political "king-maker", who introduces
Sellers to all the right people and fosters this image of precious
eccentricity and deep insight. The few who realize Sellers' true
nature are either unable or unwilling to break the illusion. -- KIL]
------------------------------
Date: Fri, 18 Jul 86 16:57:30 -0100
From: Alan Bundy <bundy%aiva.edinburgh.ac.uk@Cs.Ucl.AC.UK>
Subject: Catalogue of AI Tools
THE CATALOGUE OF ARTIFICIAL INTELLIGENCE TOOLS
Alan Bundy
The Catalogue of Artificial Intelligence Tools is a kind of
mail order catalogue of AI techniques and portable software. Its
purpose is to promote interaction between members of the AI community.
It does this by announcing the existence of AI tools, and acting as a
pointer into the literature. Thus the AI community will have access
to a common, extensional definition of the field, which will: promote
a common terminology, discourage the reinvention of wheels, and act as
a clearing house for ideas and software.
The catalogue is a reference work providing a quick guide to
the AI tools available for different jobs. It is not intended to be a
textbook like the Artificial Intelligence Handbook. It,
intentionally, only provides a brief description of each tool, with no
extended discussion of the historical origin of the tool or how it has
been used in particular AI programs. The focus is on techniques
abstracted from their historical origins.
The original version of the catalogue was hastily built in
1983 as part of the UK SERC-DoI, IKBS, Architecture Study. It has now
been adopted by the UK Alvey Programme and is both kept as an on-line
document undergoing constant revision and refinement and published as
a paperback by Springer Verlag. Springer Verlag have agreed to reprint
the Catalogue at frequent intervals in order to keep it up to date.
The on-line and paperback versions of the catalogue meet
different needs and differ in the entries they contain. In
particular, the on-line version was designed to promote UK interaction
and contains all the entries which we received that meet the criteria
defined below. Details of how to access the on-line version are
available from John Smith of the Rutherford-Appleton Laboratory,
Chilton, Didcot, Oxon OX11 0QX. The paperback version was designed to
serve as a reference book for the international community, and does
not contain entries which are only of interest in a UK context.
By `AI techniques' we mean algorithms, data (knowledge)
formalisms, architectures, and methodological techniques, which can be
described in a precise, clean way. The catalogue entries are intended
to be non-technical and brief, but with a literature reference. The
reference might not be the `classic' one. It will often be to a
textbook or survey article. The border between AI and non-AI
techniques is fuzzy. Since the catalogue is to promote interaction
some techniques are included because they are vital parts of many AI
programs, even though they did not originate in AI.
By `portable AI software' we mean programming languages,
shells, packages, toolkits, etc., which are available for use by AI
researchers outside the group of the implementor, including both
commercial and non-commercial products. To obtain a copy of software,
do NOT write to us or the contributor of the entry; look at the
`Availability' field or write to the implementor. We hope that (s)he
will supply sufficient documentation for the system to be used by an
outsider, but do not expect non-commercial products to be as
professionally polished as commercial ones.
We have not included in the catalogue separate entries for
each slight variation of a technique, programming language, etc.
Neither have we always included details of how to obtain the software,
nor descriptions of AI programs tied to a particular application, nor
descriptions of work in progress. The catalogue is not intended to
be a dictionary of AI terminology nor to include definitions of AI
problems.
Entries are short (abstract length) descriptions of a
technique or piece of software. They include a title, list of
aliases, contributor's name, paragraph of description, information on
availability and references. The contributor's name is that of the
original contributor of the entry. Only occasionally is the
contributor of the entry also the implementor of the software or the
inventor of the technique. The `Availability' field or the reference
are a better guide to the identity of the implementor or inventor.
Some entries have been subsequently modified by the referees and/or
editorial team, and these modifications have not always been checked
with the original contributor, so (s)he should not always be held
morally responsible, and should never be held legally responsible.
If you would like to submit an entry for the catalogue then
please fill in the attached form and send it to:
Alan Bundy,
Department of Artificial Intelligence,
University of Edinburgh,
80 South Bridge,
Edinburgh, EH1 1HN,
Scotland.
Tel: 44-31-225-7774 ext 242
JANet: Bundy@UK.Ac.Edinburgh
ARPAnet: Bundy@Rutgers.Arpa
CATALOGUE OF ARTIFICIAL INTELLIGENCE TOOLS:
FORMAT FOR ENTRIES
Title:
Alias:
Abstract: <Paragraph length description of tool or technique>
Contributor: <Your name>
References: <Aim for the most helpful rather than the `classic' one>
Availability: <e.g. commercially available with documentation and support,
available as a research vehicle only with limited documentation>
Environment: <necessary supporting software/hardware>
From: <contact address for distribution, incl. telephone number and
electronic mail address if appropriate>
------------------------------
End of AIList Digest
********************
∂24-Jul-86 1402 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #172
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Jul 86 13:56:17 PDT
Date: Thu 24 Jul 1986 09:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #172
To: AIList@SRI-STRIPE
AIList Digest Thursday, 24 Jul 1986 Volume 4 : Issue 172
Today's Topics:
Seminars - COPYCAT: Modeling Creative Analogical Thought (Ames) &
DB and KB Interface for Structural Engineering (CMU) &
Automatic Debugging for Intelligent Tutoring Systems (UTexas) &
Our Cognitive Abilities Limit the Power of AI (SRI),
Workshop - Uncertainty in Knowledge-Based Systems,
Conference - 2nd AI Applications in Engineering
----------------------------------------------------------------------
Date: Mon, 21 Jul 86 11:18:22 pdt
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - COPYCAT: Modeling Creative Analogical Thought (Ames)
National Aeronautics and Space Administration
Ames Research Center
SEMINAR ANNOUNCEMENT
Joint RCR Branch / Ames AI Forum Seminar
SPEAKER: Douglas Hofstadter
Cognitive Science
University of Michigan
TOPIC: THE COPYCAT PROJECT: MODELING CREATIVE ANALOGICAL THOUGHT
The fluidity inherent in concepts in the human mind allows different
situations to be mapped onto each other and a type of translation set up
between them. Every analogy (i.e., mapping of this sort) involves some degree
of stress, and the more stress there is, the weaker the analogy is. For an
analogy to be created, there must be mechanisms that gauge the stress of any
tentative mapping. We consider the central mechanism to be an unconscious
mental metric (i.e., a type of distance relation between concepts), which
allows the mind to quickly sense close resemblances and to accept them as
valid "equations" making up part of the translation between situations, and
which conversely makes the mind balk at far-fetched "equations" and give up
on translations that cause too much stress.
In the Copycat project, the network embodying this metric is called the
"slipnet" -- the idea being that the proximity of two nodes in the slipnet
indicates the propensity of the corresponding concepts to "slip" into each
other. Copycat's slipnet is the core of our effort at modeling "creative
slippage", which we feel is how deep and insightful analogies come about.
We have carefully tailored the domain in which the Copycat program operates,
so that it contains all the essential qualities -- but no extra qualities --
of a domain in which highly creative (as well as highly mundane) analogies
can be made.
Ultimately, however, our project is not so much about analogies per se,
but about human concepts and how they are structured so as to form something
like a slipnet. In that sense, analogies are merely an instrument for us.
Any analogy created by a human reveals some aspects of a human slipnet, which
we then attempt to transfer to our model. Conversely, the analogies created
by the Copycat program reveal the accuracy of our artificial slipnet, and thus
of our model of concepts.
In summary, the Copycat project is an attempt to study the basis for the
fluidity of the human mind by exploring the world of creative analogies within
a carefully limited domain.
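A toy version of such a slipnet metric can be sketched in modern code. The sketch below is purely illustrative and is not the Copycat implementation: the concept names, edge weights, and slippage threshold are all invented for the example.

```python
import heapq

# Toy "slipnet": nodes are concepts, weighted edges give conceptual
# proximity (smaller weight = closer concepts). All names and weights
# here are invented for illustration.
EDGES = {
    ("a", "b"): 1.0,           # successor letters are close
    ("first", "last"): 2.0,    # opposites are moderately close
    ("letter", "number"): 5.0  # different categories are far apart
}

def distance(graph, src, dst):
    """Shortest-path distance between two concepts (Dijkstra)."""
    adj = {}
    for (u, v), w in graph.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    best = {src: 0.0}
    queue = [(0.0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            return d
        if d > best.get(node, float("inf")):
            continue
        for nxt, w in adj.get(node, []):
            nd = d + w
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")

def slippage_allowed(graph, src, dst, threshold=3.0):
    """A tentative mapping 'slips' only if the concepts are close enough."""
    return distance(graph, src, dst) <= threshold
```

Here the "stress" of a tentative mapping is modeled as shortest-path distance: nearby concepts slip into each other, while distant ones make the mapping balk, mirroring the behavior the abstract describes.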
DATE: Wednesday, July 30, 1986
TIME: 1:00 - 2:00 pm
PLACE: BLDG. 201 Main Auditorium
POINT(S) OF CONTACT: Eugene Miya PHONE NUMBER: (415) 694-6453
NET ADDRESS: eugene@ames-nas.arpa
or Alison Andrews (415) 694-6741 andrews%ear@ames-io.arpa
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. Do not
use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
Date: 21 Jul 86 10:19:22 EDT
From: Craig.Howard@cive.ri.cmu.edu
Subject: Seminar - DB and KB Interface for Structural Engineering (CMU)
FINAL PUBLIC ORAL EXAMINATION
for the degree of
DOCTOR OF PHILOSOPHY
Candidate: H. Craig Howard
Title of Dissertation: Interfacing Databases and Knowledge-Based Systems
for Structural Engineering Applications
Department: Civil Engineering
Time: 1:00 pm Tuesday, July 22, 1986
Place: Adamson Wing - Baker Hall
Database management systems and expert systems will be important components
of integrated computer-aided design systems. A powerful, adaptable
interface between these components is necessary to build an integrated
structural engineering computing environment. The thesis examines the basic
issues involved in interfacing expert systems with database management
systems and describes the architecture of a prototype system, KADBASE.
KADBASE is a flexible, knowledge-based interface in which multiple expert
systems and multiple databases can communicate as independent,
self-descriptive components within an integrated, distributed engineering
computing system. The thesis presents examples from three knowledge-based
systems to demonstrate the use of KADBASE in typical engineering design
applications.
------------------------------
Date: Mon 21 Jul 86 17:27:12-CDT
From: Bill Murray <ATP.Murray@R20.UTEXAS.EDU>
Subject: Seminar - Automatic Debugging for Intelligent Tutoring Systems
(UTexas)
I will be giving the following talk on Thursday from 12 to 1 in
Taylor 3.128. All graduate students and faculty are invited. Bring
your lunch if you like.
Automatic Program Debugging for Intelligent Tutoring Systems
by
William Murray
Program debugging is an important part of the domain expertise
required for intelligent tutoring systems that teach programming
languages. This talk explores the process by which student programs can
be automatically debugged in order to increase the instructional
capabilities of these systems. The research presented provides a
methodology and implementation for the diagnosis and correction of
nontrivial recursive programs. In this approach, recursive programs are
debugged by repairing induction proofs in the Boyer-Moore Logic.
The potential of a program debugger to automatically debug widely
varying novice programs in a nontrivial domain is proportional to its
capabilities to reason about computational semantics. By increasing
these reasoning capabilities a more powerful and robust system can
result. This research supports these claims by discussing the design,
implementation, and evaluation of Talus, an automatic debugger for LISP
programs and by examining related work in automated program debugging.
Talus relies on its abilities to reason about computational semantics
to perform algorithm recognition, infer code teleology and to
automatically detect and correct nonsyntactic errors in student programs
written in a restricted, but nontrivial, subset of LISP. Solutions can
vary significantly in algorithm, functional decomposition, role of
variables, data flow, control flow, values returned by functions, LISP
primitives used, and identifiers used. Solutions can consist of
multiple functions, each containing multiple bugs. Empirical evaluation
demonstrates that Talus achieves high performance in debugging widely
varying student solutions to challenging tasks.
------------------------------
Date: Wed 23 Jul 86 12:12:25-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Our Cognitive Abilities Limit the Power of AI (SRI)
OUR COGNITIVE ABILITIES LIMIT THE POWER OF AI
Jack Alpert (ALPERT@SCORE)
Stanford Knowledge Integration Lab
and
School of Education, Stanford University
11:00 AM, MONDAY, July 28
SRI International, Building E, Room EK228
"Expert Systems: How far can they go?" was a panel topic at AAAI
1985. Brian Smith described the limits of AI in terms of the
programmer's ability to know if his encoded model reflected the world
that his expert system was to manage. "We have no techniques.. to
study the ... relationship between model and world. We are unable...
to assess the appropriateness of models, or to predict when models
fail."
Most of us with icy road experience are convinced we know how to
recover from skids. In the talk I will prove that our skid recovery
algorithms work only on a small set of possible skids. Skids that lie
outside of this small set result in accidents. Our "inappropriate"
skid recovery models cause accidents. 20 years of driving experience
does not reveal the skid model's limitations. When we have been
building expert systems for 20 years, why should we be any better
prepared to perceive model inappropriateness?
The limited set of cognitive abilities that most people develop cannot
identify domains where models fail. I describe a temporal cognitive
ability most of us lack. Given the definition of such an ability, I
will briefly describe a line of research that explains why people
never develop the ability. Should this research be successful, we
will create new learning environments that enhance first cognitive
abilities, then modeling, and finally the power of AI systems.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: Thu, 24 Jul 86 12:21:25 edt
From: Beth Adelson <adelson@YALE.ARPA>
Subject: Workshop - Uncertainty in Knowledge-Based Systems
Forwarded from Ron Yager:
A workshop will be held at the AAAI meeting entitled
"Dealing with Uncertainty in Knowledge-Based Systems".
An open discussion.
Date: Thursday August 14.
Time: 9 am - noon.
Place: Richter Hall, Room 2
The workshop will be a lively open discussion on issues related to the
management of uncertainty. A number of prominent workers in the field
will attend and act as focal points.
All are invited to participate.
For further information contact:
Ronald R. Yager
(212) 249-2047
------------------------------
Date: Thu, 24 Jul 86 09:30:46 -0500
From: sriram@ATHENA.MIT.EDU
Subject: Conference - 2nd AI Applications in Engineering
CALL FOR PAPERS
SECOND INTERNATIONAL CONFERENCE ON
APPLICATIONS OF ARTIFICIAL INTELLIGENCE IN
ENGINEERING
AUGUST 4TH-7TH, 1987
BOSTON, MASSACHUSETTS
INTRODUCTION
Following the success of the first international conference in
Southampton, UK, the second international conference is to be held in
Boston during the first week of August. The first international
conference stimulated many presentations on both the tools and
techniques required for the successful use of AI in engineering and
many new applications. The organizing committee members anticipate
that the second conference will be even more successful and encourage
authors to submit papers.
OBJECTIVES
The purpose of this conference is to provide an international forum
for the presentation of work on the applications of artificial
intelligence to engineering problems. It also aims to encourage and
enhance the development of this most important area of research.
CONFERENCE THEMES
The following topics are suggested and other related areas will be
considered:
- Computer-aided design
- Planning and scheduling
- Constraint management
- Intelligent tutors
- Knowledge-based systems
- Knowledge representation
- Learning
- Natural language applications
- Cognitive modelling of engineering problems
- Database interfaces
- Graphical interfaces
- Knowledge-based simulation
- Model-based problem solving
SUBMISSION REQUIREMENTS
Authors are invited to submit a 1000 word extended abstract. This
abstract should have sufficient details, such as the type of knowledge
representation, problem solving strategies, and the implementation
language used, to permit evaluation by a committee consisting of
renowned experts in the field. The abstract should be accompanied by
the following details: author's name, address, affiliation, and the
person to whom all correspondence should be sent.
All abstracts should be submitted to Dr. R. Adey, Computational
Mechanics Inc., Suite 6200, 400 West Cummings Park, Woburn, MA 01801
(Tel. no. 617-933-7374), before November 1986. The notification of
acceptance will be sent before February 1st, 1987. Final acceptance
of papers will be based on the review of the complete paper.
Organizing Committee
General Chair Dr. R. Adey, CML Ltd.
Program Chair Dr. J. Connor, M. I. T.
Technical Chair Dr. D. Sriram, M. I. T.
Technical Program Co-ordinators
Dr. M. Tenenbaum, Fairchild Research Labs, USA
Dr. R. Milne, Intelligent Applications Ltd., UK
Dr. J. Gero, University of Sydney, Australia
Advisory Board
Leading researchers in the field
------------------------------
End of AIList Digest
********************
∂24-Jul-86 1721 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #173
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Jul 86 17:01:49 PDT
Date: Thu 24 Jul 1986 10:04-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #173
To: AIList@SRI-STRIPE
AIList Digest Thursday, 24 Jul 1986 Volume 4 : Issue 173
Today's Topics:
Philosophy - Perception & Understanding,
Humor - Expert Systems Parable
----------------------------------------------------------------------
Date: Mon, 21 Jul 86 14:01:37 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: followup on "understanding yellow"
The original version of the "understanding yellow" problem may be found
in:
Jackson, Frank, "Epiphenomenal Qualia," _Philosophical
Quarterly_ 32 (1982): 127-136.
with replies in:
Churchland, Paul M., "Reduction, Qualia, and the Direct
Introspection of Brain States," _Journal of Philosophy_
82 (1985): 8-28.
Jackson, Frank, "What Mary Didn't Know," _Journal of
Philosophy_ 83 (1986): 291-295.
(One of the reasons I stopped reading net.philosophy was that its
correspondents seemed not to know about what was going on in philosophy
journals!)
William J. Rapaport
Assistant Professor
Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260
(716) 636-3193, 3180
uucp: ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet
------------------------------
Date: Fri 18 Jul 86 14:57:17-PDT
From: Stephen Barnard <BARNARD@SRI-STRIPE.ARPA>
Subject: internal representations vs. direct perception
Eyal Mozes thinks that direct perception is right on, and that
internal representations either don't exist or aren't important.
I think direct perception is a vague and suspiciously mystical
doctrine that has no logical or physical justification.
Barnard:
>>Consider what happens when we look at a realistic
>>painting. We can, at one level, see it as a painting, or we can see
>>it as a scene with no objective existence whatsoever. How could this
>>perception possibly be interpreted as anything but an internal
>>representation?
Mozes:
>Sorry, I can't follow your argument. Of course, a realistic painting is
>a representation; but it is not an INTERNAL representation. Gibson's
>books do contain long discussions of paintings; but he specifically
>distinguishes between looking at a painting (in which case you are
>perceiving a representation of the object) and directly perceiving the
>object itself.
Barnard's reply:
Look, the painting is a representation, but we don't
perceive it AS a representation --- we perceive it as a scene. The
scene has NO OBJECTIVE EXISTENCE; therefore, we cannot perceive it
DIRECTLY. It exists only in our imaginations, presumably as internal
representations. (How else?) If the painter was skillful, the
representations in our imagination match his intention. To counter
this argument, you must tell me how one can "directly" perceive
something that doesn't exist. Good luck. On the other hand, it is
quite possible to merely represent something that doesn't exist.
Barnard:
>>Gibson emphasized the richness of the visual stimulus,
>>arguing that much more information was available from it than was
>>generally realized. But to go from this observation to the conclusion
>>that the stimulus is in all cases sufficient for perception is clearly
>>not justified.
Mozes:
>Gibson did not deny that there are SOME cases (for example, many
>situations created in laboratories) in which the stimulus is
>impoverished. His point was that these cases are the exception, rather
>than the rule. Even if we agree that in those exceptional cases there
>is some inference from background knowledge, this doesn't justify
>concluding that in the normal cases, where the stimuli do uniquely
>specify the external object, inference also goes on.
To the contrary, ambiguous visual stimuli are not rare exceptions ---
the visual stimulus is ambiguous in virtually EVERY CASE. Gibson was
fond of stereo and optic flow as modes of perception that can
disambiguate static, monocular stimuli (which are clearly ambiguous).
But he simply did not realize that such modalities are themselves
ambiguous. For example, I am not aware of Gibson discussing the
aperture problem, which describes ambiguity in optic flow. Similarly,
depth from stereo is unique once the image-to-image correspondence is
achieved, but, as we know from years of research on computational
stereo, solving the correspondence problem is not easy, primarily due
to the problem of resolving ambiguous matches. Similar problems
occur for every mode of visual perception.
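The point that stereo depth is determined only after the correspondence is resolved can be made concrete with a small sketch. The camera parameters and feature positions below are invented, and the code is not drawn from any system discussed here:

```python
# Standard pinhole stereo geometry: depth = f * B / disparity.
# Both constants are invented for illustration.
FOCAL_LENGTH = 700.0   # focal length in pixels
BASELINE = 0.1         # distance between the two cameras, in meters

def depth_from_disparity(x_left, x_right):
    """Once a correspondence is fixed, depth follows uniquely."""
    disparity = x_left - x_right
    return FOCAL_LENGTH * BASELINE / disparity

def candidate_depths(x_left, right_candidates):
    """Several identical-looking features on the epipolar line yield
    several equally plausible depths: the ambiguity lies in the
    matching, not in the geometry."""
    return [depth_from_disparity(x_left, xr) for xr in right_candidates]

# One left-image edge at x=400 could match any of three similar
# right-image edges, each implying a different depth:
depths = candidate_depths(400.0, [390.0, 380.0, 360.0])
```

Each candidate match is geometrically consistent on its own, which is exactly why the raw stimulus underdetermines the percept until the correspondence problem is solved.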
Gibson's hypothesis that the information for perception exists
completely in the stimulus is false, and the entire theory of direct
perception falls apart as a consequence.
------------------------------
Date: Sun, 20 Jul 86 09:25:43 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.ARPA>
Subject: Re: Searle and understanding
>I think the example shows that there are two related meanings
>of "understanding". Certainly, in a formal, scientific sense,
>ETS knows (understands-1) as much about yellow as anyone - all
>the associated wavelengths, retinal reactions, brain-states,
>etc. He can use this concept in formal systems, manipulate it,
>etc. But *something* is missing - ETS doesn't know
>(understand-2) "what it's like to see yellow", to borrow/bend
>Nagel's phrase.
>
> It's this "what it's like to be a subject experiencing X" that
> eludes capture (I suppose) by AI systems. And I think the
> point of the Chinese room example is the same - the system as
> a whole *does* understand-1 Chinese, but doesn't understand-2
> Chinese.
No, I think you're missing Searle's point.
What you call "understanding-2" is applicable only to a very small
class of concepts - to concepts of sensory qualities, which can't be
conveyed verbally. For the concept of a color, you don't even have to
stipulate ETS; any color-blind person with a fair knowledge of physical
optics (and I happen to be such a person) has "understanding-1", but
not "understanding-2", of the concept; I know the conditions which
cause other people to see that color, I can reason about it, but I
don't know what it feels like to see it. But for concepts which don't
directly involve sensory qualities (for example, for understanding a
language) there can be only "understanding-1".
Now, Searle's point is that this "understanding-1" (such as a native
Chinese speaker's understanding of the Chinese language, or my understanding of
colors) involves intentionality; it does not consist of manipulating
uninterpreted symbols by formal rules. That is why he denies that a
computer program can have it.
Those who think Searle sees something "magical" in human understanding
also miss his point. Quite on the contrary, he regards understanding as
a completely natural phenomenon, which, like all natural phenomena,
depends on specific material causes. To quote from his paper "Minds,
Brains and Programs": "Whatever else intentionality is, it is a
biological phenomenon, and it is as likely to be as causally dependent
on the specific biochemistry of its origins as lactation,
photosynthesis, or any other biological phenomena. No one would suppose
that we could produce milk and sugar by running a computer simulation
of the formal sequences in lactation and photosynthesis, but where the
mind is concerned many people are willing to believe in such a miracle
because of a deep and abiding dualism: the mind they suppose is a
matter of formal processes and is independent of quite specific
material causes in the way that milk and sugar are not".
Eyal Mozes
BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ..!ucbvax!eyal%wisdom.bitnet
------------------------------
Date: Sun, 20 Jul 86 17:34:03 PDT
From: kube%cogsci@BERKELEY.EDU (Paul Kube)
Subject: comment on Hayes, V4 #169
Pat Hayes <PHayes@SRI-KL> in AIList V4 #169:
>re: Searle's chinese room
>There has been by now an ENORMOUS amount of discussion of this argument, far
>more than it deserves.
Pat is right, for two reasons: the argument says nothing one way or
the other about the possibility of constructing systems which exhibit
any kind of behavior you like; and the point of the Chinese Room
argument proper--that computation is insufficient for intentionality--
had already been made to most everyone's satisfaction by Block, Fodor,
Rey, and others, by the time Searle went to press. (The question of
the sufficiency of computation plus causation, or of the sufficiency of
neurobiology, are further issues which have probably not been
discussed more than they deserve.)
>... ultimately
>whether or not he is right will have to be decided empirically, I
>believe.
Searle thinks this too, but it's not obvious what the empirical decision
would be based on. Since behavior and internal structure (by
hypothesis), and material (to avoid begging the question), are no
guide, it would seem that the only way to tell if a silicon system has
intentional states is by being one. The crucial empirical test looks
disturbingly elusive, so far as the brain-based scientific community
is concerned.
> When the robots get to be more convincing, let's
>come back and ask him again ( or send one of them to do it ).
Searle, of course, has committed himself to not being convinced by a
robot, no matter how convincing. But some elaboration of this
scenario is, I think, the right picture of how the question will be
answered (and not `empirically'): as increasingly perfected robots
proliferate, socio-political mechanisms for the establishment of
person-based rights will act in response to the set of considerations
present at the time; eventually lines will be drawn that most folks
can live with, and the practice of literal attribution of
psychological predicates will follow these lines. If this process is
(at least for practical purposes) unpredictable, then only time will
tell if Searle's paper will come to be regarded as a pathetically
primitive racist tract, or as an enlightened contribution to the
theory of the new order.
Paul Kube
kube@berkeley.edu
...ucbvax!kube
------------------------------
Date: Tue 22 Jul 86 13:30:10-PDT
From: Glenn Silverstein <SILVERSTEIN@Sushi.Stanford.EDU>
Subject: A Parable (about AI in large organizations)
Once upon a time, in a kingdom nothing like our own, gold was very
scarce, forcing jewelers to try and sell little tiny gold rings and
bracelets. Then one day a PROSPECTOR came into the capital sporting a
large gold nugget he found in a hill to the west. As the word went out
that there was "gold in them thar hills", the king decided to take an
active management role. He appointed a "gold task force" which one year
later told the king "you must spend lots of money to find gold, lest
your enemies get richer than you."
So a "Gold Center" was formed, staffed with many spiffy looking
Ph.D. types who had recently published papers on gold (remarkably similar
to their earlier papers on silver). Experienced prospectors had been
interviewed, but they smelled and did not have a good grasp of gold
theory.
The Center bought a large number of state-of-the-art bulldozers and
took them to a large field they had found that was both easy to drive on
and freeway accessible. After a week of sore rumps, getting dirty, and
not finding anything, they decided they could best help the gold cause
by researching better tools.
So they set up some demo sand hills in clear view of the king's
castle and stuffed them with nicely polished gold bars. Then they split
into various research projects, such as "bigger diggers", for handling
gold boulders if they found any, and "timber-gold alloys", for making
houses from the stuff when gold eventually became plentiful.
After a while the town barons complained loud enough and also got
some gold research money. The lion's share was allocated to the most
politically powerful barons, who assigned it to looking for gold in
places where it would be very convenient to find it, such as in rich
jewelers' backyards. A few bulldozers, bought from smiling bulldozer
salespeople wearing "Gold is the Future" buttons, were time shared across
the land. Searchers who, in their allotted three days per month of
bulldozer time, could just not find anything in the backyards of "gold
committed" jewelers were admonished to search harder next month.
The smart money understood that bulldozers were the best digging
tool, even though they were expensive and hard to use. Some backward
prospector types, however, persisted in panning for gold in secluded
streams. Though they did have some success, gold theorists knew that
this was due to dumb luck and the incorporation of advanced bulldozer
research ideas in later pan designs.
After many years of little success, the king decided the whole
pursuit was a waste and cut off all funding. The Center people quickly
unearthed their papers which had said so all along.
The end.
P.S. There really was gold in them thar hills. Still is.
by Robin Hanson (using silverstein@sushi)
[credit to M. Franklin for story ideas]
------------------------------
End of AIList Digest
********************
∂31-Jul-86 2159 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #174
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 31 Jul 86 21:59:38 PDT
Date: Thu 31 Jul 1986 20:00-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #174
To: AIList@SRI-STRIPE
AIList Digest Friday, 1 Aug 1986 Volume 4 : Issue 174
Today's Topics:
Seminars - Specification of Geographic Data Processing Requirements (UPenn) &
Constructing the Aspect Graph (GMR) &
RS: Distributed Sensory-based Robot Control (UMass) &
Decision-Making and Action in the Real World (SRI)
----------------------------------------------------------------------
Date: Thu, 24 Jul 86 15:05 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Specification of Geographic Data Processing
Requirements (UPenn)
Forwarded From: Glenda Kent <Glenda@upenn> on Thu 24 Jul 1986 at 14:40
FORMAL SPECIFICATION OF GEOGRAPHIC DATA PROCESSING REQUIREMENTS
Gruia-Catalin Roman
Department of Computer Science
Washington University
This presentation discusses a formal foundation for the specification of
Geographic Data Processing (GDP) requirements. The emphasis is placed on
modelling data and knowledge requirements rather than processing needs. A
subset of first order logic is proposed as the principal means for constructing
formalizations of the GDP requirements in a manner that is independent of the
data representation. Requirements executability is achieved by selecting a
subset of logic compatible with the inference mechanisms available in Prolog.
GDP-significant concepts such as time, space and accuracy have been added to
the formalization without losing Prolog implementability or
separation of concerns. Rules of reasoning about time, space and accuracy
(based on positional, temporal and fuzzy logic) may be compactly stated in a
subset of second order predicate calculus and may be easily modified to meet
the particular needs of a specific application. Multiple views of the data and
knowledge may coexist in the same formalization. The feasibility of the
approach has been established with the aid of a tentative Prolog implementation
of the formalism. The implementation also provides the means for graphical
rendering of logical information on a high resolution color display.
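[The idea of executable, representation-independent requirements can be
sketched outside Prolog as well. The Python fragment below (all predicate
and place names are invented for illustration, and this is not the authors'
formalism) shows facts qualified by a temporal validity interval and a
query that filters on time, in the spirit of the time-qualified
formalization described above. -- Ed.]

```python
# Illustrative sketch only: geographic facts carry a temporal validity
# interval, and queries filter on time. Predicate and place names are
# hypothetical, not taken from the work described above.

facts = [
    # (predicate, subject, value, valid_from, valid_to)
    ("land_use",  "parcel_17", "forest",   1950, 1979),
    ("land_use",  "parcel_17", "farmland", 1980, 9999),
    ("elevation", "parcel_17", 214,        0,    9999),
]

def holds(predicate, subject, year):
    """Return the values of `predicate` for `subject` valid at `year`."""
    return [v for (p, s, v, t0, t1) in facts
            if p == predicate and s == subject and t0 <= year <= t1]

print(holds("land_use", "parcel_17", 1975))  # ['forest']
print(holds("land_use", "parcel_17", 1985))  # ['farmland']
```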
Acknowledgements: This work was supported in part by Defense Mapping Agency
and by Rome Air Development Center under contract F30602-83-K-0065. The full
text of this presentation is available in "Formal Specification of Geographic
Data Processing Requirements," Proceedings of the 2nd International Conference
on Data Engineering, (Outstanding Paper Award), pp. 434-446, February 1986.
------------------------------
Date: Mon, 28 Jul 86 22:18 EST
From: "Steven W. Holland" <HOLLAND%RCSMPA%gmr.com@CSNET-RELAY.ARPA>
Subject: Seminar - Constructing the Aspect Graph (GMR)
Seminar at General Motors Research Laboratories (GMR):
An Algorithm for Constructing the Aspect Graph
Dr. Charles R. Dyer
of
Computer Science Department
University of Wisconsin
Madison, WI 53706
Thursday, August 14, 1986
The aspect graph of a solid object is a representation of the visibility
of the object's surfaces throughout surrounding viewing space. In this
talk we present tight bounds on the maximum size of aspect graphs and
give worst-case optimal algorithms for their construction, first in the
convex case and then in the general case. The algorithm for the general
case makes use of a new 3-D object representation called the aspect
representation or "asp". We also suggest several alternatives to the
aspect graph which require less space and store more information.
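[For a convex object the underlying idea is easy to sketch: a face with
outward normal n is visible from view direction v exactly when n . v > 0,
and an aspect is a set of simultaneously visible faces. The Python sketch
below enumerates the aspects of a unit cube by sampling view directions;
it is an illustration only, not the worst-case-optimal algorithm of the
talk. -- Ed.]

```python
# Toy illustration of "aspects" for a convex object (a unit cube): from
# view direction v, a face with outward normal n is visible when n . v > 0.
# Distinct visible-face sets over sampled view directions give the nodes
# of the aspect graph. This sampling sketch is NOT the worst-case-optimal
# construction described in the talk.
import itertools

# Outward unit normals of the cube's six faces.
normals = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def visible_faces(v):
    """Faces of the cube visible from view direction v."""
    return frozenset(name for name, n in normals.items()
                     if sum(a * b for a, b in zip(n, v)) > 0)

# Sample all nonzero view directions with components in {-1, 0, +1}.
aspects = {visible_faces(v)
           for v in itertools.product((-1, 0, 1), repeat=3)
           if v != (0, 0, 0)}
# The convex cube yields aspects of 1, 2, or 3 visible faces: 26 in all.
print(len(aspects))  # 26
```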
-Steve Holland, Computer Science Department
------------------------------
Date: Sun, 27 Jul 86 16:11 EST
From: Damian Lyons <LYONS%cs.umass.edu@CSNET-RELAY.ARPA>
Subject: Seminar - RS: Distributed Sensory-based Robot Control (UMass)
Hi: I know I'm a bit late on posting this; however, I would welcome
comments from interested persons out there:
July 25th, 1986.
Dept. of Computer and Information Science.
University of Massachusetts at Amherst.
Amherst, MA 01003.
RS: A Formal Model of Distributed Computation
For Sensory-based Robot Control.
Damian M. Lyons
Robot systems are becoming more and more complex, both in terms of
available degrees of freedom and in terms of sensors. It is no longer
possible to continue to regard robots as peripheral devices of a computer
system, and to program them by adapting general-purpose programming
languages. This dissertation analyzes the inherent
computing characteristics of the robot programming domain, and formally
constructs an appropriate model of computation. The programming of a dextrous
robot hand is the example domain for the development of the model.
This model, called RS, is a model of distributed computation: the basic
mode of computation is the interaction of concurrent computing agents. A
schema in RS describes a class of computing agents. Schemas are instantiated
to produce computing agents, called SIs, which can communicate with each
other via input and output ports. A network of SIs can be grouped atomically
together in an Assemblage, and appears externally identical to a single SI.
The sensory and motor interface to RS is a set of primitive, predefined
schemas. These can be grouped arbitrarily with built-in knowledge in
assemblages to form task-specific object models. A special kind of
assemblage called a task-unit is used to structure the way robot programs
are built.
The formal semantics of RS is automata theoretic; the semantics of an
SI is a mathematical object, a Port Automaton. Communication, port
connections, and assemblage formation are among the RS concepts whose
semantics can be expressed formally and precisely. A Temporal Logic
specification and verification methodology is constructed using the automata
semantics as a model. While the automata semantics allows the analysis of
the model of computation, the Temporal Logic methodology allows the top-down
synthesis of programs in the model.
A computer implementation of the RS model has been constructed, and used
in conjunction with a graphic robot simulation, to formulate and test
dextrous hand control programs. In general RS facilitates the formulation
and verification of versatile robot programs, and is an ideal tool with
which to introduce AI constructs to the robot domain.
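[A rough flavor of schemas, SIs, and assemblages can be given in a few
lines of Python. The class names, port wiring, and gripper example below
are invented for illustration; the actual RS semantics is automata-
theoretic, not this simple pipeline. -- Ed.]

```python
# Minimal sketch of the RS flavor of computation (interfaces invented for
# illustration). A schema instance (SI) reads input ports and writes
# output ports; an Assemblage wraps a network of SIs but presents the
# same external interface as a single SI.

class SI:
    """A schema instance: maps a dict of input ports to output ports."""
    def __init__(self, step):
        self.step = step            # function: inputs dict -> outputs dict

    def run(self, inputs):
        return self.step(inputs)

class Assemblage(SI):
    """A network of SIs, externally identical to a single SI."""
    def __init__(self, stages):
        self.stages = stages

    def run(self, inputs):
        data = inputs
        for si in self.stages:      # simple pipeline wiring, for brevity
            data = si.run(data)
        return data

# Hypothetical sensing/deciding schemas for a gripper.
sense = SI(lambda i: {"width": i["raw"] * 0.1})
decide = SI(lambda i: {"close": i["width"] < 2.0})

grasp_unit = Assemblage([sense, decide])   # behaves like one SI
print(grasp_unit.run({"raw": 15}))  # {'close': True}
```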
------------------------------
Date: Wed 30 Jul 86 17:43:10-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Decision-Making and Action in the Real World (SRI)
DECISION-MAKING AND ACTION IN THE REAL WORLD
John Myers (JMYERS@SRI-AI)
SRI International, Robotics Laboratory
11:00 AM, MONDAY, Aug. 4
SRI International, Building E, Room EK228
In this philosophical talk I will present my opinions as to how to
design an entity capable of operating in the real world, under limited
resources. These include limited time, information, and capabilities.
I will present models that stress heuristic aspects of behavior,
rather than traditional pre-planning techniques. As Terry Winograd has
said, "The main problem is to come up with what you are going to do in
the next five seconds."
After covering the problem and some traditional paradigms, I will
discuss three main concepts, along with a follow-up concept. These
are: the Theory of Stances, the Freudian Motivation Model, and the
Theory of Alternative Choices, along with the Principle of
Responsibility. These are contrasted against traditional approaches
by their emphasis on workability, as opposed to correctness.
A Stance consists of a high-level classification of a situation, along
with a high-level precompiled response script. Often there is
insufficient information in a prima facie situation to correctly
determine what is going on; or, the entity may simply not be able to
afford the overhead required to completely plan its behavior from
first principles. Taking a stance on the situation allows a habitual
response to be made; which at least is some action in the face of the
unknown, and at best, solves the problem with minimal effort.
The Freudian Motivation Model splits behavior generation into three
general processes: generation, policies, and judgment, corresponding
to the id, superego, and ego, respectively. Approved behaviors are
put on an intention queue or a performance queue, among others. The
model can be used to explain nonpurposeful or nonvolitional behaviors
such as posthypnotic acts or compulsions.
The Theory of Alternative Choices says that given a direct choice
between, for example, one of two actions, there are actually a number
of alternative decisions that must be considered. These include: do
nothing, wait, waffle, observe/consult, relegate, delegate, react,
transcend, or respond with a stance. One of these may be much more
appropriate in a resource-limited situation than directly planning out
a decision between the two original choices.
As a follow-up, the Principle of Responsibility says that the entity
(the computer) must be responsible for its actions and its
recommendations. In a certain sense, it must be willing to be wrong.
Even if it is totally convinced of the correctness of its situational
assessment, it must consider the possibility that things might go
badly, given a certain course of action--and it must use that as
further input to the decision process.
Examples will be interspersed in the talk.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
End of AIList Digest
********************
∂01-Aug-86 0034 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #175
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 1 Aug 86 00:34:28 PDT
Date: Thu 31 Jul 1986 22:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #175
To: AIList@SRI-STRIPE
AIList Digest Friday, 1 Aug 1986 Volume 4 : Issue 175
Today's Topics:
Administrivia - Digest Schedule,
Discussion Lists - Natural Language and Knowledge Representation &
PSYCHNET Address Correction,
Philosophy - Translations & Philosophy Journal Style &
Searle and Understanding & McLuhan's Sports Analogy &
Conservation of Information
----------------------------------------------------------------------
Date: Thu 31 Jul 86 17:13:19-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Digest Schedule
The digest has been a little delayed this week because I've been
ill. I'm getting back on my feet now, but starting August 13
I'll be doing some traveling for a month. UUCP net.ai will still
function, but AIList digests will be halted until mid September.
The following announcement of Brad Miller's NL-KR list may provide
interim relief for the network junkies among us. I appreciate his
willingness to take over AIList topics that deserve their own forum.
-- Ken Laws
------------------------------
Date: Mon, 28 Jul 86 16:10 EDT
From: Brad Miller <miller@UR-ACORN.ARPA>
Subject: New List formed on Natural Language and Knowledge Representation
As most of you know, Ken Laws has been getting swamped with AIList
duties, and has asked for help. In this vein, I am starting a separate
list to deal exclusively with the Natural Language and Knowledge
Representation subfields of AI.
Since the scope of this list will be much narrower than the AIList, I
welcome postings from disciplines throughout cognitive science that are
related to these areas. I feel that AI is more of a conglomeration of
several diverse fields than it is a field unto itself, so this sort of
diversity is necessary.
More specifically, here are some details:
You may submit material for the digest to nl-kr@rochester.arpa .
Digests are sent to Arpanet readers and USENET readers as appropriate.
(There are no current plans for forwarding to the UUCP news system.)
Administrative requests (including asking to be included on the
list) should be sent to nl-kr-request@rochester.arpa . Archival copies
of all digests will be kept; feel free to ask nl-kr-request for recent
back issues.
NL-KR is open to discussion of any topic related to natural
language (both understanding and generation) and knowledge
representation, both as subfields of AI. My own related interests are
primarily in
Knowledge Representation Natural Language Understanding
Discourse Understanding Philosophy of Language
Plan Recognition Computational Linguistics
Contributions are also welcome on topics such as
Cognitive Psychology (as related to NL/KR)
Human Perception (same)
Linguistics
Machine Translation
Computer and Information Science (as may be used to implement
various NL systems)
Logic Programming (same)
Contributions may be anything from tutorials to speculation. In
particular, the following are sought:
Abstracts Reviews
Lab Descriptions Research Overviews
Work Planned or in Progress Half-Baked Ideas
Conference Announcements Conference Reports
Bibliographies History of NL/KR
Puzzles and Unsolved Problems Anecdotes, Jokes, and Poems
Queries and Requests Address Changes (Bindings)
This list is in some sense a spin-off of the AIList, and as such, a
certain amount of overlap is expected. The primary concentration of this
list should be NL and KR, that is, natural language (be it
understanding, generation, recognition, parsing, semantics, pragmatics,
etc.) and how we should represent knowledge (acquisition, access,
completeness, etc. are all valid issues). Topics I deem to be outside
the general scope of this list will be forwarded to AIList (or other
more appropriate list) or rejected.
Bradford Miller
University of Rochester
Computer Science Department
miller@rochester.arpa
[Note: Grateful acknowledgement is given to Dr. Kenneth Laws of SRI for
permission to use an edited version of his AIList welcoming message.]
------------------------------
Date: Sat, 26 Jul 86 14:35:16 CDT
From: Psychnet <EPSYNET%UHUPVM1.BITNET@WISCVM.ARPA>
Reply-to: EPSYNET%UHUPVM1.BITNET@WISCVM.ARPA
Subject: PSYCHNET Correction
To contact psychnet the userid is
EPSYNET
and not
EPSYCHNET.
Yours truly,
Bob Morecock, Editor
------------------------------
Date: Fri, 25 Jul 86 17:19:20 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: "Metamorphosis" -- (A Common Sense Chinese Room Analogy).
The Steven Berkoff production of "Metamorphosis" has recently
returned to the London stage. A reviewer has pointed out that
the play lacks many of the levels of meaning in Kafka's work,
as a result of its transformation into a theatrical work. The
reviewer was probably thinking of the English translation of
the text from the original German and it has been pointed out
that the translation the original language was responsible for
a considerable loss of substance. Apparently, the true impact
of the work can only be grasped by a native speaker, who has
a background of the German culture.
Gordon Joly.
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%UK.AC.QMC.MATHS%UK.AC.QMC.CS@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj
also: joly%surrey.ac.uk@cs.ucl.ac.uk
[In Contact, Carl Sagan quotes a line about reading a translation
being similar to viewing a tapestry from the back. -- KIL]
------------------------------
Date: Mon, 28 Jul 86 14:27:03 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: philosophy journals
In article <8607211801.AA17444@ellie.SUNYAB>, rapaport@buffalo.CSNET
("William J. Rapaport") writes:
> The original version of the ... problem may be found in:
> Jackson, "Epiphenomenal Qualia," ←Philosophical Q.← 32(1982)127-136.
> with replies in:
> Churchland, "Reduction, Qualia, and the Direct Introspection of
> Brain States," ←J. of Philosophy← 82(1985)8-28.
> Jackson, "What Mary Didn't Know," ←J. of Philosophy← 83(1986)291-95.
> (One of the reasons I stopped reading net.philosophy was that its
> correspondents seemed not to know about what was going on in philosophy
> journals!)
Out of curiosity I hunted up the third article on the way back from lunch.
It's aggressive and condescending; any sympathy I might have felt for
the author's argument was repulsed by his sophomoric writing. I hope it's
not typical of the writing in philosophy journals.
------------------------------
Date: 25 Jul 86 10:21 PDT
From: Newman.pasa@Xerox.COM
Subject: Re: Searle and understanding
Eyal Mozes quotes from Searle to explain how Searle thinks about human
understanding and its biological nature. I had seen that passage of
Searle's before, and I think that this is a major part of my problem
with Searle. He accepts the biological nature of thought and mind, yet
cannot accept the proposition that a computer can reproduce the
necessary features of these items. I cannot see any reason to believe
that Searle's position is correct. More importantly, I can see many
reasons why his position is incorrect.
Searle uses milk and sugar to illustrate his point. I think that this is
a terrible comparison because milk and sugar are physical products of
biological processes while thought and mind are not. I also think that
Searle's attack on grounds of dualism is rather unfair. Even Searle must
agree that there are physical things and non-physical things in the
world (eg Volkswagens and numbers), and that milk and sugar are members
of the first class while thought and mind are members of the second.
Moreover, Searle's position apparently demands that there are features
of thought and mind that are dependent on features of very low-level
biological processes that make thought and mind happen. What evidence is
there that there are such features? I don't see that features of the
neurotransmitters (for example) can have an effect at any level other
than their own, particularly since any one biochemical event is unlikely
to have a large effect (my understanding is that large numbers of
biochemical events must occur in concert for anything to be apparent at
higher levels).
Admittedly there is as little evidence for my position as there is for
Searle's, but I think that there is more evidence against Searle than
there is against me. One last point is my paraphrase of John Haugeland's
comment in "Artificial Intelligence - The Very Idea": that brains are
merely symbol processors is a hypothesis and nothing more - until more
solid proof comes along.
>>Dave
------------------------------
Date: Wed, 23 Jul 86 14:08:41 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: Re: Computer Ethics (from Risks Digest)
> To quote one of McLuhan's defocussed analogies: "You must talk to the
> medium, not to the programmer. To talk to the programmer is like
> complaining to the hot-dog vendor about how badly your team is playing."
Whether he was talking about the broadcast or the computer industry, he
got the analogy wrong.
If the subject is broadcasting, the sports analogy to a "programmer"
is the guy that makes the play schedules. True, that person is not
responsible for program content, much less quality. But still, the
analogous position is not the hot-dog vendor.
If the subject is computers, the sports equivalent to a programmer
is the guy that designs the plays, i.e., the coach. He is indeed
responsible for how badly the team/computer plays. True, there may
be others that share the responsibility (like the players and equipment
vendor and the cpu and the I/O devices). But still, in computing,
a programmer bears at least partial responsibility for the computer's
(mis)behaviour.
------------------------------
Date: Thu, 24 Jul 86 03:30:03 PDT
From: larry@Jpl-VLSI.ARPA
Subject: "Proper" Study of Science, Conservation of Info
I have to start with materialism. What we mean today when we say the word
may have a common core with its use in previous centuries, but the details
are vastly different. Today we recognize not only wind and wave, steam and
steel as physical realities, but also quanta and field effects (and virtual
particles!)--subjects that pre-modern physicists and engineers would
consider downright mystical. And that would have been exactly true--in
their time. But we can precisely define these things now, quantify them,
experiment with and measure them. An even more radical difference is that
information--pattern, form--is now a part of physics, "a metric as important
as time, space, charge, etc."
The ability to quantify and measure pattern and shape has profound implica-
tions for the study of formerly mystical topics such as intelligence. It
means we can develop conservation laws for information, without which you
can't construct an essential ingredient of mathematics, equations. I'm not
implying I know what they are in any detail; people with other qualifica-
tions than mine must provide that. But the shape of the research seems to
be clear; cybernetics and information theory provide the basis.
For instance, there are several links between information and energy.
Higher frequency radiation has more bits per unit time. Mutation is the
result of external energy pushing genes beyond the ability of their binding
energies to maintain a stable structure. The impressing of information on
media (diskettes, molecules, brains) requires energy which can be measured.
Organization of information in structures (indexed or random files,
percepts, concepts) has time/energy trade-offs for different kinds of
accesses.
In a way, the information content of an entity is more important than its
material content. A decade from now it's likely that none of our bodies
will contain EVEN A SINGLE ATOM now in them. Even bones are fluid in
biological organisms; only when we die does matter cease to flow into and
out of us. We are NOT matter, or even energy, in the Antique sense. We are
patterns, standing waves in four (or more) dimensions.
Maintaining these patterns within safe parameters, or learning new safe
parameters, requires that our very molecules input data, store it, process
it--often in a recursive or self-referential or time-dependent fashion--and
act. (RNA is an excellent model for an advanced computer, for instance.)
And we can be thought of as a number of layers each with its unique informa-
tion needs: cells, tissue, organs, organisms, tribes.
One feature common to all intelligences, however rudimentary, is the ability
to create and manipulate analogs of the environment and of themselves.
Simulations are much cheaper and safer than experiments. This also gives a
clue as to how will impresses itself on the universe despite its immaterial
nature--because it isn't truly immaterial. Patterns are no more independent
of their matter/energy base than matter can exist without pattern. (That
is, the pattern of binding is what makes the difference between an atom and
a burst of radiant energy.) Because intelligence is a pattern of energy it
can affect matter and through triggering have effects enormously greater
than the triggering stimulus. A whim and a whistle can destroy a city--with
an avalanche.
The point of all this is that life and intelligence are no longer
supernatural--beyond the reach of formalism and experiment.
What is still a mystery to me is consciousness, but the understanding
doesn't seem beyond practical realization. It seems reasonable that con-
sciousness arises as a result of time-binding, recursion, and self-
reference. Perhaps multiple layers of vulnerability and adaptability are
important, too. (Our current robots and computers don't have any of these
and are thus poor candidates for models of intelligent mechanisms, much less
conscious ones. Thus I'd agree with one recent critic of some AI research.)
I can't agree that consciousness is an improper subject for scientific
study. Our inability to observe it directly (in a public as opposed to
subjective way) is shared by many other scientific fields. In fact the most
crucial subjects in the "hard" sciences must be studied indirectly: radia-
tion, atoms, viruses, etc. The difficulty of defining terms shouldn't be a
deterrent either. All developing research shares the same problem as the
underlying ideas change and solidify.
Some people object on emotional grounds. Many of them only succeed in
revealing their own limitations, not those of the rest of us. They are too
emotionally stunted to have the strength of humility; they must somehow be
above nature, superior. And too intellectually crippled to see the magic
and mystery in star-shine and bird flight, in ogive curve and infinitesimals
and the delicious simplicity of an algorithm.
Larry @ jpl-vlsi.arpa
------------------------------
End of AIList Digest
********************
∂04-Aug-86 0059 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #176
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 4 Aug 86 00:59:18 PDT
Date: Sun 3 Aug 1986 23:14-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #176
To: AIList@SRI-STRIPE
AIList Digest Monday, 4 Aug 1986 Volume 4 : Issue 176
Today's Topics:
Queries - Expert System to Catch Spies & Reimplementing in C &
Machine Translation & Financial Expert Systems &
Connectionist Approaches To Expert System Learning &
Snodgrass and Vanderwart Images & Forgy VAX/VMS OPS5 User Manual &
AI System Development Model,
AI Tools - VM Common Lisp & VM Prolog,
Expert Systems - Geometric Placement,
Patent - Hierarchical Knowledge System
----------------------------------------------------------------------
Date: Wed 23 Jul 86 21:39:50-CDT
From: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Expert system to catch spies
Today's (July 23, 1986) Wall Street Journal contains an
editorial by Paul M. Rosa urging the use of expert systems
to identify potential spies (actually traitors). Mr. Rosa
is a lawyer and a former intelligence analyst. Since
virtually all American traitors sell out for money, an
expert system embodying the expertise of trained
investigators could examine credit histories, court files,
registers of titled assets such as real estate and
vehicles, airline reservations, telephone records, income
tax returns, bank transactions, use of passports, and
issuance of visas. The system would look for suspicious
patterns and alert counter-intelligence officials for
further investigation.
There are some obvious considerations of privacy and
legality, but that is probably best discussed on another
bulletin board. Mr. Rosa says the system would be used
only on the 4.3 million people who hold security
clearances, who have consented to government scrutiny.
According to Mr. Rosa, "the obstacles to implementation are
not technological," and "the system could be implemented
quickly and cheaply." He predicts that the Soviets,
working through their extensive international banking
network, will use the same techniques to identify potential
recruits. He also says that the FBI has three expert
systems for monitoring labor rackets, narcotics shipments,
and terrorist activities.
Any reactions? Is this doable? It strikes me as more of a
data collection problem than an expert system problem. Is
there anyone who knows more about the FBI expert systems
and can talk about it?
Larry Van Sickle
cs.vansickle@r20.utexas.edu
Computer Sciences Dept.
U of Texas at Austin
Austin, TX 78712
------------------------------
Date: Sun, 27 Jul 86 20:20:06 cdt
From: marick%ccvaxa@gswd-vms.ARPA (Brian Marick)
Subject: Reimplementing in C
I've been hearing and seeing something for the past couple years,
something that seems to be becoming a folk theorem. The theorem goes
like this:
Many expert systems are being reimplemented in C.
If even the expert system companies are abandoning
"special-purpose AI languages" like Lisp and Prolog, surely nobody
else - other than academics and semi-academics - will use them.
I'm curious what the facts are. Which companies are reimplementing in
C (or other languages). Why? And what (roughly) does "reimplementing
in C" mean? What languages are used for development of new products?
What will happen in the future? Which companies are not reimplementing?
Why not?
(I'm concentrating on these particular companies because they're what the
"theorizers" concentrate on. Comments from others welcome.)
Brian Marick, Wombat Consort
Gould Computer Systems -- Urbana && University of Illinois
...ihnp4!uiucdcs!ccvaxa!marick
ARPA: Marick@GSWD-VMS
------------------------------
Date: Mon, 28 Jul 86 10:03:44 edt
From: Catherine A. Meadows <meadows@nrl-css.arpa>
Subject: machine translation
I am interested in learning about machine translation of natural languages.
Can anybody out there tell me what is going on in the field these days,
how much progress has been made, what systems are being built, who is working
on them, etc.?
Catherine Meadows
(send replies to meadows@nrl-css)
------------------------------
Date: Thu 31 Jul 86 06:29:18-PDT
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: Financial Expert Systems
I'd like to make a collection of references to work being done in AI
and finance, including trading, planning, market analysis, etc. I've
found the companies who are developing such systems internally to be
very secretive (not surprisingly), but I'd like to throw some light on
this area.
If anyone is doing work in these kinds of domains and would like to
talk about it, please send them on to me and I'll redistribute the
answers after digesting them. I see some especially interesting
problems in dealing with time and pattern recognition that occur
in these situations.
--ted
------------------------------
Date: 29 Jul 86 19:18:10 GMT
From: ucbcad!nike!lll-crg!seismo!mcvax!ukc!reading!brueer!ckennedy@ucbvax.berkeley.edu (C.M.Kennedy)
Subject: Connectionist Approaches To Expert System Learning
CONNECTIONIST APPROACHES TO EXPERT SYSTEM LEARNING
I wish to hear about any research on the following topic:
The application of connectionist models, in particular
feature discovery networks (e.g. kohonen nets) to the
problem of knowledge induction in expert systems.
Applications of connectionist models to other areas of symbolic
processing or knowledge representation are also of interest.
I would be pleased to receive (via mail) the following information:
1. A summary of what the research is attempting to achieve,
methods used and degree of success,
2. how to obtain more detailed documentation (e.g. technical
reports),
3. references on literature used for the research or which may
be of future interest.
I would also be interested to hear of anyone else with similar interests
who can contribute useful ideas or knows of any specific literature on
the subject.
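[For readers unfamiliar with the networks mentioned, a minimal
Kohonen-style map can be sketched in pure Python. All parameter choices
below are illustrative; real SOMs decay the learning rate and
neighborhood radius over time. -- Ed.]

```python
# Tiny 1-D Kohonen-style self-organizing map, as a sketch of the
# "feature discovery networks" mentioned above. The learning rate and
# neighborhood radius are held fixed here for brevity; real SOMs decay
# both during training.

units = [i / 10 for i in range(10)]     # evenly spaced 1-D weights

def train(data, passes=100, lr=0.2, radius=1):
    for _ in range(passes):
        for x in data:
            # Best-matching unit: the unit whose weight is closest to x.
            bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
            # Pull the BMU and its topological neighbors toward the input.
            lo, hi = max(0, bmu - radius), min(len(units), bmu + radius + 1)
            for i in range(lo, hi):
                units[i] += lr * (x - units[i])

# Two input clusters near 0.1 and 0.9; after training, some units settle
# near each cluster center, "discovering" the two features.
train([0.1, 0.12, 0.9, 0.88])
print(sorted(round(u, 2) for u in units))
```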
Catriona Kennedy
Brunel University
------------------------------
Date: 2 Aug 86 02:13:54 GMT
From: watcgl!fdfishman@ucbvax.berkeley.edu (Flynn D. Fishman)
Subject: Snodgrass and Vanderwart Images
I am not really sure where to post this request but I will give this a shot
and hope some one can help me.
I am looking for the digitized set of 260 commonly found objects
compiled by Snodgrass and Vanderwart for use in psychology.
Snodgrass, J. G. & Vanderwart, M. (1980). A standardized set of 260 pictures:
Norms for name agreement, image agreement, familiarity, and visual
complexity. Journal of Experimental Psychology: Human Learning and Memory,
6, 174-215.
Any format will do, but I would prefer if they were in a line format, i.e.
co-ordinates.
I would also appreciate if you could e-mail me a response as I do not get to
read as much news as I would like to.
Thanks very much.
--
FDFISHMAN (Flynn D. Fishman)
UUCP : ...!{decvax|ihnp4|clyde|allegra|utzoo}!watmath!watcgl!fdfishman
ARPA : fdfishman%watcgl%waterloo.csnet@csnet-relay.arpa
CSNET : fdfishman%watcgl@waterloo.csnet
------------------------------
Date: Sun, 3 Aug 86 15:11 EST
From: SECRIST%OAK.SAINET.MFENET@LLL-MFE.ARPA
Subject: Forgy VAX/VMS OPS5 User Manual
From: <SECRIST%OAK.SAINET.MFENET@LLL-MFE.Arpa> (Richard C. Secrist)
Date: Sun, 3-AUG-1986 15:12 EST
To: AIlist@SRI-STRIPE.ARPA
Message-ID: <[OAK.SAINET.MFENET].701C0320.008F2E4C.SECRIST>
Header-Disclaimer: I don't like my headers either.
Quote: "May your future be limited only by your dreams." -- Christa McAuliffe
Organization: Science Applications Int'l. Corp., Oak Ridge, Tenn., USA
CompuServe-ID: [71636,52]
X-VMS-Mail-To: ARPA%"AIlist@SRI-STRIPE.Arpa"
I have a copy of Forgy's 1981 OPS5 system in Lisp for use under the
public domain Franz Lisp for VMS and am trying to locate a user's
manual for it, and would appreciate any help the members of this list
could provide.
I believe the document is:
Forgy, C.L. OPS5 User's Manual. Carnegie-Mellon Univ.,
CMU-CS-78-116, 1981.
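[For readers who have not used OPS5: it is a forward-chaining production
system whose recognize-act cycle repeatedly matches rule conditions against
working memory and fires an action. A much-simplified sketch of that cycle
in Python follows; the rule and fact formats are illustrative inventions,
not OPS5 syntax, and real OPS5 adds conflict resolution, negated
conditions, and explicit make/remove actions.]

```python
def run_productions(rules, memory):
    """Repeatedly fire the first rule whose condition matches some
    working-memory element, until no rule can fire.

    rules:  list of (condition, action) pairs; condition is a predicate
            on a single fact, action maps the matched fact to a set of
            new facts.  (A simplification: here the matched fact is
            always consumed.)
    memory: a set of facts (tuples).
    """
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            for fact in list(memory):
                if condition(fact):
                    new_facts = action(fact)
                    memory.discard(fact)      # consume the matched element
                    memory.update(new_facts)  # add the rule's products
                    fired = True
                    break
            if fired:
                break
    return memory
```

For example, a single rule that decrements a counter fact runs until the
counter reaches zero and no condition matches.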
Thanks in advance to all.
Richard C. Secrist,
SECRIST%OAK.SAInet.MFEnet@LLL-MFE.Arpa
Science Applications Int'l. Corp.; 800 Oak Ridge Tpke; Oak Ridge, TN 37830
------------------------------
Date: 29 Jul 86 20:21:53 GMT
From: decvax!savax!king@ucbvax.berkeley.edu (king)
Subject: AI System Development Model
Sanders Associates, Inc., under contract to Rome Air Development Center, is
performing a study on the acquisition, management, and control of Artificial
Intelligence (AI) software. While the Department of Defense has established
numerous standards for the acquisition and development of conventional
software, such standards may not translate effectively to AI software.
The development of a model suitable for dealing with issues related to
acquisition, control and management of AI based software requires input from
experienced AI development team members. Sanders has developed a questionnaire
that explores the development process in these areas. Contributions to the
questionnaire and study will be acknowledged in the final report. Interested
professionals are invited to contact the following for a copy of the
questionnaire:
Ms. Sandy King
Sanders Associates, Inc. (MER24-1283)
Nashua, N.H. 03061
(603) 885-9242
uucp: !decvax!savax!king
------------------------------
Date: Wed, 23 Jul 86 11:10:18 pdt
From: George Cross <cross%wsu.csnet@CSNET-RELAY.ARPA>
Subject: Re: Common Lisp and Prolog on VM/CMS
Intermetrics is selling VM/CMS Common Lisp. The educational price was
recently $4000. The documentation indicates a quite complete implementation
with interfaces to Intermetrics C language available. There is an ad
for it on p32 of AI Magazine, V7, Number 1, Spring 1986.
Intermetrics
733 Concord Avenue
Cambridge, MA 02138
(617)-661-1840
Cognitive Systems may be selling CSI-LISP on top of IBM VM LISP. This
is a rewrite of T, a Scheme dialect. Details in AI Magazine, V6, Number
3, Fall 1985, page 248.
---- George
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
George R. Cross cross@wsu.CSNET
Computer Science Department cross%wsu@csnet-relay.ARPA
Washington State University faccross@wsuvm1.BITNET
Pullman, WA 99164-1210 Phone: 509-335-6319 or 509-335-6636
------------------------------
Date: Sat, 2 Aug 86 07:56:55 PDT
From: newton@vlsi.caltech.edu (Mike Newton)
Subject: VM Prolog
Regarding the recent inquiry about Common Lisp & Prolog under VM:
Though we run VM, one of the virtual machines is UTS -- Amdahl's port
of UNIX System V. Under this we run a locally modified version of C-Prolog
and are quite pleased with the performance. Warning -- UTS is *NOT*
cheap (but is very nice) !!
Our own Prolog compiler (for VM/CMS) is just nearing completion -- It can
compile roughly half of itself. However we do not expect that it will
be ready for release for a while. It follows Clocksin & Mellish as well
as can be done on an IBM mainframe (EBCDIC-->ASCII conversions and
such). When released it will be *fast* -- roughly 95 KLips on a 4341,
and currently around .8 MegaLips on a 3090 (using one processor!).
I believe the IBM prolog (Waterloo) uses a different syntax than is
commonly used.
Hope this has helped --
- mike
newton@cit-vax.caltech.edu {ucbvax!cithep,amdahl}!cit-vax!newton
Caltech 256-80 818-356-6771 (afternoons,nights)
Pasadena CA 91125 Beach Bums Anonymous, Pasadena President
------------------------------
Date: Fri, 25 Jul 86 14:27:33 PDT
From: trwrb!orion!gries@ucbvax.Berkeley.EDU (Harry A. Gries)
Reply-to: orion!gries@ucbvax.Berkeley.EDU (Harry A. Gries)
Subject: Re: Query - Geometric Placement
In article <522182201.bhola@spice.cs.cmu.edu>
Carlos.Bhola@SPICE.CS.CMU.EDU writes:
>
> Query: Does anyone know about any expert system (developed
> or under development) that relates to the placement
> of geometric objects in a plane? Examples of the
> problem would be pagination, VLSI layout, etc.
>
>
> -- Carlos.
Another application would be in creating district boundaries for
congressional representatives. The problem would be to section
the population of a state (currently California is debating this problem)
so that each district has approximately the same population. This must
be done without breaking city, county, or precinct boundaries. Also,
in order to assure a fairly homogeneous constituency, the aspect ratio
of the district must be limited. An optimal solution would have the
smallest sum of district perimeters.
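[The constraints above suggest a scoring function over candidate plans:
equal population, bounded aspect ratio, minimal total perimeter. A minimal
sketch in Python; the district representation and the hard aspect-ratio
cutoff are illustrative assumptions, not part of any existing system.]

```python
def district_score(districts, target_pop, max_aspect=2.0):
    """Score a candidate districting plan; lower is better.

    districts: list of dicts, one per district, with keys
               'population', 'perimeter', 'width', 'height'
               (width/height of the district's bounding box).
    """
    # Population balance: penalize deviation from the target.
    pop_penalty = sum(abs(d['population'] - target_pop) for d in districts)
    # Aspect ratio: reject elongated (gerrymandered) districts outright.
    for d in districts:
        aspect = max(d['width'], d['height']) / min(d['width'], d['height'])
        if aspect > max_aspect:
            return float('inf')
    # Prefer compact districts: smallest sum of perimeters.
    return pop_penalty + sum(d['perimeter'] for d in districts)
```

An optimizer (or expert system) would then search over partitions that
respect city, county, and precinct boundaries, minimizing this score.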
-- BTK
------------------------------
Date: 1 Aug 86 15:36 PDT
From: Shrager.pa@Xerox.COM
Subject: Note without comment
United States Patent # 4,591,983
Date: May 27, 1986
Title: Hierarchical Knowledge System
Filed: July 9, 1984
Abstract:
A knowledge system has a hierarchical knowledge base comprising a
functional decomposition of a set of elements into subject sets over a
plurality of hierarchical levels. [...] the operations include matching,
configuring, and expanding the user-defined set of elements [...] In a
specific embodiment, the elements are available components of a system
or item of manufacture [...].
Perpetrators: James S. Bennett & Jay S. Lark
Teknowledge, Inc.
Palo Alto, CA
------------------------------
End of AIList Digest
********************
∂09-Aug-86 0237 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #177
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Aug 86 02:37:06 PDT
Date: Fri 8 Aug 1986 22:52-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #177
To: AIList@SRI-STRIPE
AIList Digest Saturday, 9 Aug 1986 Volume 4 : Issue 177
Today's Topics:
Seminars - Helicopter Flight Path Control (Ames) &
Object Encapsulation and Inheritance (MIT) &
ACTORS in Concurrent Logic Programming Languages (MIT),
Conference - 4th International Conference on Logic Programming &
2nd Int. Rewriting Techniques and Applications &
1st Eurographics Intelligent CAD Systems
----------------------------------------------------------------------
Date: 8 Aug 1986 1121-PDT (Friday)
From: eugene@AMES-NAS.ARPA (Eugene Miya)
Subject: Seminar - Helicopter Flight Path Control (Ames)
National Aeronautics and Space Administration
Ames Research Center
Systems Autonomy Demonstration Program Seminar
Dr. Shoshana Abel
Expert-EASE Systems
Application of Evidential Reasoning to Helicopter
Flight Path Control
An innovative form of AI technology called evidential reasoning systems
will be presented for advanced helicopters. The reasoning system,
based on the mathematical theory of evidence by Glenn Shafer, centers
on automatic reasoning in order to derive the necessary conclusions
about feature extraction and obstacle avoidance. The advantage of
using this approach applied to advanced helicopters will be
discussed.
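[The core operation of Shafer's theory of evidence is Dempster's rule of
combination, which merges mass functions from independent sources. A
minimal sketch in Python, using frozensets of hypotheses as focal
elements; the example masses below are illustrative, not drawn from the
helicopter application.]

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    m1, m2: dicts mapping frozensets of hypotheses to masses
            (each summing to 1).  Assumes the sources are not
            totally conflicting (normalizer k > 0).
    """
    combined = {}
    conflict = 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b  # mass falling on the empty set
    k = 1.0 - conflict
    # Renormalize the non-conflicting mass.
    return {s: v / k for s, v in combined.items()}
```

Two sources that each partly support hypothesis A reinforce one another:
combining m1 = {A: 0.6, {A,B}: 0.4} with m2 = {A: 0.5, {A,B}: 0.5} yields
mass 0.8 on A.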
Date: Thursday, 8/21/86
Time: 3:00 pm
Location: NASA, Ames Research Center, Bldg. 244 room 103
Inquiries: David Jared, (415) 964-6533 jared%plu@ames-io.arpa
VISITORS ARE WELCOME: Register and obtain vehicle pass at Ames Visitor
Reception Building (N-253) or the Security Station near Gate 18. Do not
use the Navy Main Gate.
Non-citizens (except Permanent Residents) must have prior approval from the
Director's Office one week in advance. Submit requests to the point of
contact indicated above. Non-citizens must register at the Visitor
Reception Building. Permanent Residents are required to show Alien
Registration Card at the time of registration.
------------------------------
Date: 6 Aug 1986 1400-EDT
From: ALR@XX.LCS.MIT.EDU
Subject: Seminar - Object Encapsulation and Inheritance (MIT)
DATE: THURSDAY, AUGUST 7, 1986
REFRESHMENTS AT 1:45 PM
TALK AT 2:00 PM
PLACE: NE43-512A
Encapsulation and Inheritance in Object-Oriented Programming Languages
Alan Snyder
Hewlett Packard
Palo Alto, Ca.
Object-oriented programming is a practical and useful programming methodology
that encourages modular design and software reuse. Most object-oriented
programming languages support data abstraction by preventing an object from
being manipulated except via its defined external operations. In most
languages, however, the introduction of inheritance severely compromises the
benefits of this encapsulation. Furthermore, the use of inheritance itself is
globally visible in most languages, so that changes to the inheritance
hierarchy cannot be made safely. We examine the relationship between
inheritance and encapsulation and develop requirements for full support of
encapsulation with inheritance.
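[A small illustration of the problem the abstract describes, in Python;
the classes are invented for this example. A subclass that reaches into
its parent's internal representation works only as long as that
representation never changes, so inheritance quietly widens the parent's
effective interface.]

```python
class Counter:
    """Parent class: the event list is meant to be an internal detail."""
    def __init__(self):
        self._events = []          # internal representation
    def record(self, event):
        self._events.append(event)
    def count(self):
        return len(self._events)

class UniqueCounter(Counter):
    """Subclass that depends on the parent's internals: if Counter is
    later reimplemented to store only a running total -- a change
    invisible through its public interface -- this subclass breaks."""
    def count(self):
        return len(set(self._events))  # reads the inherited attribute
```

Recording 'a', 'a', 'b' gives a UniqueCounter count of 2, while the
parent's own count of the same object is 3; the subclass's correctness
hinges entirely on Counter's private list.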
Host: Prof. Liskov
------------------------------
Date: Wed, 6 Aug 1986 14:38 EDT
From: PJ@OZ.AI.MIT.EDU
Subject: Seminar - ACTORS in Concurrent Logic Programming Languages (MIT)
****** SEMINAR ******
THURSDAY, AUGUST 7
8TH FLOOR PLAYROOM
11:00 am
******** ACTORS *******
IN CONCURRENT LOGIC PROGRAMMING LANGUAGES
*****************************
KENNETH KAHN
Knowledge Systems Area
Intelligent System Laboratory
XEROX PALO ALTO RESEARCH CENTER
ABSTRACT:
Concurrent logic programming languages support object-oriented
programming with a clean semantics and additional programming constructs
such as incomplete messages, unification, direct broadcasting, and
concurrency synchronization. While these languages
provide excellent computational support, we claim they do not provide
good notation for expressing the abstractions of object-oriented
programming. We describe a preprocessor that remedies this problem.
The resulting language, Vulcan, is then used as a vehicle for exploring
new variants of object-oriented programming which become possible in
this framework.
Host: Prof. Carl Hewitt
------------------------------
Date: 1 August 1986, 23:14:13 EDT
From: Jean-Louis Lassez <JLL@ibm.com>
Subject: Conference - 4th International Conference on Logic Programming
CALL FOR PAPERS
Fourth International Conference On Logic Programming
University of Melbourne, Australia
Late May 1987
The conference will consider all aspects of logic
programming, including, but not limited to:
Theory and Foundations
Architectures and Implementations
Programming Languages and Methodology
Databases
Knowledge Representation, Reasoning and Expert Systems
Relations to other computation models, programming
languages, and programming methodologies.
Of special interest are papers discussing novel applications
and applications that address the unique character of logic
programming.
Papers can be submitted under two categories, short - up to
2000 words, and long - up to 6000 words. Submissions will
be considered on the basis of appropriateness, clarity,
originality, significance, and overall quality.
Authors should send six copies of their manuscript, plus an
extra copy of the abstract to:
Jean-Louis Lassez
ICLP Program Chairman
IBM T.J. Watson Research Center
H1-A12
P.O. Box 218
Yorktown Heights, NY 10598
USA
Deadline for submission of papers is December 1, 1986.
Authors will be notified of acceptance or rejection by
February 28, 1987. Camera ready copies are due April 1st,
1987.
General Chairman:
John Lloyd
Department of Computer Science
University of Melbourne
Parkville, Victoria 3052
Australia
Program Committee
Ken Bowen, Syracuse, USA
Keith Clark, Imperial College, U.K.
Jacques Cohen, Brandeis, USA
Veronica Dahl, Simon Fraser University, Canada
Maarten van Emden, University of Waterloo, Canada
Koichi Furukawa, ICOT, Japan
Ivan Futo, SZKI, Hungary
Seif Haridi, SICS, Sweden
Jean-Louis Lassez, Yorktown Heights, USA
Giorgio Levi, University of Pisa, Italy
Jacob Levy, Weizmann Institute, Israel
John Lloyd, University of Melbourne, Australia
Fumio Mizoguchi, Science University of Tokyo, Japan
Fernando Pereira, SRI International, USA
Antonio Porto, University of Lisbon, Portugal
Marek Sergot, Imperial College, U.K.
David Warren, Manchester University, U.K.
------------------------------
Date: Wed, 16 Jul 86 21:10:00 -0200
From: mcvax!crin!lescanne@seismo.CSS.GOV (Pierre LESCANNE)
Subject: Conference - 2nd Int. Rewriting Techniques and Applications
[Forwarded from TheoryNet by Laws@SRI-STRIPE.]
CALL FOR PAPERS
RTA-87
2nd INTERNATIONAL CONFERENCE
on
REWRITING TECHNIQUES AND APPLICATIONS
May 25-27 1987 Bordeaux, France
TOPICS
In May 1985 the First International Conference on Rewriting Techniques
and Applications met at Dijon. The conference was a great success, attracting
over 100 researchers working on rewriting techniques. The second conference
will take place at Bordeaux, another city famous for its wine, in May 1987.
Papers concerning the theory and applications of term rewriting are solicited
for the conference. Areas of interest include the following, but authors are
encouraged to submit papers on other topics as well.
Equational Deduction Functional and Logic Programming
Computer Algebra Automated Theorem Proving
Unification and Matching Algorithms Rewrite Rule Based Expert Systems
Algebraic and Operational Semantics Semantics of Nondeterminism
Theory of general rewriting systems Rewriting and Computer Architecture
Specification, Transformation, Validation and Generation of Programs
SUBMISSION
Each submission should include 11 copies of a one page abstract and 4
copies of a full paper of no more than 15 double spaced pages. Submissions are
to be sent to one of the Co-Chairmen:
For Europe: Pierre Lescanne, RTA-87, Centre de Recherche en
Informatique de Nancy, Campus Scientifique, BP 239,
54506 Vandoeuvre-les-Nancy Cedex, FRANCE.
For other countries: David Plaisted, RTA-87,
Department of Computer Science,
New West Hall 035-A,
University of North
Carolina at Chapel Hill,
Chapel Hill NC 27514, USA.
Paper selection will be done by circulating abstracts to all members of the
program committee, with each full paper assigned to several committee members
having appropriate expertise. In addition to selected papers, a few invited
lectures will be given by well-known researchers who have made major
contributions in the field:
INVITED LECTURERS
J-P. Jouannaud, University of Paris-Sud, France,
D. Musser, GE Research and Development Laboratory, Schenectady, USA,
M. O'Donnell, University of Chicago, Illinois, USA.
SCHEDULE
Paper submission deadline is December 15, 1986.
Acceptance/Rejection by January 25, 1987.
Camera ready copy by March 9.
Proceedings will be distributed at the conference and published by Springer
Verlag in the LNCS series.
PROGRAM COMMITTEE
B. Buchberger, University of Linz, Austria,
R. Book, University of Santa Barbara, USA,
B. Courcelle, University of Bordeaux, France,
N. Dershowitz, University of Illinois, USA,
J. Guttag, MIT, USA,
D. Kapur, General Electric, USA,
P. Lescanne, (Program co-Chairman) CRIN, France,
R. Loos, University of Karlsruhe, FRG,
D. Plaisted, (Program co-Chairman), University of North Carolina, USA
G. Plotkin, University of Edinburgh, UK,
M. Stickel, SRI-International, USA.
LOCAL COMMITTEE
B. Courcelle, R. Cori, M. Claverie
For information send mail on UUCP to: mcvax!inria!crin!lescanne
or on ARPAnet to: pierre@larch.
------------------------------
Date: Mon, 04 Aug 86 23:47:37 -0500
From: sriram@ATHENA.MIT.EDU
Subject: Conference - 1st Eurographics Intelligent CAD Systems
CALL FOR PAPERS
FIRST EUROGRAPHICS WORKSHOP ON
INTELLIGENT CAD SYSTEMS
APRIL 22-24, 1987
NOORDWIJKERHOUT, THE NETHERLANDS
ORGANIZED BY
CENTRE FOR MATHEMATICS AND COMPUTER SCIENCE
AMSTERDAM, THE NETHERLANDS
SPONSORED BY
EUROGRAPHICS
AIM
Today, one of the main thrusts in CAD research has become the so-called
intellectualization of CAD systems, primarily as an application of
knowledge engineering. This research has two aspects:
intellectualization of CAD systems through intelligence that helps
designers, and intellectualization through problem-solving ability. The
former approach may be achieved by developing intelligent user
interface concepts, for instance, so that the designer can exercise
his/her full ability, whereas the latter may be achieved by developing
systems, such as expert systems, which can solve various engineering
problems.
However, it is clear that neither of these two approaches alone can
fulfill the requirements of future CAD systems. It is necessary to
develop an integrated environment for intelligent CAD systems using
intelligent interactive techniques. Therefore, we pursue the integration
of these two approaches in order to build intelligent CAD systems, and
we discuss issues such as:
- configuration of intelligent CAD systems,
- tools and techniques for developing those systems,
- methodology for development.
We plan a series of three workshops beginning in 1987
- 1987: Theoretical and methodological aspects in developing
an intelligent CAD system.
- 1988: Architecture of an intelligent CAD system.
- 1989: Practical experiences and evaluation of an intelligent
CAD system.
SCOPE OF THE FIRST WORKSHOP IN 1987
1. Principle and configuration of intelligent CAD systems
2. Theory and methodology of development
3. Available tools for development, such as intelligent user
interface management systems and tools for problem solving
in design
4. Use and role of intelligent user interface systems in an
intelligent CAD environment
STYLE OF WORKSHOP
Approximately 10 invited papers and 10 refereed papers will be
presented. Participation will be limited to roughly 50. In this workshop,
theoretical and methodological aspects are emphasized. The results of
this workshop will be published by Springer-Verlag.
SCHEDULE FOR THE WORKSHOP
December 1, 1986: Deadline for extended abstracts up to 1000 words
January 1987: Notification of acceptance
March 1987: Acceptance of participation
April 22-24, 1987: Workshop (Full papers are submitted on site)
July 1987: Deadline for final manuscripts for publication
ORGANIZATION
Co-chairmen P. J. W. ten Hagen (CWI, NL)
T. Tomiyama (CWI, NL)
Secretary P. J. Veerkamp (CWI, NL)
Program Committee
F. Arbab (USC, USA)
P. Bernus (Hungarian Academy of Sciences, H)
A. Bijl (University of Edinburgh, UK)
J. Encarnacao (TH Darmstadt, D)
S. J. Fenves (CMU, USA)
D. Gossard (MIT, USA)
F. Kimura (The University of Tokyo, J)
T. Kjelberg (Royal Institute of Technology, S)
M. Mac an Airchinnigh (University of Dublin, IR)
K. MacCallum (University of Strathclyde, UK)
F. J. Schramel (Philips, NL)
D. Sriram (MIT, USA)
T. Takala (Helsinki Technical University, SF)
F. Tolman (TNO, NL)
H. Yoshikawa (The University of Tokyo, J)
INFORMATION
Please submit an extended abstract up to 1,000 words to:
Ms. Marja Hegt
Centre for Mathematics and Computer Science
Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
Tel: (Overseas) +31-20-592-4058
Usenet: marja@mcvax.UUCP
------------------------------
End of AIList Digest
********************
∂09-Aug-86 0431 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #178
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Aug 86 04:31:09 PDT
Date: Fri 8 Aug 1986 23:06-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #178
To: AIList@SRI-STRIPE
AIList Digest Saturday, 9 Aug 1986 Volume 4 : Issue 178
Today's Topics:
Queries - Hitachi Software Design ES & 3-D Geometry Theorem Prover,
Expert Systems - OPS5 Manual & Government Systems,
Humor - Geometric Placement and Gerrymandering,
Review - Computing with Neural Circuits,
Publishing - Petrocelli Books,
Programming Languages - Functional Programming Bibliography,
Philosophy - Conservation of Information & Rhetoric
----------------------------------------------------------------------
Date: Mon 4 Aug 86 20:35:31-CDT
From: CMP.BARC@R20.UTEXAS.EDU
Subject: Query: Hitachi Software Design ES
I am looking for information on an expert system called MDL/MAD for large-
scale software design. I've heard a little bit about it already. It has
about 1500 rules concerned with relationships among design data, the design
decision-making procedure and the format for expressing design information.
It has been claimed to reduce design errors by 40% and specification/
correction time by 80%. It is still under development at Hitachi's Software
Development Laboratory and is two years away from commercial release.
The questions are: What does it really do and how does it work?
I would also be interested in related systems ICAS (Hitachi), DEA/I (NEC)
and SDEM/SDSS (Fujitsu).
Dallas Webster
------------------------------
Date: Fri, 8 Aug 86 19:15:11 pdt
From: dan@ads.ARPA (Dan Shapiro)
Subject: looking for a theorem prover in 3D geometry
A friend of mine is looking for pointers to work on the topic of theorem
proving in 3D geometry. The application is in AI applied to the
elucidation of crystal structures within organic chemistry.
He would also be interested in pointers to CAD-like programs that
allow construction and visualization of repetitive lattice structures.
If anyone has information that would help out, please respond to
me directly as Dan@ads-unix.Arpa
Dan Shapiro
------------------------------
Date: 4 Aug 86 17:21:13 EDT
From: Lee.Brownston@A.CS.CMU.EDU
Subject: OPS5 manual
Write to the Department of Computer Science, Carnegie-Mellon University,
Pittsburgh, PA 15213 to request the OPS5 User's Manual. Another source of
OPS5 information is "Programming Expert Systems in OPS5" by Brownston,
Farrell, Kant, and Martin (Addison-Wesley, 1985).
------------------------------
Date: Mon 4 Aug 86 22:38:23-PDT
From: Laws@SRI-STRIPE.ARPA
Subject: Expert Systems - The New Cop on the Beat
The FBI has developed Big Floyd, an expert system to assist in criminal
investigations. Similar programs are being developed to catch drug
smugglers and target potential terrorists. The EPA wants to identify
polluters; the Treasury Department is looking for money-laundering
banks; the Energy Department would like to find contractors who cut
corners; the Customs service is after drug smugglers; the IRS is
developing a system to spot tax cheaters; the Secret Service is working
on a classified system to point out potential presidential assassins;
and the FBI's National Center for the Analysis of Violent Crimes is
developing expert systems to identify potential serial killers,
arsonists, and rapists. Systems to target counterfeiters and bombers
are also being built. -- Michael Schrage, The Washington Post National
Weekly Edition, Vol. 3, No. 40, August 4, 1986, p. 6.
------------------------------
Date: 4 Aug 86 09:48:56 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Humor: Re: [Query - Geometric Placement]
Date: Fri, 25 Jul 86 14:27:33 PDT
From: trwrb!orion!gries@ucbvax.Berkeley.EDU (Harry A. Gries)
> Query: Does anyone know about any expert system (developed
> or under development) that relates to the placement
> of geometric objects in a plane? [* * *]
Another application would be in creating district boundaries for
congressional representatives. [* * *]
What's that rustling sound I hear from across the river, up there on
Beacon Hill? It must be Governor Gerry doing pinwheels in his grave,
and the entire Massachusetts House of Representatives trembling in
their boots... (We better keep this quiet, or they're liable to pass
a law against AI :-)
------------------------------
Date: 4 Aug 86 23:57:30 GMT
From: decvax!mcnc!ecsvax!hes@ucbvax.berkeley.edu (Henry Schaffer)
Subject: Re: Computing with Neural Circuits:
A paper, "Computing with Neural Circuits: A Model" by John J.
Hopfield and David W. Tank is in the 8 Aug. 1986 issue of
Science (pp. 625-633.)
"A new conceptual framework and a minimization principle together
provide an understanding of computation in model neural circuits.
The circuits consist of nonlinear graded-response model neurons
organized into networks with effectively symmetric synaptic
connections. The neurons represent an approximation to biological
neurons in which a simplified set of important computational properties
is retained. Complex circuits solving problems similar to those
essential in biology can be analyzed and understood without the need
to follow the circuit dynamics in detail. Implementation of the model
with electronic devices will provide a class of electronic circuits of
novel form and function." (Abstract)
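[The symmetric-connection model circuits the abstract describes have a
simple discrete cousin: with symmetric weights and zero self-coupling,
asynchronous threshold updates can only decrease an energy function
E = -1/2 * sum_ij w[i][j] s_i s_j, so the state settles into a stored
pattern. A minimal sketch in Python; note Hopfield and Tank's paper uses
graded-response neurons, while the binary version below is the simpler
classic variant.]

```python
def train(patterns):
    """Hebbian weights for +/-1 patterns; the zero diagonal keeps the
    connection matrix symmetric with no self-coupling."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    """Asynchronous threshold updates; each flip lowers the network
    energy, so the dynamics converge to a nearby stored pattern."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s
```

Training on the pattern [1, 1, -1, -1] and recalling from the corrupted
state [1, -1, -1, -1] restores the stored pattern.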
------------------------------
Date: Wed, 06 Aug 86 15:21:54 EDT
From: BENJY%VTVM1.BITNET@WISCVM.ARPA
Subject: Petrocelli Books
I recently received an advertisement for an AI book published by
Petrocelli Books, Inc. I assume Petrocelli will be publishing more
AI books, so I would like to post a warning to potential authors.
I wrote two books for this company, and I've never received a royalty
check without asking for it although the contract I signed states that
statements would be sent twice a year. Last time I asked for a
statement, I was told that PBI had cash-flow problems, and I had to wait
several months past their grace period for a meager sum.
Ben Cline
Virginia Tech
BENJY@VTVM1.BITNET
------------------------------
Date: 8 Aug 86 15:30:13 GMT
From: ucbcad!nike!lll-crg!seismo!cmcl2!lanl!ls@ucbvax.berkeley.edu
(Lauren L Smith)
Subject: Functional Programming Bibliography
Andy Cheese's Functional Programming bibliography is ready for distribution
again. It covers all sorts of references relating to functional languages,
architectures for functional languages, to theory of, to garbage collecting,
to functional programming and multiprocessing, to logic programming &
functional combinations, to (well, you get the idea!). It has
been extensively updated since the last major distribution of it.
If you would be interested in receiving a copy (ONE request per site PLEASE!),
please contact the appropriate person.
North America: Lauren Smith
ARPA: ls@lanl
UUCP: {cmcl2,ihnp4}!lanl!ls
Everywhere Else: Andy Cheese
abc%computer-science.nottingham.ac.uk@Cs.Ucl.AC.UK
The bibliography is 24 files long, since it is too large to send as one
big file.
------------------------------
Date: Mon, 4 Aug 86 14:31:48 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Conservation of Info, etc.
In article <8608010557.AA11269@ucbvax.Berkeley.EDU>, larry@JPL-VLSI.ARPA writes:
> The ability to quantify and measure pattern and shape has profound implica-
> tions for the study of formerly mystical topics such as intelligence. It
> means we can develop conservation laws for information, without which you
> can't construct an essential ingredient of mathematics, equations.
While I agree with much of the article, this assumption looks superfluous
to me. Computer programs are a kind of mathematics, and they use assign-
ments and functions rather than equations.
More generally, I should like to see discussed what "information" means
in the abstract sense. After all, anything can be said to contain all
conceivable information about itself. Is "information" meaningful apart
from communication?
------------------------------
Date: Mon, 4 Aug 86 15:30:05 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: follow-up on philosophy articles
Newsgroups: mod.ai
Subject: Re: philosophy journals
References: <8607211801.AA17444@ellie.SUNYAB>
<8608010555.AA11229@ucbvax.Berkeley.EDU>
Sender: William J. Rapaport (rapaport@buffalo.csnet)
Reply-To: rapaport@sunybcs.UUCP (William J. Rapaport)
Followup-To: The Colonel's complaint
Organization: SUNY/Buffalo Computer Science
In article <8608010555.AA11229@ucbvax.Berkeley.EDU>
colonel@buffalo.CSNET ("Col. G. L. Sicherman") writes:
>In article <8607211801.AA17444@ellie.SUNYAB>, rapaport@buffalo.CSNET
>("William J. Rapaport") writes:
>
>> The original version of the ... problem may be found in:
>> Jackson, "Epiphenomenal Qualia," ←Philosophical Q.← 32(1982)127-136.
>> with replies in:
>> Churchland, "Reduction, Qualia, and the Direct Introspection of
>> Brain States," ←J. of Philosophy← 82(1985)8-28.
>> Jackson, "What Mary Didn't Know," ←J. of Philosophy← 83(1986)291-95.
>> (One of the reasons I stopped reading net.philosophy was that its
>> correspondents seemed not to know about what was going on in philosophy
>> journals!)
>
>Out of curiosity I hunted up the third article on the way back from lunch.
>It's aggressive and condescending; any sympathy I might have felt for
>the author's argument was repulsed by his sophomoric writing. I hope it's
>not typical of the writing in philosophy journals.
I don't quite understand what "aggressive and condescending" or
"sophomoric writing" have to do with philosophical argumentation.
One thing that philosophers try not to do is give ad hominem arguments.
A philosophical argument stands or falls on its logical merits, not its
rhetoric.
------------------------------
End of AIList Digest
********************
∂12-Aug-86 1821 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #179
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 12 Aug 86 18:21:49 PDT
Date: Tue 12 Aug 1986 15:11-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #179
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 13 Aug 1986 Volume 4 : Issue 179
Today's Topics:
Administrivia - Vacation,
Queries - AI Expert & Expert Systems and Maintenance Planning,
Expert Systems - ACE,
Games - 'Go' Challenge
----------------------------------------------------------------------
Date: Tue 12 Aug 86 00:44:30-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Administrivia - Vacation
I'm off to Singapore and Malaysia for a month. AIList will
resume in mid September.
-- Ken Laws
------------------------------
Date: Tue, 12 Aug 86 17:08:36 est
From: munnari!trlamct.oz!andrew@seismo.CSS.GOV (Andrew Jennings)
Subject: AI expert
I've just received a small ad for "AI Expert": the first commercial
AI magazine ever. Has anybody seen a copy? Comments?
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ACSNET: andrew@trlamct.trl VOICE: +1 61 3 5416241
UUCP: ...!{seismo, mcvax, ucb-vision, ukc}!munnari!trlamct.trl!andrew
ARPA: andrew%trlamct.trl.oz@seismo.css.gov
Dr. Andrew Jennings , Section Head, Applied Mathematics and Computer
Techniques Section,
Telecom Australia Research Laboratories,
P.O. Box 249
Clayton, Victoria 3168, AUSTRALIA.
------------------------------
Date: 7 Aug 86 13:14:48 GMT
From: ucbcad!nike!sri-spam!mordor!lll-crg!seismo!mcvax!kvvax4!rolfs@uc
bvax.berkeley.edu (Rolf Skatteboe)
Subject: expert systems and maintenance planning
Hello:
I'm just starting on a project whose main goal is to evaluate the
possibilities of combining the use of knowledge based systems with
maintenance planning.
So far, I have found very little work done in this field so everything
will be of interest. It is, however, maintenance planning of rotating
machinery that is the main area of interest.
I would like to get hold of everything: articles, program descriptions and things
like that. If someone is interested in further information about the
project, please let me know.
Thank you ---Grethe
------------------------------
Date: 11 Aug 86 16:35:00 GMT
From: hplabs!hplabsb!wiemann@ucbvax.berkeley.edu (Alan Wiemann)
Subject: Re: expert systems and maintenance planning
Grethe, Bell Labs developed a system for maintenance of cable which I think
was called ACE. Try there for starters.
Alan L. Wiemann
HP Labs
------------------------------
Date: 8 Aug 86 18:16:34 GMT
From: nbires!vianet!devine@ucbvax.berkeley.edu (Bob Devine)
Subject: 'Go' challenge
Here is a news item that was published in August 5th "PC Week":
You can be $1 million richer if you're the first
person to devise a Go program that can beat a human
expert.
MultiTech Inc., Taiwan's largest manufacturer of
personal computers, is sponsoring the contest in
conjunction with the Taiwanese Ing Chang-chi Weich'i
Educational Foundation.
MultiTech says that its motives are:
1. to create an awareness of the Chinese origins of the
game Go and to increase interest in the game;
2. to spur development of computer hardware, software
and artificial intelligence; and
3. to increase international awareness of progress in
the Taiwanese computer industry.
The contest was inspired by a similar one that began about
30 years ago and which promised to award its prize to the
author of the first chess program that could beat a human
master. That contest lasted for six years before the
prize was won -- a whopping 2,000 pounds sterling (about
$10,000).
The computer/Go contest will be staged annually from now
until the end of the century, according to MultiTech.
------------------------------
End of AIList Digest
********************
∂16-Sep-86 0515 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #180
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 16 Sep 86 05:15:02 PDT
Date: Tue 16 Sep 1986 02:14-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #180
To: AIList@SRI-STRIPE
AIList Digest Tuesday, 16 Sep 1986 Volume 4 : Issue 180
Today's Topics:
Administrivia - Resumption of Service,
AI Tools - C,
Expert Systems - Matching,
Philosophy - Argumentation Style & Sports Analogy,
Physiology - Rate of Tissue Replacement
----------------------------------------------------------------------
Date: Tue 16 Sep 86 01:28:28-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Resumption of Service
I'm back from vacation and almost have the mail streams under control
again. This issue clears out "old business" messages relating to the
discussions in early August. I'll follow with digests flushing the
accumulated queries, Usenet replies, news items, conference and
seminar abstracts, and bibliographic citations -- spread out a bit so
that I'm not deluged with mailer bounce messages from readers who have
dropped out without notification. Incidentally, about 30 people signed up
for direct distribution this month despite the inactivity of the list.
(Most of the additions for the last year have been on BITNET, often in
clusters as new universities join the net or become aware of the
Arpanet digests. Most Arpanet and CSNet sites are now using bboards
and redistribution lists or are making use of the Usenet mod.ai/net.ai
distribution.)
I plan to pass along only an abbreviated announcement for conferences
that have already been announced in the NL-KR, IRList, or Prolog lists
-- you can contact the message author if you need the full text.
(Note that this may reduce the yield of keyword searches through the
AIList archive; future historians will have to search the other lists
to get a full picture of AI activity. Anyone building an intelligent
mail-screening system should also incorporate cross-list linkages.
Any such screening system that can understand and coordinate these
message streams deserves a Turing award.)
-- Ken Laws
------------------------------
Date: Wed, 20 Aug 86 10:07:49 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: Reimplementing in C
> I've been hearing and seeing something for the past couple years,
> something that seems to be becoming a folk theorem. The theorem goes
> like this:
> Many expert systems are being reimplemented in C.
> I'm curious what the facts are.
[I program in C, and have reached the conclusion that most AI
programming could be done in that language as easily as in LISP
if libraries of list-oriented subroutines were available. (They
needn't be consed lists -- I use dynamically allocated arrays.)
You do have to worry about storage deallocation, but that buys you
considerable run-time efficiency. You also lose the powerful
LISP debugging environment, so fill your code with lots of
argument checks and ASSERTs. Tail recursion isn't optimized,
so C code should use iteration rather than recursion for most
array-based list traversals. Data-driven and object-oriented
coding are easy enough, but you can't easily build run-time
"active objects" (i.e., procedures to be applied to message
arguments); compiled subroutines have to do the work, and dynamic
linking is not generally worth the effort. I haven't tried much
parsing or hierarchy traversal, but programs such as LEX, YACC,
and MAKE show that it can be done. -- KIL]
Well, now, I don't know about re-implementing in C, but I myself
have been doing a fair amount of what might be called "expert
systems" programming in C, and pretty much out of necessity.
This is because I've been working in the up-and-coming world
of networks and "intelligent" communication devices. These
show much promise for the future; unfortunately they also
add a very "interesting" aspect to the job of an application
(much less a system) programmer.
The basic problem is that such comm devices act like black
boxes with a very large number of internal states; the states
aren't completely documented; those that are documented are
invariably misunderstood by anyone but the people who built
the boxes; and worst of all, there is usually no reliable
way to get the box into a known initial state.
As a result, there is usually no way to write a simple,
straightforward routine to deal with such gadgets. Rather,
you are forced to write code that tries to determine 1)
what states a given box can have; 2) what state it appears
to be in now; and 3) what sort of command will get it from
state X to state Y. The debugging process involves noting
unusual responses of the box to a command, discussing the
"new" behavior with the experts (the designers if they are
available, or others with experience with the box), and
adding new cases to your code to handle the behavior when
it shows up again.
One of the simplest examples is an "intelligent ACU", which
we used to call a "dial-out modem". These now contain their
own processor, plus enough ROM and RAM to amount
to small computer systems of their own. Where such boxes
used to have little more than a status line to indicate the
state of a line (connected/disconnected), they now have an
impressive repertoire of commands, with a truly astonishing
list of responses, most of which you hope never to see. But
your code will indeed see them. When your code first talks
to the ACU, the responses may include any of:
1. Nothing at all.
2. Echo of the prompt.
3. Command prompt (different for each ACU).
4. Diagnostic (any of a large set).
Or the ACU may have been in a "connected" state, in which
case your message will be transmitted down the line, to be
interpreted by whatever the ACU was connected to by the most
recent user. (This recursive case is really fun!:-)
The last point is crucial: In many cases, you don't know
who is responding to your message. You are dealing with
chains of boxes, each of which may respond to your message
and/or pass it on to the next box. Each box has a different
behaviour repertoire, and even worse, each has a different
syntax. Furthermore, at any time, for whatever reason
(such as power glitches or commands from other sources),
any box may reset its internal state to any other state.
You can be talking to the 3rd box in a chain, and suddenly
the 2nd breaks in and responds to a message not intended
for it.
The best way of handling such complexity is via an explicit
state table that says what was last sent down the line, what
the response was, what sort of box we seem to be talking to,
and what its internal state seems to be. The code to use such
info to elicit a desired behavior rapidly develops into a real
piece of "expert-systems" code.
So far, there's no real need for C; this is all well within the
powers of Lisp or Smalltalk or Prolog. So why C? Well, when
you're writing comm code, you have one extra goodie. It's very
important that you have precise control over every bit of every
character. The higher-level languages always seem to want to
"help" by tokenizing the input and putting the output into some
sort of standard format. This is unacceptable.
For instance, the messages transmitted often don't have any
well-defined terminators. Or, rather, each box has its own
terminator(s), but you don't know beforehand which box will
respond to a given message. They often require nulls. It's
often very important whether you use CR or LF (or both, in
a particular order). And you have to timeout various inputs,
else your code just hangs forever. Such things are very awkward,
if not impossible, to express in the typical AI languages.
This isn't to say that C is the world's best AI language; quite
the contrary. I'd love to get a chance to work on a better one.
(Hint, hint....) But given the languages available, it seems
to be the best of a bad lot, so I use it.
If you think doing it in C is weird, just wait 'til
you see it in Ada....
------------------------------
Date: 2 Sep 86 08:31:00 EST
From: "CLSTR1::BECK" <beck@clstr1.decnet>
Reply-to: "CLSTR1::BECK" <beck@clstr1.decnet>
Subject: matching
Mr. Rosa is correct in saying that "the obstacles to implementation are not
technological," since this procedure is currently being implemented. See
"Matches Hit Civil Servants Hardest" in the August 15, 1986 GOVERNMENT COMPUTER
NEWS. "Computer matching" is defined as "searching the available data for
addresses, financial information, specific personal identifiers and various
irregularities". The congressional Office of Technology Assessment has recently
issued a report, "Electronic Record Systems and Individual Privacy," that
discusses matching.
My concern with this is how the conflicting rules of society will be reconciled
to treat the individual fairly. Maybe the cash society and anonymous logins will
become prevalent. Do you think that the falling cost of data will force data
keepers to do more searches to justify their existence? Has there been any
discussion of this topic?
peter beck <beck@ardec-lcss.arpa>
------------------------------
Date: Tue, 12 Aug 86 13:20:20 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: philosophy articles
> >Out of curiosity I hunted up [Jackson, "What Mary Didn't Know," ←J.
> >of Philosophy← 83(1986) 291-295] on the way back from lunch.
> >It's aggressive and condescending; any sympathy I might have felt for
> >the author's argument was repulsed by his sophomoric writing. I hope it's
> >not typical of the writing in philosophy journals.
>
> I don't quite understand what "aggressive and condescending" or
> "sophomoric writing" have to do with philosophical argumentation.
> One thing that philosophers try not to do is give ad hominem arguments.
> A philosophical argument stands or falls on its logical merits, not its
> rhetoric.
That's an automatic reaction, and I think it's unsound. Since we're
not in net.philosophy, I'll be brief.
Philosophers argue about logic, terminology, and their experience of
reality. There isn't really much to argue about where logic is
concerned: we all know the principles of formal logic, and we're
all writing sincerely about reality, which has no contradictions in
itself. What we're really interested in is the nature of our exist-
ence; the logic of how we describe it doesn't matter.
One reason that Jackson's article irritated me is that he uses formal
logic, of the sort "Either A or B, but not A, therefore B." This kind
of argument insults the reader's intelligence. Jackson ought to know
that nobody is going to question the soundness of such logic, but that
all his opponents will question his premises and his definitions.
Moreover, he appears to regard his premises and definitions as unassailable.
I call that stupid philosophizing.
Ad-hominem attacks may well help to discover the truth. When the man
with jaundice announces that everything is fundamentally yellow, you
must attack the man, not the logic. So long as he's got the disease,
he's right!
------------------------------
Date: Tue, 12 Aug 86 12:53:07 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: talk to the medium (from Risks Digest)
> Whether he was talking about the broadcast or the computer industry, he
> got the analogy wrong.
Of course--that's what makes the analogy "stick."
> If the subject is broadcasting, the sports analogy to a "programmer"
> is the guy that makes the play schedules.
Not exactly. McLuhan's "programmer" is the man who selects the content
of the medium, not what computer people call a programmer.
> ... But still, in computing,
> a programmer bears at least partial responsibility for the computer's
> (mis)behaviour.
I agree. McLuhan is writing not about responsibility but about responsiveness.
Last Saturday I visited an apartment where a group of men and kids were
shouting at the TV set during a football game. It's a natural response,
and it would have been effective if TV were an interactive medium.
If you dislike this posting, will you complain to the moderator? To
the people who programmed netnews? To the editor of the New York
←Times?← Of course not; you must talk to the medium, not to the
programmer!
------------------------------
Date: Wed, 20 Aug 86 10:06:56 edt
From: cdx39!jc%rclex.UUCP@harvard.HARVARD.EDU
Subject: Re: "Proper" Study of Science, Conservation of Info
[The following hasn't any obvious AI, but it's interesting enough
to pass along. Commonsense reasoning at work. -- KIL]
> The ability to quantify and measure ... has profound implications ...
>
> ... A decade from now it's likely that none of our bodies
> will contain EVEN A SINGLE ATOM now in them. Even bones are fluid in
> biological organisms; ...
OK, let's do some BOTE (Back Of The Envelope) calculations.
According to several bio and med texts I've read over the
years, a good estimate of the half-life residency of an atom
in the soft portions of a mammal's body is 1/2 year; in the
bones it is around 2 years. The qualifications are quite
obvious and irrelevant here; we are going for order-of-magnitude
figures.
For those not familiar with the term, "half-life residency"
means the time to replace half the original atoms. This
doesn't mean that you replace half your soft tissues in
6 months, and the other half in the next six months. What
happens is exponential: in one year, 1/4 of the original
are left; in 18 months, 1/8 are left, and so on.
Ten years is about 5 half-lives for the bones, and 20 for the
soft tissues. A human body masses about 50 Kg, give or take
a factor of 2. The soft tissues are primarily water (75%)
and COH2; we can treat it all as water for estimating the
number of atoms. This is about (50 Kg) * (1000 g/Kg) / (18
g/mole), or roughly 3000 moles; times 6*10↑23 molecules per
mole and 3 atoms per molecule, that gives about 5*10↑27 atoms.
The bones are a bit denser (with fewer atoms per gram); the
rest is a bit less dense (with more atoms per gram), but it's
about right. For order-of-magnitude estimates, call it roughly
10↑27 atoms of soft tissue and 10↑26 atoms of bone.
In 5 half-lives, we divide the bone figure by 2↑5 = 32, giving
about 3*10↑24 original bone atoms left. For the soft tissues,
we divide by 2↑20, which is about 10↑6, giving roughly 10↑21
of the original atoms.
Of course, although these are big numbers, they don't amount to
much mass, especially for the soft tissues. But they are a lot
more than a single atom, even if they are off by an order of
magnitude.
Does anyone see any serious errors in these calculations? Remember
that these are order-of-magnitude estimates; quibbling with anything
other than the first significant digit and the exponent is beside
the point. The only likely source of error is in the half-life
estimate, but the replacement would have to be much faster than a
half-year to stand a chance of eliminating every atom in a decade.
In fact, with the exponential decay at work here, it is easy
to see that it would take about 90 half-lives (2↑90 ≈ 10↑27)
to replace the last atom with better than 50% probability.
For 10 years, this would mean a half-life residency of about
6 weeks, which may be true for a mouse or a sparrow, but I've
never seen any hint that human bodies might replace themselves
nearly this fast.
In fact, we can get a good upper bound on how fast our atoms
could be replaced, as well as a good cross-check on the above
rough calculations, by considering how much we eat. A normal
human diet is roughly a single Kg of food a day. (The air
breathed isn't relevant; very little of the oxygen ends up
incorporated into tissues.) In 6 weeks, this would add up to
about 50 Kg. So it would require using very nearly all the
atoms in our food as replacement atoms to do the job required.
This is clearly not feasible; it is almost exactly the upper
bound, and the actual figure has to be lower. A factor of 4
lower would give us the above estimate for the soft tissues,
which seems feasible.
There's one more qualification, but it works in the other
direction. The above calculations are based on the assumption
that incoming atoms are all 'new'. For people in most urban
settings, this is close enough to be treated as true. But
consider someone whose sewage goes into a septic tank and
whose garbage goes into a compost pile, and whose diet is
based on produce of their garden, hen-house, etc. The diet
of such people will contain many atoms that have been part
of their bodies in previous cycles, especially the C and N
atoms, but also many of the O and H atoms. Such people could
retain a significantly larger fraction of original atoms
after a decade.
Please don't take this as a personal attack. I just couldn't
resist the combination of the quoted lines, which seemed to
be a clear invitation to do some numeric calculations. In
fact, if someone has figures good to more places, I'd like
to see them.
------------------------------
End of AIList Digest
********************
∂17-Sep-86 1608 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #181
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 17 Sep 86 16:07:52 PDT
Date: Wed 17 Sep 1986 09:14-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #181
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 17 Sep 1986 Volume 4 : Issue 181
Today's Topics:
Conferences - ACM Office Information Systems &
IEEE Symposium on Logic Programming '86
----------------------------------------------------------------------
Date: Mon, 15 Sep 86 23:14:51 edt
From: rba@petrus.bellcore.com (Robert B. Allen)
Subject: Conference on Office Information Systems - Brown U.
ACM CONFERENCE ON OFFICE INFORMATION SYSTEMS
October 6-8, 1986, Providence, R.I.
Conference Chair: Carl Hewitt, MIT
Program Chair: Stan Zdonik, Brown University
Keynote Speaker: J.C.R. Licklider, MIT
Distinguished Lecturer: A. van Dam, Brown University
COIS is a major research conference on the design and use of
computing systems for professional and knowledge workers.
At this meeting, sessions and panels emphasize AI and
organizational models of offices as sites for distributed information
processing. Other themes include user interfaces,
graphics, group cooperation, and object-oriented systems.
For more information, call the Conference Registrar at Brown U.
(401-813-1839), or send electronic mail to mhf@brown.CSNET.
------------------------------
Date: Tue, 9 Sep 86 23:50:34 MDT
From: keller@utah-cs.ARPA (Bob Keller)
Subject: Conference - SLP '86
We have requested, and the IEEE has agreed, that
Symposium registrations be accepted at the "early" fee for a
couple more days, so please act immediately
if you wish to take advantage of this rate.
[Sorry for the delay -- AIList doesn't always function in
real time. -- KIL]
Hotel Reservations: phone 801-531-1000, telex 389434
The (nearly) final schedule:
SLP '86
Third IEEE Symposium on
LOGIC PROGRAMMING
September 21-25, 1986
Westin Hotel Utah
Salt Lake City, Utah
SUNDAY, September 21
19:00 - 22:00 Symposium and tutorial registration
MONDAY, September 22
08:00 - 09:00 Symposium and tutorial registration
09:00 - 17:30 TUTORIALS (concurrent) Please see abstracts later.
George Luger Introduction to AI Programming in Prolog
University of New Mexico
David Scott Warren Building Prolog Interpreters
SUNY, Stony Brook
John Conery Theory of Parallelism, with Applications to
University of Oregon Logic Programming
12:00 - 17:30 Exhibit set up time
18:00 - 22:00 Symposium registration
20:00 - 22:00 Reception
TUESDAY, September 23
08:00 - 12:30 Symposium registration
09:00 Exhibits open
09:00 - 09:30 Welcome and announcements
09:30 - 10:30 INVITED SPEAKER:
W. W. Bledsoe, MCC
Some Thoughts on Proof Discovery
11:00 - 12:30 SESSION 1: Applications
(Chair: Harvey Abramson)
The Logic of Tensed Statements in English -
an Application of Logic Programming
Peter Ohrstrom, University of Aalborg
Nils Klarlund, University of Aarhus
Incremental Flavor-Mixing of Meta-Interpreters for
Expert System Construction
Leon Sterling and Randall D. Beer
Case Western Reserve University
The Phoning Philosopher's Problem or
Logic Programming for Telecommunications Applications
J.L. Armstrong, N.A. Elshiewy, and R. Virding
Ericsson Telecom
14:00 - 15:30 SESSION 2: Secondary Storage
(Chair: Maurice Bruynooghe)
EDUCE - A Marriage of Convenience:
Prolog and a Relational DBMS
Jorge Bocca, ECRC, Munich
Paging Strategy for Prolog Based Dynamic Virtual Memory
Mark Ross, Royal Melbourne Institute of Technology
K. Ramamohanarao, University of Melbourne
A Logical Treatment of Secondary Storage
Anthony J. Kusalik, University of Saskatchewan
Ian T. Foster, Imperial College, London
16:00 - 17:30 SESSION 3: Compilation
(Chair: Richard O'Keefe)
Compiling Control
Maurice Bruynooghe, Danny De Schreye, Bruno Krekels
Katholieke Universiteit Leuven
Automatic Mode Inference for Prolog Programs
Saumya K. Debray, David S. Warren
SUNY at Stony Brook
IDEAL: an Ideal DEductive Applicative Language
Pier Giorgio Bosco, Elio Giovannetti
C.S.E.L.T., Torino
17:30 - 19:30 Reception
20:30 - 22:30 Panel (Wm. Kornfeld, moderator)
Logic Programming for Systems Programming
Panelists: Steve Taylor, Weizmann Institute
Steve Gregory, Imperial College
Bill Wadge
A researcher from ICOT
(sorry this is incomplete)
WEDNESDAY, September 24
09:00 - 10:00 INVITED SPEAKER:
Sten Ake Tarnlund, Uppsala University
Logic Programming - A Logical View
10:30 - 12:00 SESSION 4: Theory
(Chair: Jean-Louis Lassez)
A Theory of Modules for Logic Programming
Dale Miller
University of Pennsylvania
Building-In Classical Equality into Prolog
P. Hoddinott, E.W. Elcock
The University of Western Ontario
Negation as Failure Using Tight Derivations
for General Logic Programs
Allen Van Gelder
Stanford University
13:30 - 15:00 SESSION 5: Control
(Chair: Jacques Cohen)
Characterisation of Terminating Logic Programs
Thomas Vasak, The University of New South Wales
John Potter, New South Wales Institute of Technology
An Execution Model for Committed-Choice
Non-Deterministic Languages
Jim Crammond
Heriot-Watt University
Timestamped Term Representation in Implementing Prolog
Heikki Mannila, Esko Ukkonen
University of Helsinki
15:30 - 22:00 Excursion
THURSDAY, September 25
09:00 - 10:30 SESSION 6: Unification
(Chair: Uday Reddy)
Refutation Methods for Horn Clauses with Equality
Based on E-Unification
Jean H. Gallier and Stan Raatz
University of Pennsylvania
An Algorithm for Unification in Equational Theories
Alberto Martelli, Gianfranco Rossi
Universita' di Torino
An Implementation of Narrowing: the RITE Way
Alan Josephson and Nachum Dershowitz
University of Illinois at Urbana-Champaign
11:00 - 12:30 SESSION 7: Parallelism
(Chair: Jim Crammond)
Selecting the Backtrack Literal in the
AND Process of the AND/OR Process Model
Nam S. Woo and Kwang-Moo Choe
AT & T Bell Laboratories
Distributed Semi-Intelligent Backtracking for a
Stack-based AND-parallel Prolog
Peter Borgwardt, Tektronix Labs
Doris Rea, University of Minnesota
The Sync Model for Parallel Execution of Logic Programming
Pey-yun Peggy Li and Alain J. Martin
California Institute of Technology
14:00 - 15:30 SESSION 8: Performance
Redundancy in Function-Free Recursive Rules
Jeff Naughton
Stanford University
Performance Evaluation of a Storage Model for
OR-Parallel Execution
Andrzej Ciepelewski and Bogumil Hausman
Swedish Institute of Computer Science (SICS)
MALI: A Memory with a Real-Time Garbage Collector
for Implementing Logic Programming Languages
Yves Bekkers, Bernard Canet, Olivier Ridoux, Lucien Ungaro
IRISA/INRIA Rennes
16:00 - 17:30 SESSION 9: Warren Abstract Machine
(Chair: Manuel Hermenegildo)
A High Performance LOW RISC Machine
for Logic Programming
J.W. Mills
Arizona State University
Register Allocation in a Prolog Machine
Saumya K. Debray
SUNY at Stony Brook
Garbage Cut for Garbage Collection of Iterative Programs
Jonas Barklund and Hakan Millroth
Uppsala University
EXHIBITS:
An exhibit area including displays by publishers, equipment
manufacturers, and software houses will accompany the Symposium.
The list of exhibitors includes: Arity, Addison-Wesley, Elsevier,
Expert Systems, Logicware, Overbeek Enterprises, Prolog Systems,
and Quintus. For more information, please contact:
Dr. Ross A. Overbeek
Mathematics and Computer Science Division
Argonne National Laboratory
9700 South Cass Ave.
Argonne, IL 60439
312/972-7856
ACCOMMODATIONS:
The Westin Hotel Utah is a gracious turn-of-the-century hotel
with Mobil 4-Star and AAA 5-Star ratings. The Temple Square
Hotel, located one city block away, offers basic comforts for
budget-conscious attendees.
MEALS AND SOCIAL EVENTS:
Symposium registrants (excluding students and retired members)
will receive tickets for lunches on September 23, 24, and 25,
receptions on September 22 and 23, and an excursion the afternoon
of September 24. The excursion will comprise a steam train trip
through scenic Provo Canyon, and a barbeque at Deer Valley
Resort, Park City, Utah.
Tutorial registrants will receive lunch tickets for September 22.
TRAVEL:
The Official Carrier for SLP '86 is United Airlines, and the
Official Travel Agent is Morris Travel (361 West Lawndale Drive,
Salt Lake City, Utah 84115, phone 1-800-621-3535). Special
airfares are available to SLP '86 attendees. Contact Morris
Travel for details.
A courtesy limousine is available from Salt Lake International
Airport to both symposium hotels, running every half hour from
6:30 to 23:00. The taxi fare is approximately $10.
CLIMATE:
Salt Lake City generally has warm weather in September, although
evenings may be cool. A warm jacket should be brought for the
excursion. Some rain is normal this time of year.
SLP '86 Symposium and Tutorial Registration Coupon:
Advance symposium and tutorial registration is available until
September 1, 1986. No refunds will be made after that date. Send
a check or money order (no currency will be accepted) payable to
"Third IEEE Symposium on Logic Programming" to:
Third IEEE Symposium on Logic Programming
IEEE Computer Society
1730 Massachusetts Avenue, N.W.
Washington, D.C. 20036-1903
[...]
Symposium Registration: Advance On-Site
IEEE Computer Society members $185 $215
Non-members $230 $270
Full-time student members $ 50 $ 50
Full-time student non-members $ 65 $ 65
Retired members $ 50 $ 50
Tutorial Registration:
("Luger", "Warren", or "Ostlund")
Advance On-Site
IEEE Computer Society members $140 $170
Non-members $175 $215
SLP '86 Hotel Reservation:
Mail or Call: phone 801-531-1000, telex 389434
Westin Hotel Utah
Main and South Temple Streets
Salt Lake City, UT 84111
A deposit of one night's room or credit card guarantee is
required for arrivals after 6pm.
Room Rates:
Westin Hotel Utah Temple Square Hotel
single room $60 $30
double room $70 $36
Reservations must be made mentioning SLP '86 by August 31, 1986
to guarantee these special rates.
SLP '86 TUTORIAL ABSTRACTS
IMPLEMENTATION OF PROLOG INTERPRETERS AND COMPILERS
DAVID SCOTT WARREN
SUNY AT STONY BROOK
Prolog is by far the most used of various logic programming
languages that have been proposed. The reason for this is the
existence of very efficient implementations. This tutorial will
show in detail how this efficiency is achieved.
The first half of this tutorial will concentrate on Prolog
compilation. The approach is first to define a Prolog Virtual
Machine (PVM), which can be implemented in software, microcode,
hardware, or by translation to the language of an existing
machine. We will describe in detail the PVM defined by D.H.D.
Warren (SRI Technical Note 309) and discuss how its data objects
can be represented efficiently. We will also cover issues of
compilation of Prolog source programs into efficient PVM
programs.
ARTIFICIAL INTELLIGENCE AND PROLOG:
AN INTRODUCTION TO THEORETICAL
ISSUES IN AI WITH PROLOG EXAMPLES
GEORGE F. LUGER
UNIVERSITY OF NEW MEXICO
This tutorial is intended to introduce the important concepts of
both Artificial Intelligence and Logic Programming. To
accomplish this task, the theoretical issues involved in AI
problem solving are presented and discussed. These issues are
exemplified with programs written in Prolog that implement the
core ideas. Finally, the design of a Prolog interpreter as a
Resolution Refutation system is presented.
The main ideas from AI problem solving that are presented
include: 1) An introduction of AI as representation and search.
2) An introduction of the Predicate Calculus as the main
representation formalism for Artificial Intelligence. 3) Simple
examples of Predicate Calculus representations, including a
relational data base. 4) Unification and its role in both the
Predicate Calculus and Prolog. 5) Recursion, the control
mechanism for searching trees and graphs. 6) The design of search
strategies, especially depth-first, breadth-first, and best-first
or "heuristic" techniques. 7) The Production System and its
use both for organizing search in a Prolog data base and as
the basic data structure for "rule-based" Expert Systems.
The above topics are presented with simple Prolog program
implementations, including a Production System code for
demonstrating search strategies. The final topic presented is an
analysis of the Prolog interpreter and an analysis of this
approach to the more general issue of logic programming.
Resolution is considered as an inference strategy and its use in
a refutation system for "answer extraction" is presented. More
general issues in AI problem solving, such as the relation of
"logic" to "functional" programming are also discussed.
PARALLELISM IN LOGIC PROGRAMMING
JOHN CONERY
UNIVERSITY OF OREGON
The fields of parallel processing and logic programming have
independently attracted great interest among computing
professionals recently, and there is currently considerable
activity at the interface, i.e. in applying the concepts of
parallel computing to logic programming and, more specifically
yet, to Prolog. The application of parallelism to Logic
Programming takes two basic but related directions. The first
involves leaving the semantics of sequential programming, say
ordinary Prolog, as intact as possible, and using parallelism,
hidden from the programmer, to improve execution speed. This has
traditionally been a difficult problem requiring very intelligent
compilers. It may be an easier problem with logic programming
since parallelism is not artificially made sequential, as with
many applications expressed in procedural languages. The second
direction involves adding new parallel programming primitives to
Logic Programming to allow the programmer to explicitly express
the parallelism in an application.
This tutorial will assume a basic knowledge of Logic Programming,
but will describe current research in parallel computer
architectures, and will survey many of the new parallel machines,
including shared-memory architectures (RP3, for example) and
non-shared-memory architectures (hypercube machines, for
example). The tutorial will then describe many of the current
proposals for parallelism in Logic Programming, including those
that allow the programmer to express the parallelism and those
that hide the parallelism from the programmer. Included will be
such proposals as Concurrent Prolog, Parlog, Guarded Horn Clauses
(GHC), and Delta-Prolog. An attempt will be made to partially
evaluate many of these proposals for parallelism in Logic
Programming, both from a pragmatic architectural viewpoint and
from a semantic viewpoint.
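The "answer extraction" mentioned in the earlier Prolog tutorial can
be sketched very roughly as follows (an editor's toy example in
Python with hypothetical facts; real answer extraction reads the
answer out of the substitution built during a full resolution
refutation): the query contains a variable, and each successful
unification against the database yields one binding for it.

```python
# Toy answer extraction: unify a query atom containing a variable
# against ground facts; each unifier is one extracted answer.
FACTS = [("parent", "tom", "bob"), ("parent", "tom", "liz")]

def is_var(t):
    """Prolog convention: identifiers starting upper-case are variables."""
    return t[:1].isupper()

def unify(query, fact):
    """Unify a query atom with a ground fact; return bindings or None."""
    if len(query) != len(fact):
        return None
    theta = {}
    for q, f in zip(query, fact):
        if is_var(q):
            if theta.get(q, f) != f:   # conflicting binding -> fail
                return None
            theta[q] = f
        elif q != f:                   # constant mismatch -> fail
            return None
    return theta

def answers(query):
    """Each successful refutation yields one extracted answer binding."""
    return [theta for fact in FACTS
            if (theta := unify(query, fact)) is not None]

print(answers(("parent", "tom", "X")))  # -> [{'X': 'bob'}, {'X': 'liz'}]
```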
Conference Chairperson
Gary Lindstrom, University of Utah
Program Chairperson
Robert M. Keller, University of Utah
Local Arrangements Chairperson
Thomas C. Henderson, University of Utah
Tutorials Chairperson
George Luger, University of New Mexico
Exhibits Chairperson
Ross Overbeek, Argonne National Lab.
Program Committee
Francois Bancilhon, MCC
John Conery, U. of Oregon
Al Despain, U.C. Berkeley
Herve Gallaire, ECRC, Munich
Seif Haridi, SICS, Stockholm
Lynette Hirschman, SDC
Peter Kogge, IBM, Owego
William Kornfeld, Quintus Systems
Gary Lindstrom, University of Utah
George Luger, University of New Mexico
Rikio Onai, ICOT/NTT, Tokyo
Ross Overbeek, Argonne National Lab.
Mark Stickel, SRI International
Sten Ake Tarnlund, Uppsala University
------------------------------
End of AIList Digest
********************
∂17-Sep-86 2046 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #182
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 17 Sep 86 20:46:12 PDT
Date: Wed 17 Sep 1986 09:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #182
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 17 Sep 1986 Volume 4 : Issue 182
Today's Topics:
Conference - ISMIS'86 program
----------------------------------------------------------------------
Date: Thu, 11 Sep 86 17:03 EST
From: ZEMANKOVA%tennessee.csnet@CSNET-RELAY.ARPA
Subject: Conference - ISMIS'86 program
PRELIMINARY PROGRAM
INTERNATIONAL SYMPOSIUM ON METHODOLOGIES FOR
INTELLIGENT SYSTEMS
October 22 - 25, 1986
Hilton Hotel
Knoxville, Tennessee
Sponsored by
* ACM Special Interest Group on Artificial Intelligence
in cooperation with
* University of Tennessee at Knoxville
* The Data Systems Research and Development Program
of Martin Marietta Energy Systems, and
Oak Ridge National Laboratory
* University of North Carolina at Charlotte
and hosted by
* The Procter and Gamble Company
CHAIRPERSONS
Zbigniew W. Ras (UTK and UNCC)
Maria Zemankova (UTK and UNCC)
SYMPOSIUM COORDINATOR
J. Robin B. Cockett (UTK)
ORGANIZING COMMITTEE
S. Chen (IUPUI) M. Emrich (ORNL)
G. Epstein (UNCC & Indiana) K. O'Kane (UTK)
J. Poore (Georgia Tech.& UTK) R. Yager (Iona)
PROGRAM COMMITTEE
P. Andrews (Carnegie-Mellon)
J. Bourne (Vanderbilt)
M. Fitting (CUNY)
B. Gaines (Calgary, Canada)
M. Gupta (Saskatchewan, Canada)
M. Karpinski (Bonn, West Germany)
E. Knuth (Budapest, Hungary)
S. Kundu (LSU)
W. Marek (Kentucky)
R. Michalski (Illinois-Urbana)
C. Negoita (CUNY)
R. Nelson (Case Western Reserve)
Z. Pawlak (Warsaw, Poland)
A. Pettorossi (Rome, Italy)
E. Sandewall (Linkoping, Sweden)
G. Shafer (Kansas)
M. Shaw (Calgary, Canada)
J. Tou (Florida)
PURPOSE OF THE SYMPOSIUM
This Symposium is intended to attract researchers
who are actively engaged in both theoretical and
practical aspects of intelligent systems. The goal
is to provide a platform for a useful exchange
between theoreticians and practitioners, and to
foster the cross-fertilization of ideas in the
following areas:
* Expert Systems
* Knowledge Representation
* Logic for Artificial Intelligence
* Learning and Adaptive Systems
* Intelligent Databases
* Approximate Reasoning
There will be an exhibit of A.I. hardware and software
and of A.I. literature.
Symposium Proceedings will be published by ACM Press.
ISMIS 86 Symposium Schedule
Tuesday, October 21, 1986
=========================
6:00 pm - 9:00 pm Symposium Registration
7:00 pm - 9:00 pm Reception (Cash Bar)
6:00 pm - 9:00 pm Exhibits
Wednesday, October 22, 1986
===========================
8:00 am - 12:00 pm Symposium Registration
ISMIS'86 Opening Session
9:00 am - 9:20 am
Session 1: Expert Systems
I1: Invited Papers
Chair: M. Emrich (ORNL)
9:20am - 10:05am
"Recent Developments in Expert Systems"
B. Buchanan (Stanford Univ.)
10:05am - 10:50am
"Generic Tasks in Artificial Intelligence and Mentalese"
B. Chandrasekaran (Ohio State Univ.)
A1: Contributed Papers
Chair: R. Cockett (UT Knoxville)
11:15am - 11:40am
"The Frame-Definition Language for Customizing the
Raffaello Structure-Editor in Host Expert Systems"
E. Nissan (Ben-Gurion, Israel)
11:40am - 12:05pm
"Knowledge Base Organization in Expert Systems"
S. Frediani, L. Saitta (Torino, Italy)
12:05pm - 12:30pm
"NESS: A Coupled Simulation Expert System"
K. Kawamura, G. Beale, J. Rodriguez-Moscoso, B.J. Hsieh,
S. Padalkar (Vanderbilt)
B1: Contributed Papers
Chair: J. Bourne (Vanderbilt)
11:15am - 11:40am
"Design of an Expert System for Utilization Research"
A. Zvieli, S.K. MacGregor, J.Z. Shapiro (LSU)
11:40am - 12:05pm
"An Expert System for Dynamic Scheduling"
S. Floyd, D. Ford (Huntsville, Alabama)
12:05pm - 12:30pm
"Beginners' Strategies in Example Based Expert Systems"
T. Whalen, B. Schott (Atlanta, Georgia)
12:30 pm - 2:00 pm Exhibits
Session 2: Intelligent Databases
I2: Invited Papers
Chair: W. Marek (UK Lexington)
2:00pm - 2:45pm
"Using Knowledge Representation for the Development
of Interactive Information Systems"
J. Mylopoulos (Toronto, Canada)
2:45pm - 3:30pm
"Acquisition of Knowledge from Data"
G. Wiederhold (Stanford Univ.)
A2: Contributed Papers
Chair: S. Kundu (LSU)
3:50pm - 4:15pm
"A Decidable Query Answering Algorithm for Circumscriptive
Theories"
T. Przymusinski (El Paso, Texas)
4:15pm - 4:40pm
"Fuzzy Knowledge Engineering Techniques in Scientific Document
Classification"
R. Lopez de Mantaras (Barcelona, Spain)
4:40pm - 5:05pm
"A Semantic and Logical Front-end to a Database System"
M. Rajinikanth, P.K. Bose (Texas Instruments, Dallas)
5:05pm - 5:30pm
"A Knowledge-Based Approach to Online Document Retrieval
System Design"
G. Biswas, J.C. Bezdek, R.L. Oakman (Columbia, S.C.)
5:30pm - 5:55pm
"Towards an Intelligent and Personalized Information Retrieval
System"
S.Myaeng, R.R. Korfhage (Southern Methodist, Texas)
6:00 pm - 7:30 pm Exhibits
7:30 pm - 10:00 pm Dinner Theatre
Karel Capek, R.U.R.
Thursday, October 23, 1986
==========================
Session 3: Approximate Reasoning
I3: Invited Papers
Chair: M. Zemankova (UT Knoxville)
9:00am - 9:45am
"Inductive Models under Uncertainty"
P. Cheeseman (NASA AMES and SRI)
9:45am - 10:30am
"The Concept of Generalized Assignment Statement and its
Application to Knowledge Representation in Fuzzy Logic"
L.A. Zadeh (Berkeley)
A3: Contributed Papers
Chair: B. Bouchon (Paris, France)
10:50am - 11:15am
"Expert System on a Chip: An Engine for Real-Time Approximate
Reasoning"
M. Togai (Rockwell International),
H. Watanabe (AT&T Bell Lab, Holmdel)
11:15am - 11:40am
"Selecting Expert System Frameworks within the Bayesian Theory"
S.W. Norton (PAR Government Systems Co., New Hartford)
11:40am - 12:05pm
"Inference Propagation in Emitter, System Hierarchies"
T. Sudkamp (Wright State)
12:05pm - 12:30pm
"Estimation of Minimax Values"
P. Purdom (Indiana), C.H. Tzeng (Ball State Univ.)
B3: Contributed Papers
Chair: E. Nissan (Ben-Gurion, Israel)
10:50am - 11:15am
"Aggregating Criteria with Quantifiers"
R.R. Yager (Iona College)
11:15am - 11:40am
"Approximating Sets with Equivalence Relations"
W. Marek (Kentucky), H. Rasiowa (Warsaw, Poland)
11:40am - 12:05pm
"Evidential Logic and Dempster-Shafer Theory"
S. Chen (UNC-Charlotte)
12:05pm - 12:30pm
"Propagating Belief Functions with Local Computations"
P.P. Shenoy, G. Shafer (Lawrence, Kansas)
12:30 pm - 2:00 pm Exhibits
Session 4: Logics for Artificial Intelligence
I4: Invited Papers
Chair: M. Fitting (CUNY)
2:00pm - 2:45pm
"Automated Theorem Proving: Mapping Logic into A.I."
D.W. Loveland (Duke Univ.)
2:45pm - 3:30pm
"Extensions to Functional Programming in Scheme"
D.A. Plaisted, J. W. Curry (UNC Chapel Hill)
A4: Contributed Papers
Chair: G. Epstein (UNC Charlotte)
3:50pm - 4:15pm
"Logic Programming Semantics using a Compact Data Structure"
M. Fitting (CUNY)
4:15pm - 4:40pm
"On the Relationship between Autoepistemic Logic and Parallel
Circumscription"
M. Gelfond, H. Przymusinska (El Paso, Texas)
4:40pm - 5:05pm
"A Preliminary Excursion Into Step-Logics"
J. Drapkin, D. Perlis (College Park, Maryland)
5:05pm - 5:30pm
"Tree Resolution and Generalized Semantic Tree"
S. Kundu (LSU)
5:30pm - 5:55pm
"An Inference Model for Inheritance Hierarchies with
Exceptions"
K. Whitebread (Honeywell, Minneapolis)
6:00 pm - 7:30 pm Exhibits
7:30 pm - 9:30 pm Symposium Banquet
Keynote Speaker: Brian Gaines (Calgary, Canada)
Friday, October 24, 1986
========================
Session 5: Learning and Adaptive Systems
I5: Invited Papers
Chair: Z. Ras (UT Knoxville)
8:45am - 9:30am
"Analogical Reasoning in Planning and Decision Making"
J. Carbonell (Carnegie-Mellon Univ.)
9:30am - 10:15am
"Emerging Principles in Machine Learning"
R. Michalski (Univ. of Illinois at Urbana)
A5: Contributed Papers
Chair: D. Perlis (Maryland)
10:35am - 11:00am
"Memory Length as a Feedback Parameter in Learning Systems"
G. Epstein (UNC-Charlotte)
11:00am - 11:25am
"Experimenting and Theorizing in Theory Formation"
B. Koehn, J.M. Zytkow (Wichita State)
11:25am - 11:50am
"On Learning and Evaluation of Decision Rules in the Context
of Rough Sets"
S.K.M. Wong, W. Ziarko (Regina, Canada)
11:50am - 12:15pm
"Taxonomic Ambiguities in Category Variations Needed to Support
Machine Conceptualization"
L.J. Mazlack (Berkeley)
12:15pm - 12:40pm
"A Model for Self-Adaptation in a Robot Colony"
T.V.D.Kumar, N. Parameswaran (Madras, India)
12:45 pm - 2:00 pm Symposium Luncheon
Keynote Speaker: Joseph Deken (NSF)
"Viable Inference Systems"
Session 6: Knowledge Representation
I6: Invited Papers
Chair: S. Chen (UNC Charlotte)
2:15pm - 3:00pm
"Self-Improvement in Problem-Solving"
R.B. Banerji (St. Joseph's Univ.)
3:00pm - 3:45pm
"Logical Foundations for Knowledge Representation in
Intelligent Systems"
B.R. Gaines (Calgary, Canada)
A6: Contributed Papers
Chair: M. Togai (Rockwell International)
4:00pm - 4:25pm
"Simulations and Symbolic Explanations"
D.H. Helman, J.L. Bennett, A.W. Foster (Case Western Reserve)
4:25pm - 4:50pm
"Notes on Conceptual Representations"
E. Knuth, L. Hannak, A. Hernadi (Budapest, Hungary)
4:50pm - 5:15pm
"Spaceprobe: A System for Representing Complex Knowledge"
J. Dinsmore (Carbondale, Ill)
5:15pm - 5:40pm
"Challenges in Applying Artificial Intelligence Methodologies
to Military Operations"
L.F. Arrowood, M.L. Emrich, M.R. Hilliard, H.L. Hwang
(Oak Ridge National Lab.)
B6: Contributed Papers
Chair: L. de Mantaras (Barcelona, Spain)
4:00pm - 4:25pm
"Knowledge-Based Processing/Interpretation of Oceanographic
Satellite Data"
M.G. Thomason, R.E. Blake (UTK), M. Lybanon (NTSL)
4:25pm - 4:50pm
"A Framework for Knowledge Representation and use in Pattern
Analysis"
F. Bergadano, A. Giordana (Torino, Italy)
4:50pm - 5:15pm
"Algebraic Properties of Knowledge Representation Systems"
J.W. Grzymala-Busse (Lawrence, Kansas)
5:15pm - 5:40pm
"Prime Rule-based Methodologies Give Inadequate Control"
J.R.B. Cockett, J. Herrera (UTK)
ISMIS'86 Closing Session
5:45pm - 6:00pm
Saturday, October 25, 1986
==========================
9:00 am - 12:30 pm Colloquia (parallel sessions)
1:30 pm - 7:30 pm Trip to the Smoky Mountains
SYMPOSIUM FEES
Advance Symposium Registration
Received by September 15, 1986
Member of ACM $220.00
Non-member $250.00
Student* $ 30.00
Late or On-Site Registration
Member of ACM $265.00
Non-member $295.00
Student* $ 40.00
Additional Tickets
Reception $ 5.00
Dinner Theatre $ 25.00
Symposium Banquet $ 25.00
Symposium Luncheon $ 10.00
Trip to Smoky Mountains $ 25.00
Symposium registration fee includes the Proceedings (available at
the Symposium), continental breakfasts, reception, dinner theatre,
symposium banquet, symposium luncheon, and coffee breaks.
* Student registration includes only coffee breaks. Student
registration is limited, so students should register early.
ACCOMMODATIONS:
A block of rooms has been reserved for the symposium at the
Hilton Hotel. The ISMIS 86 rate is $47.00 for single occupancy
and $55.00 for double occupancy. To reserve your room, contact the
Hilton Hotel, 501 Church Avenue, S.W., Knoxville, TN 37902-2591,
telephone 615-523-2300 by September 30, 1986. The Hilton Hotel
will continue to accept reservations after this date on a space
availability basis at the ISMIS 86 rates. However, you are
strongly encouraged to make your reservations by the cutoff date
of September 30.
Reservations must be accompanied by a deposit of one night's room
rental.
TRANSPORTATION:
The Hilton Hotel provides free limousine service to and from
the airport.
Overnight guests arriving in their own vehicles receive free
parking.
SPECIAL AIRFARE RATES:
DELTA Airlines has been designated as the official carrier for the
Symposium. Attendees arranging flights with DELTA will receive a
35% discount off the regular coach fare to Knoxville. To take
advantage of this special rate, call (toll-free) 1-800-241-6760,
referring to FILE #J0170. This number is staffed from 8:00 a.m.
to 8:00 p.m. EDT, seven days per week.
GENERAL INFORMATION:
Knoxville is located in East Tennessee, an area noted for its
abundant water reservoirs, rivers, mountains, hardwood forests, and
wildlife refuges. The Great Smoky Mountains National Park, the
Cumberland Mountains, the resort city of Gatlinburg, and the Oak
Ridge Museum of Science and Energy are all within an hour's drive
of the downtown area. The fall season offers spectacular views
of radiant colors within the city and the surrounding countryside.
Interstates 40 and 75 provide access into Knoxville.
REGISTRATION FORM:
For the registration form, please write to
UTK Department of Conferences
2014 Lake Avenue
Knoxville, TN 37996-3910
FURTHER INFORMATION:
Further information can be obtained from:
Zbigniew W. Ras Maria Zemankova
Dept. of Computer Science Dept. of Computer Science
University of North Carolina University of Tennessee
Charlotte, NC 28223 Knoxville, TN 37996-1301
(704) 597-4567 (615) 974-5067
ras%unccvax@mcnc.CSNET zemankova@utenn.CSNET
------------------------------
End of AIList Digest
********************
∂18-Sep-86 0321 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #183
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 18 Sep 86 03:18:56 PDT
Date: Wed 17 Sep 1986 10:03-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #183
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 17 Sep 1986 Volume 4 : Issue 183
Today's Topics:
Queries - Space/Military Expert Systems & Communications/Control ES &
Structured Analysis References & NuBus-to-VME Adapter &
Robotic Cutting Arm & Schematics Drafter & Mechanical Engineering ES &
OPS5 & IJCAI-87 Net Address & Common Lisp Flavors &
Looping in Belief Revision System & 2-D Math Editor
----------------------------------------------------------------------
Date: 16 Aug 86 15:39:32 GMT
From: mcvax!ukc!reading!brueer!ckennedy@seismo.css.gov (C.M.Kennedy )
Subject: CONTACT REQUIRED: SPACE OR MILITARY EXPERT SYSTEMS
CONTACT REQUIRED: SPACE OR MILITARY EXPERT SYSTEMS
I wish to contact someone (reasonably senior) who has worked on an expert
system in one of the following areas:
1. Space technology - monitoring, control, planning
2. Military Science - of particular interest is:
- prediction, e.g. modelling behaviour of states or terrorist
organisations and making predictions based on available
knowledge
- interpretation of sensor data, i.e. integrating raw data
from multiple sensors and giving a high-level "user-friendly"
interpretation of what is going on.
I wish to obtain the following information:
1. Postal address and telephone number (along with email address).
If possible: Times of day (and days of the week) when telephone
contact is convenient.
2. Details of how to obtain the following documentation (or better
still a direct mailing of it if this is convenient):
- TECHNICAL papers describing the architecture, knowledge
representation, inference engine, tools, language, machine
etc.
- papers giving the precise REQUIREMENTS of the system. If
this is not possible, a short summary will do.
3. Was the project successful? Were all the original requirements
satisfied? Has the system been used successfully in an operational
environment?
4. What were the problems encountered and what has been learned from
the project?
I would also be interested to hear from someone who has done RESEARCH on
any of the above (or knows of someone who has).
Catriona Kennedy
Mail address: ckennedy@ee.brunel.ac.uk
------------------------------
Date: Mon 25 Aug 86 12:18:41-EDT
From: CAROZZONI@RADC-TOPS20.ARPA
Subject: Cooperative Expert System
The Decision Aids Section at Rome Air Development Center is performing an
in-house study to establish a technical baseline in support of an upcoming
(FY 87) procurement effort related to the design of a "cooperative" expert
system - i.e., one which supports both communication and more extensive,
knowledge-based cooperation between existing systems.
We are particularly interested in hearing about any work related to expert
system design, distributed AI, and models for communication and cooperation
that may be relevant to this effort. Please respond by net to
Hirshfield@RADC-multics, or write to Hirshfield at RADC/COAD, Griffiss Air
Force Base, NY 13441.
------------------------------
Date: Wed 10 Sep 86 15:49:36-CDT
From: Werner Uhrig <CMP.WERNER@R20.UTEXAS.EDU>
Subject: Communications Expert System - does anyone know more ?
[ from InfoWorld, Sep 8, page 16 ]
COMMUNICATIONS PROGRAM TO HELP NOVICES, EXPERTS
Smyrna, Ga - A communications software publisher said it will sell an on-line
expert system that helps computer users solve data communications problems and
work out idiosyncrasies in the interaction of popular communications hardware
and software.
Line Expert, which will sell for $49.95 when it is released October 1, will ask
users questions about their particular configuration and suggest solutions,
according to Nat Atwell, director of marketing for publisher Concept
Development Systems.
..........
------------------------------
Date: Mon, 8 Sep 86 22:25:30 cdt
From: Esmail Bonakdarian <bonak%cs.uiowa.edu@CSNET-RELAY.ARPA>
Subject: Expert Systems and Data Communication
I am working on my M.S. thesis which deals with the use of Expert Systems
in the area of Data Communications (e.g. help diagnose sources of
communication problems, help to "configure" components [DTE's and DCE's]
correctly, etc). I am curious to find out what knowledge based systems
(if any) exist that deal with this problem domain. I would very much
appreciate any pointers to literature or persons doing work in this area.
Thanks,
Esmail
------------------------------
Date: Wed, 20 Aug 86 9:45:15 EDT
From: Marty Hall <hall@hopkins-eecs-bravo.ARPA>
Subject: Wanted: References on Structured Analysis Inadequacies
We are looking for references that point out some of the inadequacies
of Structured Analysis methods (ala Yourdon, for instance) in a
Software Development Process for AI software. We have a couple of
references vouching for the utility of Rapid Prototyping and
Exploratory Programming (thanks, by the way, to those who pointed
me to some of these references), but not explicitly contrasting this
with the more traditional Structured Design/Analysis methods.
These references are needed by our AI group for a "Convince the Software
Managers" session. :-)
Any help greatly appreciated!
- Marty Hall
Arpa: hall@hopkins AI and Simulation Dept, MP E-315
UUCP: seismo!umcp-cs!aplcen!jhunix!ins←amrh Martin Marietta Baltimore Aerospace
103 Chesapeake Park Plaza
Baltimore, MD 21220
(301) 682-0917
------------------------------
Date: 15 Aug 86 14:32:00 GMT
From: pyrnj!mirror!datacube!berger@CAIP.RUTGERS.EDU
Subject: NuBus to VME adapter?
I figured this would be as good a place as any for the following question:
Anyone know of a NuBus to VMEbus adapter? Something to allow VMEbus
boards to plug into a NuBus? We want to be able to connect our
Image Processing boards into things like the TI explorer and LMI machines.
Bob Berger
Datacube Inc. 4 Dearborn Rd. Peabody, Ma 01960 617-535-6644
ihnp4!datacube!berger
{seismo,cbosgd,cuae2,mit-eddie}!mirror!datacube!berger
------------------------------
Date: Thu, 11 Sep 86 10:44 MST
From: McGuire@HIS-PHOENIX-MULTICS.ARPA
Subject: robotics query: cutting arm
Could anyone give possible sources for a robotic arm, to be attached to
a CAD/CAM system (such as Auto-Cad), driven by a micro, such as a PC/AT?
This arm would be used to cut stencils, maximum 3 feet diameter, so it
would have to be very strong or complex. Canadian sources preferred.
Thanks. M.McGuire, Calgary, Alberta.
------------------------------
Date: 19 Aug 86 12:41:39 edt
From: Raul Valdes-Perez <valdes@ht.ai.mit.edu>
Subject: schematics drafting request
I have designed and programmed a non-rule-based KBES that drafts
the schematic of a digital circuit (actually only the placement
part). To have an objective measure of the ability of this program,
I would like to compare its output with that of any other (perhaps
algorithmic) schematics drafter. I expect that a large CAD circuit
design package would have something like this.
Can anyone help me obtain access to such a drafter? (Please note
that this has little to do with a schematic *entry* program or
with a VLSI *layout* program.)
Thanks in advance.
Raul E. Valdes-Perez or (valdes@mit-htvax.arpa)
MIT AI Lab, Room 833
545 Technology Square
Cambridge, MA 02139
------------------------------
Date: Wed, 3 Sep 86 08:16 CDT
From: Bennett@HI-MULTICS.ARPA
Subject: Looking for Expert Systems for Mechanical Engineering
A friend of mine is looking for pointers to work done in Expert Systems
for Mechanical Engineering --- specifically in the area of
Mechanical Design.
If anyone has any information that would help please send it directly
to me as Bennett at HI-Multics.
Bonnie Bennett (612)782-7381
------------------------------
Date: Mon, 25 Aug 86 22:54 EDT
From: EDMUNDSY%northeastern.edu@CSNET-RELAY.ARPA
Subject: Any OPS5 in PC ?
Does anyone know whether there is any OPS5 software package available for the PC?
I would like to know where I can find it. Thanks!!!
------------------------------
Date: 27 Aug 86 10:49:18 GMT
From: ulysses!mhuxr!aluxp!prieto@ucbvax.Berkeley.EDU (PRIETO)
Subject: ATT 3B2/400, 3B5 CMU - OPS5
What are the OPS5 requirements to be used in small machines like
3B2/400, 3B5 vs. VAX 11/780? Storage, memory, etc. Is there OPS5
software executing in these types of machines? Can software development
for an expert system application be done in the smaller machines or is a
VAX needed?
aluxp!prieto
(215)770-3285
ps. I am interested in getting OPS 5 - where could I obtain it?
------------------------------
Date: 13 Aug 86 04:48:39 GMT
From: ucbcad!nike!lll-crg!micropro!ptsfa!jeg@ucbvax.berkeley.edu (John Girard)
Subject: IJCAI-87 ... usenet contacts
I am looking for a usenet or usenet-compatible connection by which
I can inquire about the IJCAI program, ground rules and deadlines.
Please respond to
[ihnp4,dual,pyramid,cbosgd,bellcore,qantel]ptsfa!jeg
John Girard
USA: 415-823-1961 415-449-5745
------------------------------
Date: Mon, 8 Sep 86 18:33:09 -0100
From: mcvax!csinn!solvay@seismo.CSS.GOV (Jean Philippe Solvay)
Subject: flavors and Common Lisp
Hi Kenneth,
Do you know if there is any implementation of flavors in Common Lisp currently
available (public domain, if possible)?
Thanks in advance,
Jean-Philippe Solvay.
inria!csinn!solvay@mcvax.UUCP
------------------------------
Date: Mon, 08 Sep 86 16:48:15 -0800
From: Don Rose <drose@CIP.UCI.EDU>
Subject: TMS, DDB and infinite loops
Does anyone know whether the standard algorithms for belief revision
(e.g. dependency-directed backtracking in TMS-like systems) are
guaranteed to halt? That is, is it possible for certain belief networks
to be arranged such that no set of mutually consistent beliefs can be found
(without outside influence)? --Donald Rose
drose@ics.uci.edu
ICS Dept
Irvine CA 92717
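[For what it's worth, part of the answer is that some belief networks
admit no consistent labeling at all, the classic "odd loop" being a
belief justified by its own absence. A brute-force sketch follows
(an editor's illustration in Python, assuming a simple IN/OUT labeling
semantics for nonmonotonic justifications; it is not the incremental
algorithm an actual TMS runs):]

```python
# Enumerate IN/OUT labelings of a justification network and keep the
# consistent ones.  A belief is IN exactly when some justification has
# all of its in-support IN and all of its out-support OUT.
from itertools import product

# Each belief maps to its justifications: (in_support, out_support).
ODD_LOOP = {"A": [((), ("A",))]}   # A holds iff A does not hold
EVEN_LOOP = {"A": [((), ("B",))],  # A holds iff B does not, and
             "B": [((), ("A",))]}  # vice versa: two stable labelings

def consistent_labelings(network):
    beliefs = sorted(network)
    found = []
    for bits in product([True, False], repeat=len(beliefs)):
        label = dict(zip(beliefs, bits))
        ok = all(
            label[b] == any(
                all(label[x] for x in ins) and
                not any(label[x] for x in outs)
                for ins, outs in justs
            )
            for b, justs in network.items()
        )
        if ok:
            found.append(label)
    return found

print(consistent_labelings(ODD_LOOP))       # -> []  (no consistent labeling)
print(len(consistent_labelings(EVEN_LOOP))) # -> 2   (two stable labelings)
```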
------------------------------
Date: 0 0 00:00:00 PDT
From: "LLLASD::GARBARINI" <garbarini%lllasd.DECNET@lll-crg.arpa>
Reply-to: "LLLASD::GARBARINI" <garbarini@lllasd.decnet>
Subject: Availability of interactive 2-d math editing interfaces...
I am working with a number of other people on a project called Automatic
Programming for Physics. The goal is to build an AI based automatic
programming system to aid scientists in building numerical
simulations of physical systems.
In the user interface to the system we would like to have interactive
editing of mathematical expressions in two-dimensional form.
It seems a number of people have recently made much progress in this
area. (See C. Smith and N. Soiffer, "MathScribe: A User Interface for
Computer Algebra Systems," Conference Proceedings of Symsac 86, (July,
1986) and B. Leong, "Iris: Design of a User Interface Program for
Symbolic Algebra," Proc. 1986 ACM-SIGSAM Symposium on Symbolic and
Algebraic Manipulation, July 1986.)
Not wishing to reinvent the wheel, I'd appreciate receiving information
regarding the availability of any such interface.
Joe P. Garbarini Jr.
Lawrence Livermore National Lab
P. O. Box 808 , L-308
7000 East Avenue
Livermore Ca. , 94550
(415)-423-2808
arpanet address: GARBARINI%LLLASD.DECNET@LLL-CRG.ARPA
------------------------------
End of AIList Digest
********************
∂18-Sep-86 1518 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #184
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 18 Sep 86 15:17:49 PDT
Date: Thu 18 Sep 1986 11:48-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #184
To: AIList@SRI-STRIPE
AIList Digest Thursday, 18 Sep 1986 Volume 4 : Issue 184
Today's Topics:
Correction - Conference on Office Information Systems,
AI Tools - Interlisp vs. C,
Queries - NL Grammar & Unix Software,
Education - AI Schools,
AI Tools - Turbo Prolog
----------------------------------------------------------------------
Date: Wed, 17 Sep 86 14:13:52 cdt
From: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece)
Subject: Correction - Conference on Office Information Sys
> From: rba@petrus.bellcore.com (Robert B. Allen)
> Subject: Conference on Office Information Systems - Brown U.
>
>
> ACM CONFERENCE ON OFFICE INFORMATION SYSTEMS
> October 6-8, 1968, Providence, R.I.
↑
Gee, I didn't join the ACM until 1970, but I didn't think
they had invented "Office Information Systems" then...
--
scott preece
gould/csd - urbana
uucp: ihnp4!uiucdcs!ccvaxa!preece
arpa: preece@gswd-vms
------------------------------
Date: 16 Sep 86 13:07 EDT
From: Denber.wbst@Xerox.COM
Subject: Re: Reimplementing in C
"Such things are very awkward, if not impossible to express in the
typical AI languages"
Well, maybe I've been using an atypical AI language, but Interlisp-D has
all that stuff - byte I/O, streams, timers, whatever. It's real e-z to
use. Check it out.
- Michel
------------------------------
Date: Thu, 14 Aug 86 11:58 EDT
From: EDMUNDSY%northeastern.edu@CSNET-RELAY.ARPA
Subject: Looking for Production Rules for English Grammar
Does anyone know where I can find information (or existing results) on
transforming an English grammar (or a simplified subset) into production rules
of a regular, context-free, or context-sensitive grammar? For example,
Sentences --> Noun Verb Noun etc.
If anyone has any information on that, I would appreciate a pointer to it.
Thanks!! I can be contacted by any of the
following means:
NET: EDMUNDSY@NORTHEASTERN.EDU
ADD: Sy, Bon-Kiem
Northeastern University
Dept. ECE DA 409
Boston, MA 02115
Phone: (617)-437-5055
Bon Sy
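[The production rules asked about above can be made concrete with a
toy context-free grammar and a recognizer. This is an editor's sketch
in Python; the grammar is a hypothetical simplified English subset,
not a claim about full English:]

```python
# A tiny context-free grammar for a simplified English subset, plus a
# recursive recognizer that tries each production rule in turn.
GRAMMAR = {
    "S":    [["NP", "VP"]],
    "NP":   [["Noun"]],
    "VP":   [["Verb", "NP"]],
    "Noun": [["dogs"], ["cats"]],
    "Verb": [["chase"]],
}

def parse(symbol, words, i):
    """Return positions reachable after deriving `symbol` from words[i:]."""
    if symbol not in GRAMMAR:            # terminal: match one word
        return [i + 1] if i < len(words) and words[i] == symbol else []
    ends = []
    for production in GRAMMAR[symbol]:   # try each production rule
        positions = [i]
        for part in production:          # thread positions through the RHS
            positions = [j for p in positions
                         for j in parse(part, words, p)]
        ends.extend(positions)
    return ends

def accepts(sentence):
    """A sentence is accepted if S derives exactly all of its words."""
    words = sentence.split()
    return len(words) in parse("S", words, 0)

print(accepts("dogs chase cats"))   # -> True
print(accepts("chase dogs"))        # -> False
```

The same grammar written as Prolog DCG rules or BNF would be a direct
transcription of the production rules shown in the query.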
------------------------------
Date: Mon, 8 Sep 86 18:32:06 edt
From: brant@linc.cis.upenn.edu (Brant A. Cheikes)
Subject: Unix Consultant references?
I'm looking for the most recent reports by the group working
on the Unix Consultant project at UC Berkeley. Does anybody
know what that is, and is there a network address to which
report requests can be sent? The ref I was given was UCB
report CSD 87/303, but I'm not sure if it's available or even
recent. Any information in this vein would be appreciated.
------------------------------
Date: 12 Sep 86 00:32:34 GMT
From: micropro!ptsfa!jeg@lll-crg.arpa (John Girard)
Subject: AI tools/products in UNIX
Greetings,
I am looking for any information I can get on Artificial Intelligence
tools and products in the UNIX environment. I will compile and publish
the results in net.ai. Please help me out with any of the following:
versions of LISP and PROLOG running in unix
expert system shells available in unix
expert system and natural language products that have been developed
in the unix environment, both available now and in R&D, especially ones
that relate to unix problem domains (sys admin, security).
Reply to: John Girard
415-823-1961
[ihnp4,dual,cbosgd,nike,qantel,bellcore]!ptsfa!jeg
P.S. Very interested in things that run on less horsepower than a SUN.
------------------------------
Date: Sat, 13 Sep 86 13:44:47 pdt
From: ucsbcsl!uncle@ucbvax.Berkeley.EDU
Subject: request for core nl system code
We are looking for a core nl system which we can tailor and
extend. There is as yet little comp.ling activity at UCSB,
so we have no local sources. We are interested in developing
a system which can be used in foreign language education, hence
we would need a system in which the "syntactic components"
are such that we could incrementally mung the system into
speaking German or French or Russian without having to
redesign the system. My knowledge in this area is fuzzy
(not 'Fuzzy(tm)' etc., just fuzzy!).
I have read a little about systems such as the Phran component of the
Wilensky et al. project called unix-consultant, and I
understand that the approach taken there is susceptible
to generalization to other languages by entering a new
database of pattern-action pairs (i.e. an EXACT parse of
a syntactically admissible sentence is not required). Unfortunately,
Berkeley CS is not currently giving access to components of that system.
Does anyone have pointers to available code for systems
that fall into that part of the syntax-semantics spectrum?
Is it, in fact, reasonable for us to seek such a system as
a tool, or are we better advised to start with car and cdr ????
------------------------------
Date: 19 Aug 86 19:29:25 GMT
From: decvax!dartvax!kapil@ucbvax.Berkeley.EDU (Kapil Khetan)
Subject: Where can one do an off-campus Ph.D. in AI/ES
After graduating from Dartmouth, with an MS in
Computer & Information Science, I have been residing and working
in New York City.
I am interested in continuing education and think
Expert Systems is a nice field to learn more about. I took
a ten week course in which we dabbled in Prolog and M1.
If any of you know of a college in the area (Columbia,
NYU, PACE) which has something like it, or any other college
anywhere else which has an off-campus program, please hit the 'r' key.
Thank-you.
Kapil Khetan
Chemical Bank, 55 Water St., New York, NY 10041
------------------------------
Date: 25 Aug 86 18:27:08 GMT
From: ihnp4!gargoyle!sphinx!bri5@ucbvax.Berkeley.EDU (Eric Brill)
Subject: Grad Schools
Hello. I am planning on entering graduate school next year. I was wondering
what schools are considered the best in Artificial Intelligence (specifically
in language comprehension and learning). I would be especially interested
in your opinions as to which schools would be considered the top 10.
Thank you very much.
Eric Brill
ps, if there is anybody else out there interested in the above, send me mail,
and I will forward all interesting replies.
------------------------------
Date: Fri, 12 Sep 86 15:35 CDT
From: PADIN%FNALB.BITNET@WISCVM.WISC.EDU
Subject: ADVICE ON ENTERING THE AI COMMUNITY
As a newcomer to the AI arena, I am compelled to ask some
fundamentally novice-sounding (and, as such, sometimes ridiculous)
questions. Nonetheless, here goes.
If one were to attempt to enter the AI field, what are the
basic requirements; what are some special requirements?
With a BS in physics, is further schooling mandatory? Are there
particular schools which I should consider or ones I should
avoid? Are there books which I MUST read?! As a 29-year-old
with a Math and Physics background, am I hopelessly over-the-hill
for such musings to become a reality? Are there questions which I
should be asking?
If you care to answer in private I can be reached at:
PADIN@FNALB.BITNET
------------------------------
Date: 2 Sep 86 21:44:00 GMT
From: pyrnj!mirror!prism!mattj@CAIP.RUTGERS.EDU
Subject: Re: Grad Schools
Eric Brill:
Here is my own personal ranking of general AI programs:
Stanford
MIT
Carnegie-Mellon
UIllinois@Urbana
URochester
Also good: UMaryland, Johns Hopkins, UMass@Amherst, ... can't think now.
[...]
- Matthew Jensen
------------------------------
Date: 6 Sep 86 02:52:06 GMT
From: ubc-vision!ubc-cs!andrews@UW-BEAVER.ARPA (Jamie Andrews)
Subject: Re: Grad Schools (Rochester?)
I've heard that Rochester has quite a good AI / logic
programming program, and it definitely has some good people...
but can anyone tell me what it's like to LIVE in Rochester?
Or is the campus far enough from Rochester that it doesn't
matter? Please respond (r or R) to me rather than to the net.
Adv(merci)ance,
--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"Hundred million bottles washed up on the shore"
------------------------------
Date: 3 Sep 86 21:20:31 GMT
From: mcvax!prlb2!lln-cs!pv@seismo.css.gov (Patrick Vandamme)
Subject: Bug in Turbo Prolog
I am testing the `famous' Turbo Prolog software and, after all the good things
that I heard about it, I was very surprised to have problems with the first
large program I tried. I include the program here. It finds all the relations
between a person and his family. But for some people, it answers with a lot
of strange characters. I think there must be a dangling pointer somewhere.
Note that this happens only with large programs!
Has anyone seen the same result?
(For the strange characters, try with "veronique".)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/*
+-----------------------------------------------------+
|   Program for managing a database of                |
|   family relationships.                             |
+-----------------------------------------------------+
P. Vandamme - Unite Info - UCL - August 1986
*/
[Deleted due to length. See following message for an explanation
of the problem. -- KIL]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
Patrick Vandamme
Unite d'Informatique UUCP : (prlb2)lln-cs!pv
Universite Catholique de Louvain Phone: +32 10 43 24 15
Place Sainte-Barbe, 2 Telex: 59037 UCL B
B-1348 Louvain-La-Neuve Eurokom: Patrick Vandamme UCL
Belgium Fax : +32 10 41 56 47
------------------------------
Date: 8 Sep 86 23:08:05 GMT
From: clyde!cbatt!cbuxc!cbuxb!cbrma!clh@CAIP.RUTGERS.EDU (C.Harting)
Subject: Re: Bug in Turbo Prolog
I purchased Turbo Prolog Friday night, and immediately tried to compile the
GeoBase program on my Tandy 1000 (384K). I could not even create a .OBJ file
on my machine, so I compiled it on a 640K AT&T PC6300. Caveat No. 1: large
programs need large amounts of memory. I compiled Patrick's "programme de
gestion" to disk and it ran flawlessly (I think -- this is my first lesson in
French!). BUT when compiled to memory, I got the same errors as Patrick.
Caveat No. 2: compile large programs to disk and run standalone. And, Caveat
No. 3: leave out as many memory-resident programs as you can stand when
booting the machine to run Turbo Prolog.
'Nuff said?
===============================================================================
Chris Harting "Many are cold, few are frozen."
AT&T Network Systems Columbus, Ohio
The Path (?!?): cbosgd!cbrma!clh
------------------------------
Date: 11 Sep 86 18:40:05 GMT
From: john@unix.macc.wisc.edu (John Jacobsen)
Subject: Re: Re: Bug in Turbo Prolog
I got the "Programme de Gestion de Base de Donnees" to work fine... on an
AT with a meg of memory. I think Patrick Vandamme just ran out of memory,
because his code is immaculate.
John E. Jacobsen
University of Wisconsin -- Madison Academic Computing Center
------------------------------
Date: Tue, 16 Sep 86 17:26 PDT
From: jan cornish <cornish@RUSSIAN.SPA.Symbolics.COM>
Subject: Turbo Prolog
I've heard some chilling things about Turbo Prolog, such as:
1) The programmer must not only declare each predicate, but also whether
each parameter to the predicate (not correct terminology) is input
or output. This means you can't write relational predicates like
grandfather.
2) The backtracking is not standard.
3) "You can do any thing in Turbo Prolog that you can do in Turbo Pascal"
I want to hear from the LP community on Turbo Prolog as to its ultimate
merit. Something beyond the dismissive flames.
Thanks in advance,
Jan
------------------------------
End of AIList Digest
********************
∂18-Sep-86 1931 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #185
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 18 Sep 86 19:31:21 PDT
Date: Thu 18 Sep 1986 12:40-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #185
To: AIList@SRI-STRIPE
AIList Digest Friday, 19 Sep 1986 Volume 4 : Issue 185
Today's Topics:
Query - Connectionist References,
Cognitive Psychology - Connectionist Learning,
Review - Notes on AAAI '86
----------------------------------------------------------------------
Date: 21 Aug 86 12:11:25 GMT
From: lepine@istg.dec.com@decwrl.dec.com (Normand Lepine 225-6715)
Subject: Connectionist references
I am interested in learning about the connectionist model and would appreciate
any pointers to papers, texts, etc. on the subject. Please mail references to
me and I will compile and post a bibliography to the net.
Thanks for your help,
Normand Lepine
uucp: ...!decwrl!cons.dec.com!lepine
ARPA: lepine@cons.dec.com
lepine%cons.dec.com@decwrl.dec.com (without domain servers)
------------------------------
Date: 22 Aug 86 12:04:30 GMT
From: mcvax!ukc!reading!brueer!ckennedy@seismo.css.gov (C.M.Kennedy )
Subject: Re: Connectionist Expert System Learning
The following is a list of the useful replies received so far:
Date: Wed, 30 Jul 86 8:56:08 BST
From: Ronan Reilly <rreilly%euroies@reading.ac.uk>
Sender: rreilly%euroies@reading.ac.uk
Subject: Re: Connectionist Approaches To Expert System Learning
Hi,
What you're looking for, effectively, are attempts to implement
production systems within a connectionist framework. Researchers
are making progress, slowly but surely, in that direction. The
most recent paper I've come across in the area is:
Touretzky, D. S. & Hinton, G. E. (1985). Symbols among the neurons:
details of a connectionist inference architecture. In
Proceedings IJCAI '85, Los Angeles.
I've a copy of this somewhere. So if the IJCAI proceedings don't come
to hand, I'll post it onto you.
There are two books which are due to be published this year, and they
are set to be the standard reference books for the area:
Rumelhart, D. E. & McClelland, J. L. (1986). Parallel distributed
processing: Explorations in the microstructure of cognition.
Vol. 1: Foundations. Cambridge, MA: Bradford Books.
Rumelhart, D. E. & McClelland, J. L. (1986). Parallel distributed
processing: Explorations in the microstructure of cognition.
Vol. 2: Applications. Cambridge, MA: Bradford Books.
Another good source of information on the localist school of
connectionism is the University of Rochester technical report series.
They have one report which lists all their recent connectionist
reports. The address to write to is:
Computer Science Department
The University of Rochester
Rochester, NY 14627
USA
I've implemented a version of the Rochester ISCON simulator in
Salford Lisp on our Prime 750. The simulator is a flexible system
for building and testing connectionist models. You're welcome to
a copy of it. Salford Lisp is a Maclisp variant.
Regards,
Ronan
...mcvax!euroies!rreilly
Date: Sat, 2 Aug 86 09:33:46 PDT
From: Mike Mozer <mozer%ics.ucsd.edu@reading.ac.uk>
Subject: Re: Connectionist Approaches To Expert System Learning
I've just finished a connectionist expert system paper, which I'd be glad
to send you if you're interested (need an address, though).
Here's the abstract:
RAMBOT: A connectionist expert system that learns by example
Expert systems seem to be quite the rage in Artificial Intelligence, but
getting expert knowledge into these systems is a difficult problem. One
solution would be to endow the systems with powerful learning procedures
which could discover appropriate behaviors by observing an expert in action.
A promising source of such learning procedures
can be found in recent work on connectionist networks, that is, massively
parallel networks of simple processing elements. In this paper, I discuss a
Connectionist expert system that learns to play a simple video game by
observing a human player. The game, Robots, is played on a two-dimensional
board containing the player and a number of computer-controlled robots. The
object of the game is for the player to move around the board in a
manner that will force all of the robots to collide with one another
before any robot is able to catch the player. The connectionist system
learns to associate observed situations on the board with observed
moves. It is capable not only of replicating the performance of the
human player, but of learning generalizations that apply to novel
situations.
Mike Mozer
mozer@nprdc.arpa
Date: Fri, 8 Aug 86 18:53:57 edt
From: Tom Frauenhofer <tfra%ur-tut@reading.ac.uk>
Subject: Re: Connectionist Approaches To Expert System Learning
Organization: U. of Rochester Computing Center
Catriona,
I am (slightly) familiar with a thesis by Gary Cottrell of the U of R here
that dealt with a connectionist approach to language understanding. I believe
he worked closely with a psychologist to figure out how people understand
language and words, and then tried to model the behavior in a connectionist
framework. You should be able to get a copy of the thesis from the Computer
Science Department here. It's not expert systems, but it is fascinating.
- Tom Frauenhofer
...!seismo!rochester!ur-tut!tfra
From sandon@ai.wisc.edu Sat Aug 9 17:25:29 1986
Date: Fri, 8 Aug 86 11:38:43 CDT
From: Pete Sandon <sandon%ai.wisc.edu@reading.ac.uk>
Subject: Connectionist Learning
Hi,
You may have already received this information, but I will pass it
along anyway. Steve Gallant, at Northeastern University, has done some
work on using a modified perceptron learning algorithm for expert
system knowledge acquisition. He has written a number of tech reports
in the last few years. His email address is: sig@northeastern.csnet.
His postal address is: Steve Gallant
College of Computer Science
Boston, MA. 02115
--Pete Sandon
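[Gallant's modifications are described in his tech reports; for readers who
haven't met the unmodified rule he starts from, here is a minimal sketch of
classic perceptron learning. The Python code below is an illustration only,
not Gallant's algorithm.]

```python
# Classic perceptron learning rule (the textbook starting point for
# modified algorithms such as Gallant's; a toy sketch only).

def train_perceptron(examples, epochs=100):
    """Learn weights for +/-1 inputs and +/-1 targets."""
    n = len(examples[0][0])
    w = [0.0] * n       # one weight per input
    b = 0.0             # bias
    for _ in range(epochs):
        converged = True
        for x, t in examples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if y != t:  # misclassified: move the boundary toward the example
                w = [wi + t * xi for wi, xi in zip(w, x)]
                b += t
                converged = False
        if converged:
            break
    return w, b

# Learn logical AND over +/-1 values:
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Since AND is linearly separable, the perceptron convergence theorem
guarantees this loop terminates with a correct classifier.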
------------------------------
Date: 17 Aug 86 22:08:30 GMT
From: ix133@sdcc6.ucsd.EDU (Catherine L. Harris)
Subject: Q: How can structure be learned? A: PDP
[Excerpted from the NL-KR Digest by Laws@SRI-STRIPE.]
[Forwarded from USENET net.nlang]
[... The following portion discusses connectionist learning. -- KIL]
One Alternative to the Endogenous Structure View
Jeffrey Goldberg says (in an immediately preceding article) [in net.nlang -B],
> Chomsky has set himself up asking the question: "How can children,
> given a finite amount of input, learn a language?" The only answer
> could be that children are equipped with a large portion of language to
> begin with. If something is innate then it will show up in all
> languages (a universal), and if something is unlearnable then it, too,
> must be innate (and therefore universal).
The important idea behind the nativist and language-modularity
hypotheses is that language structure is too complex, time is too
short, and the form of the input data (i.e., parents' speech to
children) is too degenerate for the target grammar to be learned.
Several people (e.g., Steven Pinker of MIT) have bolstered this
argument with formal "learnability" analyses: you make an estimate of
the power of the learning mechanism, make assumptions about factors in
the learning situation (e.g., no negative feedback) and then
mathematically prove that a given grammar (a transformational grammar,
or a lexical functional grammar, or whatever) is unlearnable.
My problem with these analyses -- and with nativist assumptions in
general -- is that they aren't considering a type of learning mechanism
that may be powerful enough to learn something as complex as a grammar,
even under the supposedly impoverished learning environment a child
encounters. The mechanism is what Rumelhart and McClelland (of UCSD)
call the PDP approach (see their just-released Parallel
Distributed Processing: Explorations in the Microstructure of
Cognition, from MIT Press).
The idea behind PDP (and other connectionist approaches to explaining
intelligent behavior) is that inputs from hundreds/thousands/millions
of information sources jointly combine to specify a result. A
rule-governed system is, according to this approach, best represented
not by explicit rules (e.g., a set of productions or rewrite rules) but
by a large network of units: input units, internal units, and output
units. Given any set of inputs, the whole system iteratively "relaxes"
to a stable configuration (e.g., the soap bubble relaxing to
a parabola, our visual system finding one stable interpretation of
a visual illusion).
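[To make "relaxing to a stable configuration" concrete, here is a toy
Hopfield-style network in Python -- a minimal sketch for illustration, not
a model from the PDP volumes. One pattern is stored in symmetric Hebbian
weights, and repeated unit updates settle a corrupted input back onto it.]

```python
import random

# Toy Hopfield-style network: a pattern stored in symmetric Hebbian
# weights; asynchronous unit updates "relax" a corrupted state back
# to the stored pattern (illustrative sketch only).

def store(patterns, n):
    """Hebbian weight matrix for +/-1 patterns of length n (zero diagonal)."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def relax(w, state, sweeps=10, seed=0):
    """Update units one at a time until no unit wants to change."""
    rng = random.Random(seed)
    state = list(state)
    n = len(state)
    for _ in range(sweeps):
        changed = False
        for i in rng.sample(range(n), n):   # visit units in random order
            s = sum(w[i][j] * state[j] for j in range(n))
            new = 1 if s >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:                     # stable configuration reached
            break
    return state

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = store([pattern], len(pattern))
noisy = list(pattern)
noisy[0] = -noisy[0]                        # corrupt one unit
settled = relax(w, noisy)
```

With a single stored pattern and one flipped unit, every unit's summed
input still points toward the stored value, so the corrupted bit flips
back and the network settles on the original pattern.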
While many/most people accept the idea that constraint-satisfaction
networks may underlie phenomena like visual perception, they are more
reluctant to see their application to language processing or language
acquisition. There are currently (in the Rumelhart and McClelland
work -- and I'm sure you cognitive science buffs have already rushed
to your bookstore/library!) two convincing PDP models on language,
one on sentence processing (case role assignment) and the other on
children's acquisition of past-tense morphology. While no one has yet
tried to use this approach to explain syntactic acquisition, I see this
as the next step.
For people interested in hard empirical, cross-linguistic data that
supports a connectionist, non-nativist, approach to acquisition, I
recommend *Mechanisms of Language Acquisition*, Brian MacWhinney, Ed.,
in press.
I realize I rushed so fast over the explanation of what PDP is that
people who haven't heard about it before may be lost. I'd like to see
a discussion on this -- perhaps other people can talk about the brand
of connectionism they're encountering at their school/research/job and
what they think its benefits and limitations are -- in
explaining the psycholinguistic facts or just in general.
Cathy Harris "Sweating it out on the reaction time floor -- what,
when you could be in that ole armchair theo-- ? Never mind;
it's only til 1990!"
------------------------------
Date: 21 Aug 86 11:28:53 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.Berkeley.EDU (B.KORT)
Subject: Notes on AAAI '86
Notes on AAAI
Barry Kort
Abstract
The Fifth Annual AAAI Conference on Artificial Intelligence
was held August 11-15 at the Philadelphia Civic Center.
These notes record the author's personal impressions of the
state of AI, and the business prospects for AI technology.
The views expressed are those of the author and do not
necessarily reflect the perspective or intentions of other
individuals or organizations.
* * *
The American Association for Artificial Intelligence held
its Fifth Annual Conference during the week of August 11,
1986, at the Philadelphia Civic Center.
Approximately 5000 attendees were treated to the latest
results of this fast growing field. An extensive program of
tutorials enabled the naive beginner and technical
professional alike to rise to a common baseline of
understanding. Research and Science Sessions concentrated on
the theoretical underpinnings, while the complementary
Engineering Sessions focused on reduction of theory to
practice.
Dr. Herbert Schorr of IBM delivered the Keynote Address.
His message was simple and straightforward: AI is here
today, it's real, and it works. The exhibit floor was a sea
of high-end workstations, running flashy applications
ranging from CAT scan imagery to automated fault diagnosis,
to automated reasoning, to 3-D scene animation, to
iconographic model-based reasoning. Symbolics, TI, Xerox,
Digital, HP, Sun, and other vendors exhibited state of the
art hardware, while Intellicorp, Teknowledge, Inference,
Carnegie-Mellon Group, and other software houses offered
knowledge engineering power tools that make short work of
automated reasoning.
Knowledge representation schemata include the ubiquitous tree,
as well as animated iconographic models of dynamic systems.
Inductive and deductive reasoning and goal-directed logic
appear in the guise of forward and backward chaining
algorithms which seek the desired chain of nodes linking
premise to predicted conclusion or hypothesis to observed
symptoms. Such schemata are especially well adapted to
diagnosis of ills, be it human ailment or machine
malfunction.
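[The chaining described here is easy to see in miniature. The Python sketch
below is a toy backward chainer of the reviewer's own devising, not any
exhibited product: starting from a goal, it recursively seeks rules whose
premises can themselves be proved, bottoming out in known facts.]

```python
# Minimal backward chainer: work from a goal (the predicted conclusion)
# back through if-then rules to known facts (the observed symptoms).
# Rule base and symptoms are invented for illustration.

RULES = [
    (["fever", "cough"], "flu"),     # if fever and cough then flu
    (["flu"], "stay-home"),          # if flu then stay-home
    (["sneezing"], "cold"),
]

def prove(goal, facts, rules=RULES):
    """Return True if goal follows from facts by backward chaining."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(prove(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

facts = {"fever", "cough"}
assert prove("stay-home", facts)     # fever & cough -> flu -> stay-home
assert not prove("cold", facts)      # no rule chain reaches "cold"
```

A forward chainer runs the same rules in the other direction, from facts
toward whatever conclusions they support.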
Natural Language understanding remains a hard problem, due
to the inscrutable ambiguity of most human-generated
utterances. Nevertheless, silicon can diagram sentences as
well as a precocious fifth grader. In limited domain
vocabularies, the semantic content of such diagrammatic
representations can be reliably extracted.
Robotics and vision remain challenging fields, but advances
in parallel architectures may clear the way for notable
progress in scene recognition.
Qualitative reasoning, model-based reasoning, and reasoning
by analogy still require substantial human guidance, perhaps
because of the difficulty of implementing the interdomain
pattern recognition which humans know as analogy, metaphor,
and parable.
Interesting philosophical questions abound when AI moves
into the fields of automated advisors and agents. Such
systems require the introduction of Value Systems, which may
or may not conflict with individual preferences for
benevolent ethics or hard-nosed business pragmatics. One
speaker chose the provocative title, "Can Machines Be
Intelligent If They Don't Give a Damn?" We may be on the
threshold of Artificial Intelligence, but we have a long way
to go before we arrive at Artificial Wisdom. Nevertheless,
some progress is being made in reducing to practice such
esoteric concepts as Theories of Equity and Justice, leading
to the possibility of unbiased Jurisprudence.
AI goes hand in hand with Theories of Learning and
Instruction, and the field appears to be paying dividends in
the art and practice of knowledge exchange, following the
strategy first suggested by Socrates some 2500 years ago.
The dialogue format abounds, and mixed initiative dialogues
seem to capture the essence of mutual teaching and
mirroring. Perhaps sanity can be turned into an art form
and a science.
Belief Revision and Truth Maintenance enable systems to
unravel confusion caused by the injection of mutually
inconsistent inputs. Nobody's fool, these systems let the
user know that there's a fib in there somewhere.
Psychology of computers becomes an issue, and the Silicon
Syndrome of Neuroses can be detected whenever the machines
are not taught how to think straight. Machines are already
sapient. Soon they will acquire sentience, and maybe even
free will (nothing more than a random number generator
coupled with a value system). Perhaps by the end of the
Millennium (just 14 years away), the planet will see its
first Artificial Sentient Being. Perhaps Von Neumann knew
what he was talking about when he wrote his cryptic volume
entitled, On the Theory of Self-Reproducing Automata.
There were no Cybernauts in Philadelphia this year, but many
of the piece parts were in evidence. Perhaps it is just a
matter of time until the Golem takes its first step.
In the mean time, we have entered the era of the Competent
System, somewhat short on world-class expertise, but able to
hold its own in today's corporate culture. It learns about
as fast as its human counterpart, and is infinitely
clonable.
Once upon a time it was felt that machines should work and
people should think. Now that machines can think, perhaps
people can take more time to enjoy the state of being called
Life.
* * *
Lincroft, NJ
August 17, 1986
------------------------------
End of AIList Digest
********************
∂18-Sep-86 2245 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #186
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 18 Sep 86 22:45:22 PDT
Date: Thu 18 Sep 1986 12:55-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #186
To: AIList@SRI-STRIPE
AIList Digest Friday, 19 Sep 1986 Volume 4 : Issue 186
Today's Topics:
Cognitive Science - Commentaries on the State of AI
----------------------------------------------------------------------
Date: 29 Aug 86 01:58:30 GMT
From: hsgj@tcgould.tn.cornell.edu (Mr. Barbecue)
Subject: Re: Notes on AAAI '86
(not really a followup article, more of a commentary)
I find it very interesting that there is so much excitement generated over
parallel processing computer systems by the AI community. Interesting in
that the problems of AI (the intractability of: language, vision, and general
cognition to name a few) are not anywhere near limited by computational
power but by our lack of understanding. If somebody managed to create a
truly intelligent system, I think we would have heard about it by now,
even if the program took a month to run. The fact of the matter is that our
knowledge of such problems is minimal. Attempts to solve them lead to
researchers banging their heads against a very hard wall, indeed. So what
is happening? The field that was once A.I. is very quickly headed back to
its origins in computer science and is producing "Expert Systems" by the
droves. The problem isn't that they aren't useful, but rather that they
are being touted as the A.I., and true insights into actual human thinking
are still rare (if not non-existent).
Has everybody given up? I doubt it. However, it seems that economic reality
has set in. People are forced to show practical systems with everyday
applications. Financiers can't understand why we would be overjoyed if we could
develop a system that learns like a baby, and so all the money is being
siphoned away and into robotics, Expert Systems, and even spelling checkers!
(No, I don't think that welding cars together requires a great deal of true
intelligence, though technically it may be great.)
So what is one to do? Go into cog-psych? At least psychologists are working
on the fundamental problems that AI started, but many seem to be grasping at
straws, trying to find a simple solution (i.e., family resemblance, primary
attribute analysis, etc.).
What seems to be lacking is a cogent combination of theories. Some attempts
have been made, but these authors basically punt on the issue, stating things
like "none of the above theories adequately explains the observed phenomena;
perhaps the solution is a combination of current hypotheses". Very good, now
let's do that research and see if this is true!
My opinion? Well, some current work has dealt with computer nervous systems
(Science, sometime this summer). This is similar in form to the hypercube
systems, but the theory seems different. Really the work is towards computer
neurons: distributed systems in which each element contributes a little to
the final result. Signals are not binary, but graded. They combine with other
signals from various sources and form an output. Again, this could be done
with a linear machine that holds partial results. But I'm not suggesting that
this alone is a solution; it's just interesting. My real opinion is that
without "bringing baby up", so to speak, we won't get much accomplished. The
ultimate system will have to be able to reach out, grasp (whether visually or
physically, or whatever) and sense the world around it in a rich manner. It
will have to be malleable, but still have certain guidelines built in. It
must truly learn, forming a myriad of connections with past experiences and
thoughts. In sum, it will have to be a living animal (though made of sand...).
Yes, I do think that you need the full range of systems to create a truly
intelligent system. Helen Keller still had touch. She could feel vibrations,
and she could use this information to create a world that was probably
perceptually much different than ours. But she had true intelligence.
(I realize that the semantics of all these words and phrases are highly
debated; you know what I'm talking about, so don't try to be difficult!) :)
Well, that's enough for a day.
Ted Inoue.
Cornell
--
ARPA: hsgj%vax2.ccs.cornell.edu@cu-arpa.cs.cornell.edu
UUCP: ihnp4!cornell!batcomputer!hsgj BITNET: hsgj@cornella
------------------------------
Date: 1 Sep 86 10:25:25 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.Berkeley.EDU (B.KORT)
Subject: Re: Notes on AAAI '86
I appreciated Ted Inoue's commentary on the State of AI. I especially
agree with his point that a cogent combination of theories is needed.
My own betting card favors the theories of Piaget on learning, coupled
with the modern animated-graphic mixed-initiative dialogues that merge
the Socratic-style dialectic with inexpensive PC's. See for instance
the Mind Mirror by Electronic Arts. It's a flashy example of the clever
integration of Cognitive Psychology, Mixed Initiative Dialogues, Color
Animated Graphics, and the Software/Mindware Exchange. Such illustrations
of the imagery in the Mind's Eye can breathe new life into the relationship
between silicon systems and their carbon-based friends.
Barry Kort
hounx!kort
------------------------------
Date: 4 Sep 86 21:39:37 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.Berkeley.EDU (Michael Sellers)
Subject: transition from AI to Cognitive Science (was: Re: Notes on
AAAI '86)
> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community. Interesting in
> that the problems of AI (the intractability of: language, vision, and general
> cognition to name a few) are not anywhere near limited by computational
> power but by our lack of understanding. [...]
> The field that was once A.I. is very quickly headed back to
> its origins in computer science and is producing "Expert Systems" by the
> droves. The problem isn't that they aren't useful, but rather that they
> are being touted as the A.I., and true insights into actual human thinking
> are still rare (if not non-existent).
Inordinate amounts of hype have long been a problem in AI; the only difference
now is that there is actually a small something there (i.e. knowledge based
systems), so the hype is rising to truly unbelievable heights. I don't know
that AI is returning to its roots in computer science, probably there is just
more emphasis on the area(s) where something actually *works* right now.
> Has everybody given up? I doubt it. However, it seems that economic reality
> has set in. People are forced to show practical systems with everyday
> applications.
Good points. You should check out the book "The AI Business" by ...rats, it
escapes me (possibly Winston or McCarthy?). I think it was published in late
'84 or early '85, and makes the same kinds of points that you're making here,
talking about the hype, the history, and the current state of the art and the
business.
> So what is one to do? Go into cog-psych? At least psychologists are working
> on the fundamental problems that AI started, but many seem to be grasping at
> straws, trying to find a simple solution (i.e., family resemblance, primary
> attribute analysis, etc.)
The Grass is Always Greener. I started out going into neurophysiology, then
switched to cog psych because the neuro research is still at a lower level than
I wanted, and then became disillusioned because all of the psych work being
done seemed to be either super low-level or infeasible to test empirically.
So, I started looking into computers, longing to get into the world of AI.
Luckily, I stopped before I got to the point you are at now, and found
something better (no, besides Amway :-)...
> What seems to be lacking is a cogent combination of theories. Some attempts
> have been made, but these authors basically punt on the issue, stating things
> like "none of the above theories adequately explains the observed phenomena;
> perhaps the solution is a combination of current hypotheses". Very good, now
> let's do that research and see if this is true!
And this is exactly what is happening in the new field of Cognitive Science.
While there is still no "cogent combination of theories", things are beginning
to coalesce. (Pylyshyn described the current state of the field as Physics
searching for its Newton. Everyone agrees that the field needs a Newton to
bring it all together, and everyone thinks that he or she is probably that
person. The problem is, no one else agrees with you, except maybe your own
grad students.) Cog sci is still emerging as a separate field, even though
its beginnings can probably be pegged as being in the late '70s or early '80s.
It is taking material, paradigms, and techniques from AI, neurology, cog psych,
anthropology, linguistics, and several other fields, and forming a new field
dedicated to the study of cognition in general. This does not mean that
cognition should be looked at in a vacuum (as is to some degree the case with
AI), but that it can and should be examined in both natural and artificial
contexts, allowing for the difference between them. It can and should take
into account all types and levels of cognition, from the low-level neural
processing to the highly plastic levels of linguistic and social cognitive
interaction, researching and applying these areas in artificial settings
as it becomes feasible.
> [...] My real opinion is that
> without "bringing baby up" so to speak, we won't get much accomplished. The
> ultimate system will have to be able to reach out, grasp (whether visually or
> physically, or whatever) and sense the world around it in a rich manner. It
> will have to be malleable, but still have certain guidelines built in. It
> must truly learn, forming a myriad of connections with past experiences and
> thoughts. In sum, it will have to be a living animal (though made of sand...)
This is one possibility, though not the only one. Certainly an artificially
cogitating system without many of the abilities you mention would be different
from us, in that its primary needs (food, shelter, sensory input) would not
be the same. This does not make these things a requirement, however. If we
would wish to build an artificial cogitator that had roughly the same sort of
world view as we have, then we probably would have to give it some way of
directly interacting with its environment through the use of sensors and
effectors of some sort.
I suggest that you find and peruse the last 5 or 6 years of the journal
Cognitive Science, put out by the Cognitive Science Society. Most of the
things that have been written in there are still fairly up-to-date, as the
field is still reaching "critical mass" in terms of theoretical quantity
and quality (an article by Norman, "Twelve Issues for Cognitive Science"
from 1980 in this journal (not sure which issue) discusses many of the things
you are talking about here).
Let's hear more on this subject!
> Ted Inoue.
> Cornell
--
Mike Sellers
UUCP: {...your spinal column here...}!tektronix!tekecs!mikes
INNING: 1 2 3 4 5 6 7 8 9 TOTAL
IDEALISTS 0 0 0 0 0 0 0 0 0 1
REALISTS 1 1 0 4 3 1 2 0 2 0
------------------------------
Date: 6 Sep 86 19:09:31 GMT
From: craig@think.com (Craig Stanfill)
Subject: Re: transition from AI to Cognitive Science (was: Re: Notes
on AAAI '86)
> I find it very interesting that there is so much excitement generated over
> parallel processing computer systems by the AI community. Interesting in
> that the problems of AI (the intractability of: language, vision, and general
> cognition to name a few) are not anywhere near limited by computational
> power but by our lack of understanding. [...]
For the last year, I have been working on AI on the Connection Machine,
which is a massively parallel computer. Depending on the application,
the CM is between 100 and 1000 times faster than a Symbolics 36xx. I
have performed some experiments on models of reasoning from memory
(Memory Based Reasoning, Stanfill and Waltz, TMC Technical Report).
Some of these experiments required 5 hours on a 32,000 processor CM. I,
for one, do not consider a 500-5000 hour experiment on a Symbolics a
practical way to work.
More substantially, having a massively parallel machine changes the way
you think about writing programs. When certain operations become 1000
times faster, what you put into the inner loop of a program may change
drastically.
------------------------------
Date: 7 Sep 86 16:46:51 GMT
From: clyde!watmath!watnot!watdragon!rggoebel@CAIP.RUTGERS.EDU
(Randy Goebel LPAIG)
Subject: Re: transition from AI to Cognitive Science (was: Re: Notes
on AAAI '86)
Mike Sellers from Tektronix in Wilsonville, Oregon writes:
| Inordinate amounts of hype have long been a problem in AI; the only difference
| now is that there is actually a small something there (i.e. knowledge based
| systems), so the hype is rising to truly unbelievable heights. I don't know
| that AI is returning to its roots in computer science, probably there is just
| more emphasis on the area(s) where something actually *works* right now.
I would like to remind all who don't know or have forgotten that the notion
of a rational artifact as a digital computer does have its roots in
computing, but the more general notion of an intelligent artifact has concerned
scientists and philosophers much longer than the lifetime of the digital
computer. John Haugeland's book ``AI: the very idea'' would be good reading
for those who aren't aware that there is a pre-Dartmouth history of ``AI.''
Randy Goebel
U. of Waterloo
------------------------------
End of AIList Digest
********************
∂19-Sep-86 1549 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #187
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Sep 86 15:49:41 PDT
Date: Fri 19 Sep 1986 11:31-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #187
To: AIList@SRI-STRIPE
AIList Digest Friday, 19 Sep 1986 Volume 4 : Issue 187
Today's Topics:
Queries - Natural Language DB Interface & NL Generation &
Production Systems & Facial Recognition & Smalltalk & Symbolics CAD &
Lisp Machine News & MACSYMA & San Diego Speakers Wanted
----------------------------------------------------------------------
Date: 16 Sep 86 20:05:31 GMT
From: mnetor!utzoo!utcs!bnr-vpa!bnr-di!yali@seismo.css.gov
Subject: natural language DB interface
Has anyone out there any experience with
the Swan* natural language database interface
put out by Natural Language Products of Berkeley?
This system was demo-ed at AAAI this August.
I am primarily interested in the system's
ability to talk to "different databases
associated with different DBMS's"
simultaneously (quoting an information sheet
put out by NLP).
How flexible is it and how easy is it
to adapt to new domains?
======================================================
Yawar Ali
{the world}!watmath!utcsri!utcs!bnr-vpa!bnr-di!yali
======================================================
* Swan is an unregistered trademark of NLP
------------------------------
Date: Thu, 18 Sep 86 16:34:47 edt
From: lb0q@andrew.cmu.edu (Leslie Burkholder)
Subject: natural language generation
Has work been done on the problem of generating relatively idiomatic English
from sentences written in a language for first-order predicate logic?
Any pointers would be appreciated.
Leslie Burkholder
lb0q@andrew.cmu.edu
------------------------------
Date: Thu, 18 Sep 1986 17:10 EDT
From: LIN@XX.LCS.MIT.EDU
Subject: queries about expert systems
Maybe some AI guru out there can help with the following questions:
1. Production systems are the implementation of many expert systems.
In what other forms are "expert systems" implemented?
[I use the term "expert system" to describe the codification of any
process that people use to reason, plan, or make decisions as a set of
computer rules, involving a detailed description of the precise
thought processes used. If you have a better description, please
share it.]
2. A production system is in essence a set of rules that state that
"IF X occurs, THEN take action Y." System designers must anticipate
the set of "X" that can occur. What if something happens that is not
anticipated in the specified set of "X"? I assert that the most
common result in such cases is that nothing happens. Am I right,
wrong, or off the map?
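[A minimal sketch, in Python and entirely my own illustration (no system
named in this digest works this way), of the "IF X occurs, THEN take
action Y" scheme described above. Note the behavior when the input
matches no anticipated "X": the loop simply terminates, i.e. nothing
happens, which is the outcome the question conjectures.

```python
# Toy forward-chaining production system (hypothetical illustration).
# Rules are (condition, action) pairs over a working memory of facts.

def run(rules, memory):
    """Fire rules until no condition matches the working memory."""
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            if condition(memory):
                action(memory)
                fired = True
                break  # restart the scan after each firing
    return memory

# Two toy rules about a sensor reading (assumed example domain).
rules = [
    (lambda m: m.get("temp", 0) > 100 and "alarm" not in m,
     lambda m: m.update(alarm=True)),
    (lambda m: m.get("alarm") and "shutdown" not in m,
     lambda m: m.update(shutdown=True)),
]

print(run(rules, {"temp": 120}))    # both rules fire in sequence
print(run(rules, {"pressure": 9}))  # unanticipated input: no rule fires,
                                    # memory is returned unchanged
```

The second call shows the default failure mode: with no matching
condition, the system does nothing at all, unless the designer adds an
explicit catch-all rule. -- Ed.]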
Thanks.
Herb Lin
------------------------------
Date: 11 Sep 86 20:42:14 GMT
From: ihnp4!islenet!humu!uhmanoa!aloha1!ryan@ucbvax.Berkeley.EDU (ryan)
Subject: querying a data base using an inference engine
This is a sort of banner letting the rest of the world know that we at
the Artificial Intelligence Lab at the University of Hawaii are currently
looking at the problem of querying a database using AI techniques. We will be
using a natural language front end for querying the database. We would
appreciate any information from anyone working on or interested in the same.
my address is
Paul Ryan
...{dual,vortex,ihnp4}!islenet!aloha1!ryan
...nosvax!humu!islenet!aloha1!ryan
------------------------------
Date: Thu, 18 Sep 86 18:55:43 edt
From: philabs!micomvax!peters@seismo.CSS.GOV
Subject: Computer Vision
We are starting a project related to automatic classification of facial
features from photographs. If anyone out there has any info/references
related to this area please let me hear from you.
email: !philabs!micomvax!peters
mail: Peter Srulovicz
Philips Information Systems
600 Dr. Philips Blvd
St. Laurent Quebec
Canada H4M-2S9
------------------------------
Date: 16 Sep 86 01:22:57 GMT
From: whuxcc!lcuxlm!akgua!gatech!gitpyr!krubin@bellcore.com
Subject: Smalltalk as an AI research tool?
I am currently working on an AI project where we are
using Smalltalk-80 as our implementation language. Are there
others who have used Smalltalk to do serious AI work? If so,
and you can talk about what you have done, please respond. I
would be interested in learning how well suited the language
is for serious AI work.
We have plans to implement an (Intelligent Operator
Assistant) using an IBM PC-AT running a version of Digitalk
Incorporated's Smalltalk/V. Any comments on this software
would also be helpful (especially speed information!).
Kenneth S. Rubin (404) 894-4318
Center for Man-Machine Systems Research
School of Industrial and Systems Engineering
Georgia Institute of Technology
Post Office Box 35826
Atlanta, Georgia 30332
Majoring with: School of Information and Computer Science
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!krubin
------------------------------
Date: 14 Sep 86 11:35:00 GMT
From: osiris!chandra@uxc.cso.uiuc.edu
Subject: Wanted: CAD program for Symbolics
CAD software for the Symbolics Machine
Hi,
I just got a Symbolics lisp machine. I am looking for any
Public Domain design/drafting program. Being an architect I'd
like to draw stuff on my lisp machine.
Hints, pointers, references would be appreciated.
Thanks,
navin chandra
ARPA: dchandra@athena.mit.edu
BITNET: ank%cunyvms1
------------------------------
Date: 18 Sep 86 03:29:53 GMT
From: hp-sdd!ncr-sd!milano!dave@hplabs.hp.com
Subject: Lisp Machine News?
Does anyone have or know of a zwei-based interface
to news? (If it exists, 3 to 2 it's called ZNEWS.)
Dave Bridgeland -- MCC Software Technology
ARPA: dave@mcc.arpa
UUCP: ut-sally!im4u!milano!daveb
"Things we can be proud of as Americans:
* Greatest number of citizens who have actually boarded a UFO
* Many newspapers feature "JUMBLE"
* Hourly motel rates
* Vast majority of Elvis movies made here
* Didn't just give up right away during World War II
like some countries we could mention
* Goatees & Van Dykes thought to be worn only by weenies
* Our well-behaved golf professionals
* Fabulous babes coast to coast"
------------------------------
Date: 15 Sep 86 16:17:00 GMT
From: uiucuxa!lenzini@uxc.cso.uiuc.edu
Subject: Wanted: MACSYMA info
Hi,
I have a friend in the nuclear eng. department who is currently working on
a problem in - I can't remember right now but that's not the point - anyway,
this problem involves the analytic solution of a rather complex integral
(I believe it's called Chen's (sp?) integral). A while back I heard something
about a group of programs called MACSYMA that were able to solve integrals that
were previously unsolvable. I suggested that he may want to look into the
availabiliy of MACSYMA. I would appreciate any information about these
programs - what they can and can't do, how they are used, how to purchase
(preferably with a university discount) , etc.
Thanks in advance,
Andy Lenzini
University of Illinois.
...pur-ee!uiucdcs!uiucuxa!lenzini
------------------------------
Date: 18 Sep 86 13:58 PDT
From: sigart@LOGICON.ARPA
Subject: Speakers wanted
The San Diego Special Interest Group on Artificial Intelligence
(SDSIGART) is looking for speakers for its regular monthly meetings.
We are presently looking for individuals who would like to give a
presentation on any AI topic during the January to April 1987
time-frame. We typically hold our meetings on the fourth Thursday of
the month, and provide for a single presentation during the meeting.
If you anticipate being in San Diego during that time and would like to
give a presentation please contact us via E-mail at
sigart@logicon.arpa.
We cannot provide transportation reimbursement for speakers from
outside the San Diego area, but we can provide some reimbursement of
hotel/meal expenses.
Thank You,
Bill D'Camp
------------------------------
End of AIList Digest
********************
∂19-Sep-86 1909 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #188
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Sep 86 19:09:11 PDT
Date: Fri 19 Sep 1986 13:28-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #188
To: AIList@SRI-STRIPE
AIList Digest Saturday, 20 Sep 1986 Volume 4 : Issue 188
Today's Topics:
Education - AI Grad Schools,
Philosophy - Discussion of AI and Associative Memory,
AI Tools - Turbo Prolog
----------------------------------------------------------------------
Date: 12 Sep 86 20:39:56 GMT
From: ihnp4!gargoyle!sphinx!bri5@ucbvax.Berkeley.EDU (Eric Brill)
Subject: AI Grad Schools
A few weeks ago, I posted a request for info on good graduate schools
for AI. I got so many requests to forward the replies I got, that I
decided to just post a summary to the net. So here it is:
Almost everybody seemed to believe that the top 4 schools are MIT,
CMU, Stanford and Yale (not necessarily in that order).
Other schools which got at least 2 votes for being in the top 10 were
Toronto, Illinois(Urbana), UMass(Amherst), Berkeley, UCLA, UCIrvine,
UTexas(Austin).
Other schools which got one vote for being in the top 10 were
URochester, UCSD, Syracuse and Duke.
------------------------------
Date: Tue, 09 Sep 86 12:00:00 GMT+2
From: H29%DHDURZ2.BITNET@WISCVM.WISC.EDU
Subject: AI-discussion
In recent AIList issues there has been a discussion about the possibilities
of intelligent machines.
I would like to add some arguments that I missed in that discussion.
1. First, there are many cognitive functions of man which can be
simulated by computer. One problem, however, is that to my knowledge
these different functions have not yet been integrated in one machine
or superprogram.
2. There is the phenomenon of intentionality and motivation in man that
has no direct counterpart in the computer.
3. Man's neuronal processing is more analogue than digital, in spite of
the fact that neurons can have only two states.
Man's organization of memory is associative rather than categorical.
[Neurons are not two-state devices! Even if we ignore chemical and
physiological memory correlates and the growth and decay of synapses,
there are the analog or temporal effects of potential buildup and the
fact that neurons often transmit information via firing rates rather
than single pulses. Neurons are nonlinear but hardly bistable. -- KIL]
Let me elaborate upon these points:
Point 1: Konrad Lorenz assumes a phenomenon he called "fulguration" in
systems. In the end this means nothing more than: the whole is more
than the sum of its parts. If you merge all the functions a computer
can perform to simulate human abilities, you will get higher functions
which transcend the sum of all the lower functions. You may eventually
get a function like consciousness, or even self-consciousness. If you
define the self as a man's knowledge of himself (his qualities,
abilities, his existence), I see no general problem in feeding this
knowledge to a computer.
Real "understanding" of natural language, however, needs not only
linguistic competence but also sensory processing and recognition
abilities (visual, acoustic). Language normally refers to objects
which we first experience through sensory input and then name. The
constructivist theory of human language learning by Paul Lorenzen and
O. Schwemmer (Erlanger Schule) assumes a "demonstration act"
(Zeigehandlung) as a fundamental element of a man (child) learning
language. Without this empirical foundation of language you will never
leave the hermeneutic circle, which drove earlier philosophers to
despair.
Point 2:
One difference between man and computer is that man needs food and
computers need electricity; furthermore, the computer doesn't cry when
somebody is about to pull its plug.
Nevertheless such a thing could be built: a computer, a robot, that
attacks with a weapon anybody who tries to pull its plug. But who has
an interest in constructing such a machine? Living organisms made by
evolution are given the primary motivation of self-preservation. This
is the natural basis of intentionality. Only the implementation of
intentionality, motivation, goals and needs can create a machine that
deserves the name "intelligent". It is intelligent by the way it
reaches "its" goals.
Implementation of "meaning" needs the ability of sensory perception and
recognition, linguistic competence and understanding, and having or
simulating intentions. To know the meaning of an object means to
understand the function of that object for man in a means-end relation
within his living context. It means realizing for which goals or needs
the "object" can be used.
Point 3:
Analogue information processing may or may not be totally simulable by
digital processing. Man's associative organization of memory, however,
needs storage and retrieval mechanisms other than those now available
or used by computers.
I have heard that some scientists in the States are trying to simulate
associative memory organization, but I have no further information
about that. (Perhaps somebody can give me information or references.
Thanks in advance!)
[Geoffrey E. Hinton and James A. Anderson (eds.), Parallel Models
of Associative Memory, Lawrence Erlbaum Associates, Inc., Hillsdale
NJ. Dr. Hinton is with the Applied Psychology Unit, Cambridge England.
-- KIL]
Scientists working on AI should have an attitude I call "critical
optimism". This means being critical, seeing the problems, and not
being euphoric that all problems can be solved in the next ten years.
On the other hand it means not assuming any problem to be unsolvable,
but being optimistic that the scientific community will solve the
problems step by step, one after the other, however long that may take.
Finally let me, being a psychologist, state some provocative hypotheses:
The belief that man's cognitive or intelligent abilities, including
having intentions, will never be reached by a machine is founded in the
conscious or unconscious assumption of man's godlike or god-made
uniqueness, which is supported by the religious tradition of our
culture. It takes a lot of self-reflection, courage, and consciousness
of one's own existential fears to overcome the need to be unique.
I would claim that the conviction mentioned above, however
philosophically sophisticated its justification may be, is only the
"RATIONALIZATION" (in the psychoanalytic sense of the word) of
understandable but irrational and normally unconscious existential
fears and needs of human beings.
PETER PIRRON MAIL ADDRESS: <H29@DHDURZ2.BITNET>
Psychologisches Institut
Hauptstrasse 49-53
D-6900 Heidelberg
Western Germany
------------------------------
Date: Thu 18 Sep 86 20:04:31-CDT
From: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: What's wrong with Turbo Prolog
1. Is Borland's Turbo Prolog a superset of the Clocksin &
Mellish (C & M) standard?
On the back cover of Turbo Prolog's manual is the description
"A complete Prolog incremental compiler supporting a large
superset of Clocksin & Mellish Edinburgh standard Prolog."
This statement is not true. On page 127 the manual says
"Turbo Prolog . . . contains virtually all the features
described in Programming in Prolog by Clocksin and Mellish."
If you read "virtually" as "about 20% of" then this statement
is true. Turbo Prolog does use Edinburgh syntax, that is,
:- for "if" in rules,
capitalized names for variables,
lower case names for symbols,
square brackets for delimiting lists, and
| between the head and tail of a list.
Almost all the Clocksin & Mellish predicates have different
names, different arguments, or are missing entirely from Turbo
Prolog. For example, "var" is "free," and "get0" is
"readchar." Differences in predicate names and arguments are
tolerable, and can be handled by a simple conversion program
or by substitutions using an editor. They could also be
handled by adding rules that define the C & M predicates in
terms of Turbo Prolog predicates, for example,
var(X):-free(X).
These kinds of differences are acceptable in different
implementations of Prolog. Even C & M say that their
definition should be considered a core set of features, that
each implementation may have different syntax. However,
Borland has done much more than just rename a few predicates.
2. Is Borland's Turbo Prolog really Prolog?
NO. Turbo Prolog lacks features that are an essential part of
any Prolog implementation and requires declarations. Borland
has redefined Prolog to suit themselves, and not for the
better.
A key feature of Lisp and Prolog is the ability to treat
programs and data identically. In Prolog "clause," "call,"
and "=.." are the predicates that allow programs to be treated
as data, and these are missing entirely from Turbo Prolog.
One use of this feature is in providing "how" and "why"
explanations in an expert system. A second use is writing a
Prolog interpreter in Prolog. This is not just a
theoretically elegant feature, it has practical value. For a
specific domain a specialized interpreter can apply domain
knowledge to speed up execution, or an intelligent
backtracking algorithm could be implemented. In C & M Prolog
a Prolog interpreter takes four clauses. Borland gives an
example "interpreter" on page 150 of the Turbo Prolog manual -
nine clauses and twenty-two declarations. However, their
"interpreter" can't deal with any clause, it can only deal
with "clauses" in a very special form. A clause such as
likes(ellen,tennis) would have to be represented as
clause(atom(likes,[symbol(ellen),symbol(tennis)]),[])
in Borland's "interpreter." I don't expect "clause" to
retrieve compiled clauses, but I do expect Prolog to include
it. By dropping it Borland has introduced a distinction
between programs and data that eliminates a key feature of
Prolog.
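[To make the "programs as data" point concrete, here is a toy clause
interpreter, written in Python purely for illustration (the Prolog
original referred to above uses clause/2 directly; this sketch is
propositional only, with no unification, and every name in it is my
own invention):

```python
# A "program" is data the interpreter can inspect: a dict mapping a
# goal to a list of alternative bodies, each body a list of subgoals.
# This mirrors what Prolog's clause(Head, Body) retrieval enables.

def solve(program, goal):
    """Succeed if goal is derivable from the clause database."""
    for body in program.get(goal, []):          # like clause(Goal, Body)
        if all(solve(program, g) for g in body):
            return True
    return False

# likes(ellen,tennis) as a ground fact, plus one rule (assumed example).
program = {
    "likes(ellen,tennis)": [[]],                       # fact: empty body
    "plays(ellen)":        [["likes(ellen,tennis)"]],  # rule, one subgoal
}

print(solve(program, "plays(ellen)"))       # True
print(solve(program, "likes(ellen,golf)"))  # False: no matching clause
```

Because the program is an ordinary data structure, the same few lines
could be extended to record *which* clauses fired, which is exactly the
"how"/"why" explanation facility mentioned above. -- Ed.]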
Turbo Prolog absolutely requires data typing. Optional typing
would be a good feature for Prolog - it can produce better
compiled code and help with documentation. However, required
typing is not part of any other Prolog implementation that I
know of. Typing makes life easier for the Turbo Prolog
compiler writer at the expense of the Turbo Prolog
programmers. A little more effort by the few compiler writers
would have simplified the work of the thousands of potential
users. There are good Prolog compilers in existence that do
not require typing, for example, the compiler for DEC-10
Prolog. It may also be that Borland thought they were
improving Prolog by requiring typing, but again, why not make
it optional?
Besides introducing a distinction between programs and data,
Turbo Prolog weakens the ability to construct terms at run
time. One of the great strengths of Prolog is its ability to
do symbolic computation, and Borland has seriously weakened
this ability. Again this omission seems to be for the
convenience of the compiler writers. There are no predicates
corresponding to the following C & M predicates, even under
other names: "arg," "functor," "name," "=..," "atom,"
"integer," and "atomic." These predicates are used in
programs that parse, build, and rewrite structured terms, for
example, symbolic integration and differentiation programs, or
a program that converts logical expressions to conjunctive
normal form. The predicate "op" is not included in Turbo
Prolog. Full functional notation must be used. You can write
predicates to pretty print terms, and the manual gives an
example of this, but it is work that shouldn't be necessary.
Dropping "op" removed one of Prolog's strongest features for
doing symbolic computation.
Turbo Prolog introduces another distinction between clauses
defined at compile time and facts asserted at run time.
Apparently only ground terms can be asserted, and rules cannot
be asserted. This may be partly a result of having only a
compiler and no interpreter. The predicates for any facts to
be asserted must be declared at compile time. This is another
unnecessary distinction for the convenience of the compiler
writers.
One other annoyance is the lack of DCG rules, and the general
difficulty of writing front ends that translate DCG rules and
other "syntactic sugar" notations to Prolog rules.
3. Is Turbo Prolog suitable for real applications?
I think Turbo Prolog could run some real applications, but one
limitation is that a maximum of 500 clauses is allowed for
each predicate. One real application program computes the
intervals of a directed graph representing a program flow
graph. Each node represents a program statement, and each
arc represents a potential transfer of control from the head
node to the tail node. There is a Prolog clause for each node
and a clause for each arc. A program with 501 statements
would exceed Turbo Prolog's limit. I assume Borland could
increase this limit, but as it stands, this is one real
application that Turbo Prolog would not run.
4. Is there anything good about Turbo Prolog?
YES. I like having predicates for windows, drawing, and sound.
It looks easy to write some nice user interfaces using Turbo
Prolog's built in predicates. The manual is well done, with
lots of examples. There is easy access to the facilities of
MS-DOS. There is a good program development environment, with
windows for editing code, running the program, and tracing.
There are also features for allowing programming teams to
create applications - modules and separate name spaces. At
$100 the price is right. If this were Prolog, it would be a
great product.
-- Larry Van Sickle
cs.vansickle@r20.utexas.edu 512-471-9589
------------------------------
Date: Thu 18 Sep 86 20:12:29-CDT
From: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Simple jobs Turbo Prolog can't do
Two simple things you CANNOT do in Turbo Prolog:
1. Compute lists containing elements of different basic types.
Turbo Prolog does not let you have goals such as
append([a,b,c],[1,2,3],L).
Turbo Prolog requires that the types of every predicate be
declared, but the typing system does not allow you to declare
types that mix basic types. Also lists like:
[1,a]
[2,[3,4]]
[5,a(6)]
cannot be created in Turbo Prolog. The syntax of types is:
a) name = basictype
where basictype is integer, char, real, string or symbol,
b) name = type*
where type is either a basic type or a user defined type,
the asterisk indicates a list,
c) name = f1(d11,...d1n1);f2(d21,...,d2n2);...fm(dm1,...dmnm)
where fi are functors and dij are types, called
"domains." The functors and their domains are
alternative structures allowed in the type being
defined.
The important thing to notice is that you cannot define a type
that has basic types as alternatives. You can only define
alternatives for types that contain functors. So you cannot
define types
mytype = integer;symbol
mylisttype = mytype*
which is what you would need to append a list of integers to a
list of symbols.
What the Turbo Prolog manual recommends for this case is to
define
mytype = s(symbol);i(integer)
mylisttype = mytype*
and declare append as
append(mylisttype,mylisttype,mylisttype)
which would allow us to state the goal
append([s(a),s(b),s(c)],[i(1),i(2),i(3)],L).
This is clumsy, kludgy, and ugly.
2. Compute expressions that contain different basic types or
mixtures of structures and basic types.
Simplifying arithmetic expressions that contain constants and
variables seems like it should be easy in a language designed
to do symbolic computation. In C & M Prolog some rules for
simplifying multiplication might be
simplify(0 * X,0).
simplify(X * 0,0).
simplify(1 * X,X).
simplify(X * 1,X).
In C & M Prolog you can enter goals such as
simplify(a - 1 * (b - c),X).
Now in Turbo Prolog, because of the limited typing, you cannot
have expressions that contain both symbols and integers. (You
also cannot have infix expressions, but that is another
issue). Instead, you would have to do something like this:
exprtype = i(integer);s(symbol);times(exprtype,exprtype)
and declare simplify as:
simplify(exprtype,exprtype)
and the clauses would be:
simplify(times(i(0),X),i(0)).
simplify(times(X,i(0)),i(0)).
simplify(times(i(1),X),X).
simplify(times(X,i(1)),X).
The goal would be:
simplify(minus(s(a),times(i(1),minus(s(b),s(c)))),X).
This should speak for itself, but I'll spell it out:
REAL Prolog can do symbolic computation involving mixtures of
symbols, numeric constants, and expressions; the programs are
simple and elegant; input and output are easy. In Turbo
Prolog you can't even create most of the expressions that real
Prolog can; the programs are long, opaque, and clumsy; you
have to write your own predicates to read and write
expressions in infix notation.
It is a shame that this product comes from a company with
a reputation for good software. If it came
from an unknown company people would be a lot more cautious
about buying it. Since it's from Borland, a lot of people
will assume it's good. They are going to be disappointed.
-- Larry Van Sickle
cs.vansickle@r20.utexas.edu 512-471-9589
------------------------------
End of AIList Digest
********************
∂19-Sep-86 2115 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #189
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Sep 86 21:15:31 PDT
Date: Fri 19 Sep 1986 13:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #189
To: AIList@SRI-STRIPE
AIList Digest Saturday, 20 Sep 1986 Volume 4 : Issue 189
Today's Topics:
AI Tools - Symbolics Lisp Release 7,
Games - Connect Four & Computer Chess News,
Applications - Music-Research Digest,
Contest - New Deadline for HP Contest
----------------------------------------------------------------------
Date: 2 Sep 86 20:03:34 GMT
From: mcvax!euroies!ceri@seismo.css.gov (Ceri John Fisher)
Subject: Symbolics Lisp Release 7 (Genera)
Information requested:
Does anybody have any concrete comments on Symbolics Release 7 Common Lisp
and ZetaLisp and new window system. We have release 6 and are rather fear-
fully awaiting the next release since we have started to hear rumours of
large resources required and relatively poor performance (in spite of increased
ease of use). Can anyone confirm or deny this from actual experience ?
Mail me with your comments and I will summarize to the net if there's
enough interest.
Thank you for your attention.
Ceri Fisher, Plessey (UK) Ltd, Christchurch, England.
ceri@euroies.UUCP or ..<your route to europe>!mcvax!ukc!euroies!ceri
<disclaimer, quip> -- currently under revision
------------------------------
Date: 6 Sep 86 19:02:10 GMT
From: well!jjacobs@hplabs.hp.com (Jeffrey Jacobs)
Subject: Re: Symbolics Lisp Release 7 (Genera)
You want Common Lisp, you gotta pay the price <GRIN>! I've heard the same
rumors...
------------------------------
Date: 15 Aug 86 16:11:25 GMT
From: mcvax!botter!klipper!victor@seismo.css.gov (L. Victor Allis)
Subject: Information wanted.
I'm looking for any information I can get on a game which is a
more complex kind of tic-tac-toe. In the Netherlands this game
is called 'vier op een rij', in Germany 'vier gewinnt'.
[Here it's marketed by Milton Bradley as Connect Four. -- KIL]
Description of the game:
'Vier op een rij' is played on a vertical 6 x 7 grid. Two players,
white and black, the former having 21 white, the latter having
21 black stones, play the game by alternately throwing one of
their stones in one of the 7 vertical columns. The stone will
fall down as far as possible.
The goal of the game is to have four of your stones on four
consecutive horizontal, vertical or diagonal positions (like
tic-tac-toe). The one who achieves this first wins. A draw is
possible if neither achieves this and the grid is full.
White always has the first 'move'.
It is not allowed to pass.
Possible situation in a game:
---------------
| | | | | | | | White (x) will lose this game since in this
--------------- situation he has to play the second column to
| | | |o| | | | prevent black (o) from winning horizontally,
--------------- but this will give black the possibility to
| | | |x| | | | win diagonally by playing the second column again.
---------------
| | |o|o|o| | |
---------------
| |x|x|o|x| | |
---------------
|o|x|x|x|o| | |
---------------
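[The winning condition described above (four consecutive stones
horizontally, vertically, or diagonally on the 6 x 7 grid) can be
sketched as a simple scan; this is my own illustration, with
'x'/'o'/' ' cell markers as an assumed board representation, not
anything from the original post:

```python
# board[r][c] holds 'x', 'o', or ' '; row 0 is the top of the grid.

def wins(board, p):
    """True if player p has four consecutive stones in any direction."""
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols):
            # right, down, down-right, down-left
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(0 <= r + i*dr < rows and 0 <= c + i*dc < cols
                       and board[r + i*dr][c + i*dc] == p
                       for i in range(4)):
                    return True
    return False

empty = [[' '] * 7 for _ in range(6)]
print(wins(empty, 'x'))  # False on an empty board
```

A full solver would wrap a test like this in minimax or retrograde
analysis; the scan itself is the terminal test. -- Ed.]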
I would like to know if there is someone who wrote a program for
this game and any results which were obtained by this program, like:
1) Result of the game after perfect play of both sides.
2) Best opening moves for both sides.
Thanks !
Victor Allis. victor@klipper.UUCP
Free University of Amsterdam.
The Netherlands.
------------------------------
Date: 18 Aug 86 23:27:15 GMT
From: ihnp4!cuae2!ltuxa!ttrdc!levy@ucbvax.Berkeley.EDU (Daniel R. Levy)
Subject: Re: Information wanted.
In article <585@klipper.UUCP>, victor@klipper.UUCP (L. Victor Allis) writes:
>I'm looking for any information I can get on a game which is a
>more complex kind of tic-tac-toe. In the Netherlands this game
>is called 'vier op een rij', in Germany 'vier gewinnt'.
On this vanilla System V R2 3B20 the game is available as /usr/games/connect4
(sorry, no source code came with it on this UNIX-source-licensed system
and even if it did it might be proprietary [ :-) ] but I hope this pointer
is better than nothing).
Please excuse me for posting rather than mailing. My route to overseas sites
seems tenuous at best.
------------------------------- Disclaimer: The views contained herein are
| dan levy | yvel nad | my own and are not at all those of my em-
| an engihacker @ | ployer or the administrator of any computer
| at&t computer systems division | upon which I may hack.
| skokie, illinois |
-------------------------------- Path: ..!{akgua,homxb,ihnp4,ltuxa,mvuxa,
go for it! allegra,ulysses,vax135}!ttrdc!levy
------------------------------
Date: 4 Sep 86 21:42:59 GMT
From: ubc-vision!alberta!tony@uw-beaver.arpa (Tony Marsland)
Subject: Computer Chess News
The June 1986 issue of the ICCA Journal is now being distributed.
The issue contains the following articles:
"Intuition in Chess" by A.D. de Groot
"Selective Search without Tears" by D. Beal
"When will Brute-force Programs beat Kasparov?" by D. Levy
Also there is a complete report on the 5th World Computer Chess Championship
by Helmut Horacek and Ken Thompson, including all the games.
There are many other short articles, reviews and news items.
Subscriptions available from:
Jonathan Schaeffer, Computing Science Dept., Univ. of Alberta,
Edmonton T6G 2H1, Canada.
Cost: $15 for all four 1985 issues
$20 per year beginning 1987, $US money order or check/cheque.
email: jonathan@alberta.uucp for more information.
------------------------------
Date: Sat, 30 Aug 86 11:00:56 GMT
From: Stephen Page
<music-research-request%sevax.prg.oxford.ac.uk@Cs.Ucl.AC.UK>
Subject: New list: Music-Research Digest
COMPUTERS AND MUSIC RESEARCH
An electronic mail discussion group
The Music-Research electronic mail redistribution list was established after a
suggestion made at a meeting in Oxford in July 1986, to provide an effective
and fast means of bringing together musicologists, music analysts, computer
scientists, and others working on applications of computers in music research.
Initially, the list was established for people whose chief interests concern
computers and their applications to
- music representation systems
- information retrieval systems for musical scores
- music printing
- music analysis
- musicology and ethnomusicology
- tertiary music education
- databases of musical information
The following areas are not the principal concern of this list, although
overlapping subjects may well be interesting:
- primary and secondary education
- sound generation techniques
- composition
There are two addresses being used for this list:
- music-research-request@uk.ac.oxford.prg
for requests to be added to or deleted from the list, and other
administrivia for the moderator.
- music-research@uk.ac.oxford.prg
for contributions to the list.
The above addresses are given in UK (NRS) form. For overseas users, the
INTERNET domain-style name for the moderator is
music-research-request@prg.oxford.ac.uk
If your mailer does not support domain-style addressing, get it fixed. For the
moment, explicitly send via the London gateway, using
music-research-request%prg.oxford.ac.uk@cs.ucl.ac.uk
or music-research-request%prg.oxford.ac.uk@ucl-cs.arpa
UUCP users who do not have domain-style addressing may send via Kent:
...!ukc!ox-prg!music-research-request
------------------------------
Date: 8 Sep 86 19:46:47 GMT
From: hpcea!hpfcdc!hpfclp!hpai@hplabs.hp.com (AI)
Subject: New deadline for HP contest
[Forwarded from the Prolog Digest by Laws@SRI-STRIPE.]
Hewlett-Packard has extended the submission deadline for its AI programming
contest. Software and entry forms must be sent on or before February 1, 1987.
In addition, originality has been added as a judging criterion. That is,
newly written software will be weighted more heavily than ported software.
Revised rules and an entry form follow.
Hewlett-Packard
AI Programming Contest
To celebrate the release of its AI workstation, Hewlett-Packard is
sponsoring a programming contest. Submit your public domain software
by February 1, 1987 to be considered for the following prizes:
First prize: One HP72445A computer (Vectra)
Second prize: One HP45711B computer (Portable Plus)
Third prize: One HP16C calculator (Computer Scientist)
Complete rules follow.
1. All entries must be programs of interest to the symbolic computing
or artificial intelligence communities. They must be executable on
HP9000 Series 300 computers running the HP-UX operating system. This
includes programs written in the Common LISP, C, Pascal, FORTRAN, or
shell script languages, or in any of our third party AI software.
2. All entries must include source code, machine-readable
documentation, a test suite, and any special instructions necessary to
run the software. Entries may be submitted by electronic mail or
shipped on HP formatted 1/4" Certified Data Cartridge tapes.
3. All entries must be in the public domain and must be accompanied
by an entry form signed by the contributor(s). Entries must be sent
on or before February 1, 1987.
4. Only residents of the U.S. may enter. HP employees and their
dependents are ineligible to receive prizes, but are welcome to submit
software. In the case of team entries, each member of the team must
be eligible. No duplicate prizes will be awarded. Disposition of the
prize is solely the responsibility of the winning team.
5. Entries will be judged on the basis of originality, relevance to our
user community, complexity, completeness, and ease of use. The date of
receipt will be used as a tie-breaker. Decision of the judges will be
final.
6. HP cannot return tape cartridges.
7. Selected entries will be distributed by HP on an unsupported
software tape. This tape will be available from HP for a distribution
fee. The contributor(s) of each entry which is selected for this tape
will receive a complimentary copy.
To enter:
Print and complete the following entry form and mail it to:
AI Programming Contest M.S. 99
Hewlett-Packard
3404 E. Harmony Road
Fort Collins, CO 80525
Send your software on HP formatted 1/4" tape to the same address, or
send it via electronic mail to:
hplabs!hpfcla!aicontest or ihnp4!hpfcla!aicontest
[Form deleted: write to the author or check the Prolog Digest. I generally
omit entry forms and conference reservation coupons to save bandwidth,
reduce storage space, and avoid annoying those with slow terminals
or expensive communication links. -- KIL]
------------------------------
End of AIList Digest
********************
∂19-Sep-86 2321 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #190
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Sep 86 23:21:45 PDT
Date: Fri 19 Sep 1986 17:11-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #190
To: AIList@SRI-STRIPE
AIList Digest Saturday, 20 Sep 1986 Volume 4 : Issue 190
Today's Topics:
AI Tools - Xerox Dandelion vs. Symbolics
----------------------------------------------------------------------
Date: 4 Sep 86 14:27:00 GMT
From: princeton!siemens!steve@CAIP.RUTGERS.EDU
Subject: Xerox Dandelion vs. Symbolics?
Why do people choose Symbolics/ZetaLisp/CommonLisp over
Xerox Dandelion/Interlisp?
I have been "brought up" on Interlisp and had virtually no exposure to
Maclisp derivatives, but more to the point, I've been brought up on the
Xerox Dandelion lisp machine and never used a Symbolics. Every chance I
get, I try to find out what a Symbolics/Zetalisp machine has that the
Dandelion doesn't. So far I have found only the following:
1) More powerful machine (but less power per dollar).
2) The standard of Commonlisp (only within the past couple of years).
3) People are ignorant of what the Dandelion has to offer.
4) Edit/debug cycle (and editor) very similar to old standard systems
such as Unix/C/Emacs or TOPS/Pascal/Emacs, and therefore easier
for beginners with previous experience.
I have found a large number of what seem to be advantages of the Xerox
Dandelion Interlisp system over the Symbolics. I won't post anything
now because this already is too much like an ad for Xerox, but you might
get me to post some separately.
I am not personally affiliated with Xerox (although other parts of my
company are). I am posting this because I am genuinely curious to find
out what I am missing, if anything.
By the way, the Interlisp system on the Dandelion is about 5 megabytes
(it varies depending on how much extra stuff you load in - I've never
seen the system get as large as 6 Mb). I hear that Zetalisp is 24 Mb.
Is that true? What is in it that takes so much space?
Steven J. Clark, Siemens Research and Technology Laboratory etc.
{ihnp4!princeton | topaz}!siemens!steve
something like this ought to work from ARPANET: steve@siemens@spice.cs.cmu
(i.e. some machines at CMU know siemens).
------------------------------
Date: 5 Sep 86 16:38:57 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.Berkeley.EDU (Michael Sellers)
Subject: Re: Xerox Dandelion vs. Symbolics? [vs. Tek 4400 series]
> Why do people choose Symbolics/ZetaLisp/CommonLisp over
> Xerox Dandelion/Interlisp?
Maybe I'm getting in over my head (and this is not unbiased), but what
about Tek's 4400 series (I think they have CommonLisp & Franz Lisp, but
I could be wrong)? I was under the impression that they offered much
more bang for the buck than did the other major AI workstation folks.
Have you seen these and decided they are not what you want, or are you
unaware of their capabilities/cost?
> ...Dandelion Interlisp system over the Symbolics. I won't post anything
> now because this already is too much like an ad for Xerox, but you might
> get me to post some separately.
Maybe, if we're going to have testimonials, we could nudge someone from
Tek's 4400 group (I know some of them are on the net) into giving us a
rundown on their capabilities.
> I am not personally affiliated with Xerox (although other parts of my
> company are). I am posting this because I am genuinely curious to find
> out what I am missing, if anything.
I am personally affiliated with Tek (in a paycheck sort of relationship),
though not with the group that makes the 4400 series of AI machines. I did
have one on my desk for a while, though (sigh), and was impressed. I think
you're missing a fair amount :-).
> Steven J. Clark, Siemens Research and Technology Laboratory etc.
Mike Sellers
UUCP: {...your spinal column here...}!tektronix!tekecs!mikes
INNING: 1 2 3 4 5 6 7 8 9 TOTAL
IDEALISTS 0 0 0 0 0 0 0 0 0 1
REALISTS 1 1 0 4 3 1 2 0 2 0
------------------------------
Date: 5 Sep 86 17:27:54 GMT
From: gatech!royt@seismo.css.gov (Roy M Turner)
Subject: Re: Xerox Dandelion vs. Symbolics?
In article <25800003@siemens.UUCP> steve@siemens.UUCP writes:
>
>Every chance I
>get, I try to find out what a Symbolics/Zetalisp machine has that the
>Dandelion doesn't. So far I have found only the following:
>...
>Steven J. Clark, Siemens Research and Technology Laboratory etc.
>{ihnp4!princeton | topaz}!siemens!steve
>
As a user of Symbolics Lisp machines, I will try to answer some of Steve's
comments. We have had Symbolics machines here since before I started on my
degree two years ago; we recently were given thirteen Dandelions and two
DandyTigers by Xerox. We use the Symbolics as our research machines, and the
Xerox machines for teaching AI.
The Symbolics are more powerful, as Steve says, and quite possibly he is right
about the power per dollar being less for them than for Xerox; since the Xerox
machines were free to us, certainly he's right in our case! :-) However, I
find the Dandelions abysmally slow for even small Lisp programs, on the order
of the ones we use in teaching (GPS (baby version), micro versions of SAM,
ELI, etc.). To contemplate using them for the very large programs that we
develop as our research would be absurd--in my opinion, of course.
The "standard" of CommonLisp will (so Xerox tells us) be available for the
Dandelions soon...'course, they've been saying that for some time now :-). So
the two machines may potentially be equal on that basis. ZetaLisp is quite
close to CommonLisp (since it was one of the dialects Common Lisp is based
on), and also close to other major dialects of lisp--Maclisp, etc.--enough so
that I've never had any trouble switching between it and other lisps...with
one exception--you guessed it, Interlisp-D. I realize that whatever you are
used to colors your thinking, but Lord, that lisp seems weird to me! I mean,
comments that return values?? Gimme a break!
"People are ignorant of what the Dandelion has to offer." I agree. I'm one
of the people. It has nice windows, much less complicated than Symbolics.
MasterScope is nice, too. So is the structure editor, but that is not too
much of a problem to write on any other lisp machine, and is somewhat
confusing to learn (at least, that's the attitude I perceive in the students).
What the Dandelions *lack*, however, is any decent file manipulation
facilities (perhaps Common Lisp will fix this), a nice way of handling
processes, a communications package that works (IP-TCP, at least the copy we
received, will trash the hard disk when our UNIX machines write to the
DandyTigers...the only thing that works even marginally well is when we send
files from the Symbolics! Also, the translation portion of the communication
package leaves extraneous line-feeds, etc., lying about in the received file),
and A DECENT EDITOR! Which brings us to the next point made by Steve:
>4) Edit/debug cycle (and editor) very similar to old standard systems
> such as Unix/C/Emacs or TOPS/Pascal/Emacs, and therefore easier
> for beginners with previous experience.
This is true. However, it is also easier for experts and semi-experts (like
me) who may or may not have had prior experience with EMACS. The Dandelions
offer a structure editor (and Tedit for text, but that doesn't count) and
that's it...if you want to edit something, you do it function by function.
Typically, what I do and what other people do on the Xerox machines is enter a
function in the lisp window, which makes it very difficult to keep track of
what you are doing in the function, and makes it mandatory that you enter
one function at a time. Also, the function is immediately evaluated (the
defineq is, that is) and becomes part of your environment. Heaven help you if
you didn't really mean to do it! At least with ZMACS you can look over a file
before evaluating it. Another gripe. Many of our programs used property
lists, laboriously entered via the lisp interactor. We do a makefile, and
voila--next time we load the file, the properties aren't there! This has yet
to happen when something is put in an edit buffer and saved to disk on the
Symbolics. Perhaps there is a way of editing on the Xerox machines that lends
itself to editing files (and multiple files at once), so that large programs
can be entered, edited, and documented (Interlisp-D comments are rather bad
for actually documenting code) easily...if so, I haven't found it.
Another point in Symbolics' favor: reliability. Granted, it sometimes isn't
that great for Symbolics, either, but we have had numerous, *numerous*
software and hardware failures on the Dandelions. It's so bad that we have to
make sure the students save their functions to disk often, and have even had
to teach them how to copy sysouts and handle dead machines, since the machines
lock up from time to time with no apparent cause. And the students must be
cautioned not to save their stuff only to one place, but to save it to the
file server, a floppy, and anywhere else they can, since floppies are trashed
quite often. Dribble to the hard disk, forget to turn dribble off, there goes
the hard disk... Type (logout t) on the Dandelions to cause it not to save
your world, and there goes the Dandelion (it works on the DandyTigers).
About worlds and sysouts. The Symbolics has a 24-30 meg world, something like
that. This is *not* just lisp--it is your virtual memory, just as it is in a
Xerox Sysout. The difference in size reflects the amount of space you have at
your disposal when creating conses, not the relative sizes of system software
(though I imagine ZetaLisp is larger than Interlisp-D). You do not
necessarily save a world each time you logout from a Symbolics; you do on a
Dandelion...thus the next user who reboots a Symbolics gets a clean lisp,
whereas the next user of a Dandelion gets what was there before unless he
first copies another sysout and boots off of it. It is, however, much harder
to save a world on the Symbolics than on the Xerox machines.
Well, I suppose I have sounded like a salesman for Symbolics. I do not mean
to imply that Symbolics machines are without faults, nor do I mean to say that
Xerox machines are without merit! We are quite grateful for the gift of the
Xerox machines; they are useful for teaching. I just tried to present the
opinions of one Symbolics-jaded lisp machine user.
Back to the Symbolics machine now...I suppose that the DandyTiger beside it
will bite me! :-)
Roy
------------------------------
Date: 6 Sep 86 22:36:43 GMT
From: jade!violet.berkeley.edu!mkent@ucbvax.Berkeley.EDU
Subject: Re: Xerox Dandelion vs. Symbolics?
As a long-term user of Interlisp-D, I'd be very interested in hearing an
*informed* comparison of it with ZetaLisp. However, I'm not particularly
interested in hearing what an experienced Zetalisp user with a couple of
hours of Interlisp experience has to say on the topic, other than in
regard to issues of transfer and learnability. I spent about 4 days using
the Symbolics, and my initial reaction was that the user interface was out
of the stone age. But I realize this has more to do with *my* background
than with Zetalisp itself.
Is there anyone out there with *non-trivial* experience with *both*
environments who can shed some light on the subject?
Marty Kent
"You must perfect the Napoleon before they finish Beef Wellington! The
future of Europe hangs in the balance..."
------------------------------
Date: 9 Sep 86 06:14:00 GMT
From: uiucdcsp!hogge@a.cs.uiuc.edu
Subject: Re: Xerox Dandelion vs. Symbolics?
>...I spent about 4 days using
>the Symbolics, and my initial reaction was that the user interface was out
>of the stone age. But I realize this has more to do with *my* background
>than with Zetalisp itself.
Four days *might* be enough time to familiarize yourself with the help
mechanisms, if that's specifically what you were concentrating on doing.
Once you learn the help mechanisms (which aren't bundled all that nicely and
are rarely visible on the screen), your opinion of the user interface will
grow monotonically with use. If you are interested in having more visible
help mechanisms for first-time users, check out what the TI Explorer adds to
the traditional Zetalisp environment. LMI and Sperry also provide their own
versions of the environment.
--John
------------------------------
Date: 10 Sep 86 10:35:40 GMT
From: mob@MEDIA-LAB.MIT.EDU (Mario O. Bourgoin)
Subject: Re: Xerox Dandelion vs. Symbolics?
In article <3500016@uiucdcsp>, hogge@uiucdcsp.CS.UIUC.EDU writes:
> >...I spent about 4 days using
> >the Symbolics, and my initial reaction was that the user interface was out
> >of the stone age.....
>
> Four days *might* be enough time to familiarize yourself with the help
> mechanisms, if that's specifically what you were concentrating on doing.
Four days to learn the help mechanisms? Come on, an acceptable user
interface should give you control of help within minutes _not days_.
Seriously folks, it took me less than 10 seconds to learn about
ZMACS's apropos on the old CADRs and before the end of the day, I knew
about a lot more. Have you ever used the "help" key? The Symbolics
software isn't much different from the CADR's. I'll grant that the
lispm's presentation of information isn't that obvious or elegant but
it isn't stone age and doesn't require 4 days to get a handle on.
If you're arguing internals, I haven't worked with the Dandelion so I
can't provide an opinion on it. The CADR's user interface software was
certainly featureful and appeared to my eyes to come from a different
school than what I later saw of Xerox's software. It is useful and
manipulable but didn't look intended to be programmed by anyone just
off the street. If you want to learn the internals of the user
interface, _then_ I'll grant you four days (and more).
--Mario O. Bourgoin
------------------------------
Date: 10 Sep 86 15:23:29 GMT
From: milano!Dave@im4u.utexas.edu
Subject: Re: 36xx vs. Xerox
A few to add to pro-36xx list:
5. Reliable hardware
6. Reliable software
7. Good service
A year ago, I was on project which used
Dandeanimals. As a group, they were up about 60% of the time, and
there were days when all 5 were down. The extra screw was that
the first level of repair was a photocopier repairman. It always
took several days before we got people who knew something about the
machines.
Dave Bridgeland -- MCC Software Technology (Standard Disclaimer)
ARPA: dave@mcc.arpa
UUCP: ut-sally!im4u!milano!daveb
"Things we can be proud of as Americans:
* Greatest number of citizens who have actually boarded a UFO
* Many newspapers feature "JUMBLE"
* Hourly motel rates
* Vast majority of Elvis movies made here
* Didn't just give up right away during World War II
like some countries we could mention
* Goatees & Van Dykes thought to be worn only by weenies
* Our well-behaved golf professionals
* Fabulous babes coast to coast"
------------------------------
End of AIList Digest
********************
∂20-Sep-86 0139 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #191
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 20 Sep 86 01:39:06 PDT
Date: Fri 19 Sep 1986 17:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #191
To: AIList@SRI-STRIPE
AIList Digest Saturday, 20 Sep 1986 Volume 4 : Issue 191
Today's Topics:
AI Tools - Xerox Dandelion vs. Symbolics
----------------------------------------------------------------------
Date: 8 Sep 86 17:35:52 GMT
From: hpcea!hpfcdc!hpcnoe!jad@hplabs.hp.com (John Dilley)
Subject: Re: Xerox Dandelion vs. Symbolics? [vs. Tek 4400 series]
> Why do people choose Symbolics/ZetaLisp/CommonLisp over
> Xerox Dandelion/Interlisp?
> ...
> 3) People are ignorant of what the Dandelion has to offer.
I have a file of quotes, one of which has to do with this
problem Xerox seems to have. I've heard great things about
Dandelion/Interlisp, and their Smalltalk environments, but have
never seen one of these machines in "real life" (whatever that
is). Anyway, the quote I was referring to is:
"It doesn't matter how great the computer is if nobody buys it. Xerox
proved that."
-- Chris Espinosa
And while we're at it ... remember Apple?
"One of the things we really learned with Lisa and from looking at what
Xerox has done at PARC was that we could construct elegant, simple systems
based on just a bit map..."
-- Steve Jobs
Seems like Xerox needed more advertising or something. It's a
shame to see such nice machines go unnoticed by the general
public, especially considering what choices we're often left with.
-- jad --
John A Dilley
Phone: (303)229-2787
Email: {ihnp4,hplabs} !hpfcla!jad
(ARPA): hpcnoe!jad@hplabs.ARPA
Disclaimer: My employer has no clue that I'm going to send this.
------------------------------
Date: 11 Sep 86 17:58:23 GMT
From: gatech!royt@seismo.css.gov (Roy M Turner)
Subject: Re: Xerox Dandelion vs. Symbolics?
In response to a prior posting by me, Marty (mkent@violet.berkeley.edu) writes:
>
> As a long-term user of Interlisp-D, I'd be very interested in hearing an
>*informed* comparison of it with ZetaLisp. However, I'm not particularly
>interested in hearing what an experienced Zetalisp user with a couple of
>hours of Interlisp experience has to say on the topic...
> ...
Who, me? :-)
If I was unclear in my posting, I apologize. I have had a bit more than two
hours of experience w/ Dandelions. I used them in a class I was taking, and
also was partly responsible for helping new users and for maintaining some
of the software on them. Altogether about 4 months of fairly constant use.
Another posting said we were using outdated software; that is undoubtedly
correct, as we just got Coda; we were using Intermezzo. Some problems
are probably fixed. However, we have not received the new ip-tcp from
Xerox...but, what do you expect with free machines? :-)
Roy
Above opinions my own...'course, they *should* be everyone's! :-)
Roy Turner
School of Information and Computer Science
Georgia Institute of Technology, Atlanta Georgia, 30332
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!royt
------------------------------
Date: 12 Sep 86 14:58:07 GMT
From: wdmcu%cunyvm.bitnet@ucbvax.Berkeley.EDU
Subject: Re: Xerox Dandelion vs. Symbolics?
In article <3500016@uiucdcsp>, hogge@uiucdcsp.CS.UIUC.EDU says:
>Once you learn the help mechanisms (which aren't bundled all that nicely and
>are rarely visible on the screen), your opinion of the user interface will
>grow monotonically with use. If you are interested in having more visible
↑↑↑↑↑↑↑↑↑↑↑↑↑
Could you please define this word in this context.
Thanks.
(This is a serious question)
/*--------------------------------------------------------------------*/
/* Bill Michtom - work: (212) 903-3685 home: (718) 788-5946 */
/* */
/* WDMCU@CUNYVM (Bitnet) Timelessness is transient */
/* BILL@BITNIC (Bitnet) */
/* */
/* Never blame on malice that which can be adequately */
/* explained by stupidity. */
/* A conclusion is the place where you got tired of thinking. */
/*--------------------------------------------------------------------*/
------------------------------
Date: 12 Sep 86 07:31:00 GMT
From: uiucdcsp!hogge@a.cs.uiuc.edu
Subject: Re: Xerox Dandelion vs. Symbolics?
>> Four days *might* be enough time to familiarize yourself with the help
>> mechanisms, if that's specifically what you were concentrating on doing.
>
>Four days to learn the help mechanisms? Come on, an acceptable user
>interface should give you control of help within minutes _not days_.
>Seriously folks, it took me less than 10 seconds to learn about
>ZMACS's apropos on the old CADRs and before the end of the day, I knew
>about a lot more. Have you ever used the "help" key? The Symbolics
>software isn't much different from the CADR's. I'll grant that the
>lispm's presentation of information isn't that obvious or elegant but
>it isn't stone age and doesn't require 4 days to get a handle on.
There's more subtle help available on the machine than just the help key,
and my experience is that it takes a long time for one to learn the
mechanisms that are there. The HELP key *is* the main source of help, but not
the only source. Examples include: 1. use of Zmacs meta-point to find
examples of how to do things (such as hack windows) from the system source,
2. use of c-/ in the Zmacs minibuffer for listing command completions (and
what a drag if you don't know about this command) 3. the importance of
reading who-line documentation 4. use of the Apropos function to hunt down
useful functions, as well as WHO-CALLS 5. use of the various Lisp Machine
manufacturer's custom help mechanisms, such as the Symbolics flavor examiner
and documentation examiner, or TI's Lisp-completion input editor commands and
Suggestions Menus.
The Lisp Machine is a big system, and there's lots of good help available.
But it isn't trivial learning how to get it nor when to seek it.
--John
------------------------------
Date: 12 Sep 86 14:42:58 GMT
From: ihnp4!wucs!sbc@ucbvax.Berkeley.EDU (Steve Cousins)
Subject: Re: Xerox Dandelion vs. Symbolics?
In article <322@mit-amt.MIT.EDU> mob@mit-amt.UUCP writes:
>... It is useful and
>manipulable but didn't look intended to be programmed by anyone just
>off the street. If you want to learn the internals of the user
>interface, _then_ I'll grant you four days (and more).
>
>--Mario O. Bourgoin
I think you could argue that *no* machine (AI or otherwise) can be programmed
by anyone just off the street :-). I haven't used the Symbolics, but my
view of the Dandelion has changed drastically since taking a course on it
by Xerox. The interface is very powerful and well-integrated, but the
"infant mortality curve" (the time to get good enough not to crash the
machines) is somewhat high. [Disclaimer: These machines are supposed to
be much better when networked than stand-alone. My change in attitude
occurred just as we got ours on the network, and I'm not sure how much
to attribute to the class, and how much to attribute to the network].
I like the Dandelion now, but the first 4 days did not give me a good
impression of the machine. There is a lot to say about learning a new machine
from a guru...
Steve Cousins ...ihnp4!wucs!sbc or sbc@wucs.UUCP
Washington University
------------------------------
Date: 15 Sep 86 12:58:18 GMT
From: clyde!watmath!watnot!watmum!rgatkinson@caip.rutgers.edu
(Robert Atkinson)
Subject: Re: Xerox Dandelion vs. Symbolics? [vs. Tek 4400 series]
In article <580001@hpcnoe.UUCP> jad@hpcnoe.UUCP (John Dilley) writes:
>> Why do people choose Symbolics/ZetaLisp/CommonLisp over
>> Xerox Dandelion/Interlisp?
>> ...
>> 3) People are ignorant of what the Dandelion has to offer.
>
> I have a file of quotes, one of which has to do with this
> problem Xerox seems to have. I've heard great things about
> Dandelion/Interlisp, and their Smalltalk environments, but have
> never seen one of these machines in "real life" (whatever that
...
Smalltalk is now (finally!) available from Xerox. An organization
known as Parc Place Systems is now licensing both the virtual
image and virtual machine implementations for Suns and other
workstations. For further info contact:
Duane Bay
Parc Place Systems
3333 Coyote Hill Rd
Palo Alto, CA 94304
-bob
------------------------------
Date: 17 Sep 86 16:49:00 GMT
From: princeton!siemens!steve@CAIP.RUTGERS.EDU
Subject: Dandelion vs Symbolics
I have received enough misinformation about Dandelions and Symbolics
machines (by net and mail) that I feel forced to reply. This is not,
however, the last word I have to say. I like to keep the net in suspense,
acting like I'm saving the BIG REVELATION for later.
Key: S= Symbolics, X = Xerox Dandelions
- point against X, for S
= or + point for X, against S
* misinformation against X, fact in favor (my opinion of course)
? point not classifiable in previous categories
A writer who prefers to remain anonymous pointed out:
- If your system is bigger than 32 Mb, it can't be done on a Xerox machine.
- It takes a great deal of effort to get good numerical performance on X.
- X. editor is slow on big functions & lists. My opinion is that it is
bad programming style to have such large functions. However, sometimes
the application dictates and so this is a point.
* "Garbage collection is much more sophisticated on Symbolics" To my
knowledge this is absolutely false. S. talks about their garbage
collection more, but X's is better. Discuss this more later if you want.
* Preference for Zmacs with magic key sequences to load or compile portions
of a file, over Dandelion. People who never learn how to use the X system
right have this opinion. more later.
* "Symbolics system extra tools better integrated" Again, to my knowledge
this is false. I know people who say no two tools of S. work together
without modification. I have had virtually no trouble with diverse X.
tools working together.
? "S. has more tools and functions available e.g. matrix pkg." On the other
hand I have heard S. described as a "kitchen sink" system full of many
slightly different versions of the same thing.
There is a general belief that the reason the X system is around 5 - 6 Mb
vs. S. around 24 is that S. includes more tools & packages.
+ When you load in most of the biggest of the tools & packages to the X
system you still are down around 6 - 7 Mb!
+ If your network is set up reasonably, then it is trivial to load whatever
packages you want. It is very nice NOT to have junk cluttering up your
system that you don't want.
? "The difference in size reflects how much space you have for CONSes, etc."
Huh? I have 20Mb available, yet I find myself actually using less than
7Mb. My world is 7Mb. If I CONS a list 3 Mb long, my world will be 10Mb.
Royt@gatech had some "interesting" observations:
+ Performance per dollar: you can get at least 5 X machines for the cost
of a single S machine. AT LEAST. Both types prefer to be on networks
with fileservers etc., which adds overhead to both.
? X abysmally slow for baby GPS etc. My guess is that whoever ported/wrote
the software didn't know how to get performance out of the X machines.
It's not too hard, but it's not always obvious either.
= Xerox is getting on the Commonlisp bandwagon only a little late. But how
"common" is Commonlisp when window packages are machine dependent?
= For every quirk you find in Interlisp (".. Lord, that lisp seems weird to
me! I mean, comments that return values??"), I can find one in Commonlisp.
(Atoms whose print names are 1.2 for example.)
+ X has nice windows, less complicated than S's. No one I know has ever crashed
an X machine by messing with the windows. The opposite holds for the S. machine.
+ X has a structure editor; S. has none.
* "Dandelions *lack* decent file manipulation..." Wrong, comment later.
? he has bad experience with the old IP/TCP package. Me too, but the new
one works great. (The X NS protocols actually are quite good but the rest
of the world doesn't speak them :-().
? "..Typically, what I do and what other people do .. is enter a function in
the lisp window, which makes it very difficult ..." Didn't you realise
you must be doing something wrong? That's not how you enter functions!
You give other examples of how you and your cohorts don't know how to
use the Xerox system right. You're too stuck on the old C & Fortran
kinds of editing and saving stuff.
* He goes on about reliability of X being the pits. Every person I have
known who learned to use the X machine caused it to crash in various
ways, but by the time (s)he had enough experience to be able to explain
what he did to someone else, the machine no longer crashed. I guess
the X machines have a "novice detector". My understanding is that
S has its problems too.
One guy had a bad experience with KEE, which was developed on X. I do not
think his experience is representative. What he did say was that it kept
popping up windows he didn't want; X systems make much more use of
sophisticated window- and graphics-oriented tools and interfaces than S,
but they don't often pop up useless windows in general.
Dave@milano thinks S offers reliable hardware, reliable software, and
good service that X doesn't. WRONG! At his site, they were obviously
doing something systematically wrong with their machines, and they didn't
get a good repairman. I can give you horror stories about Symbolics, too,
but I have some pretty reliable points:
+ At a site I know, they have around 20 S. machines, sophisticated users,
and they do their own board swapping. Still they have 10% downtime.
+ At my site we have very roughly 20 machine-years with X. Total downtime
less than 2 machine weeks.
+ S. has such hardware problems that a) they have a "lemon" program where
you can return your machine for a new one, b) their service contracts
are OUTRAGEOUSLY EXPENSIVE!
These lisp machines are very complex systems. If you don't have someone
who already knows the right ways to use the machine to teach you, then it
will take you more than 4 months to learn how to use it to the best
advantage. Hell, I've been using a Dandelion almost constantly for close
to three years and there are still subsystems that I only know superficially,
and which I know I could make better use of! If the same isn't true of
Symbolics it can only be because the environment is far less rich. It is
not difficult to learn these subsystems; the problem is there's just SO
MUCH to learn. Interlisp documentation was just re-done and it's 4.5 inches
thick! (Used to be only 2.25)
Finally, I will expound a little on why Xerox is better than Symbolics.
The Xerox file system and edit/debug cycle are far superior to an old-
fashioned standard system like Symbolics, which has a character-oriented
editor like Zmacs. The hard part for many people in learning the Xerox file
system is that they first have to forget what they know about editors and
files. A lot of people are religious about their editors, so this step
can be nearly impossible. Secondly, the documentation until the new version
was suitable primarily for people who already knew what was going on. That
hurt a lot. (It took me maybe 1.5 years before I really got control of the
file package, but I was trying to learn Lisp in the first place, and
everything else at the same time.) Now it's much much faster to learn.
The old notion of files and editors is like assembly language. Zmacs with
magic key sequences to compile regions etc. is like a modern, good assembler
with powerful macros and data structures and so forth. Xerox's file system
is like Fortran/Pascal/C. Ask the modern assembly programmer what he sees
in Fortran etc. and he'll say "nothing". It'll be hard for him to learn.
He's used to the finer grain of control over the machine that assembly gives
him and he doesn't understand how to take advantage of the higher-level
features of the Fortran-class languages. Before you flame at me too much,
remember I am analogizing to a modern, powerful assembler, not the trash
you used 5 years ago on your TRS-80. The Xerox file package treats a file
as a database of function definitions, variable values, etc. and gives you
plenty of power to deal with them as databases. This note is long enough
and I don't know what else to say so I'll drop this topic somewhat unfinished
(but I will NOT give lessons on how to use the Xerox file package).
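To make the "database of definitions" idea concrete, here is a rough sketch in Python rather than Interlisp; every name in it is invented for illustration and is not the actual file-package API:

```python
# Sketch of the "file as database" idea: a source file is a manifest of
# typed objects (functions, variables, ...), and tools query and update
# the database instead of editing raw text. All names here are made up
# for illustration; this is not the real Interlisp file-package interface.

class FileDatabase:
    def __init__(self):
        # maps (kind, name) -> definition, e.g. ("FNS", "FACT") -> source form
        self.objects = {}
        self.changed = set()   # objects edited since the last save

    def define(self, kind, name, definition):
        self.objects[(kind, name)] = definition
        self.changed.add((kind, name))

    def whereis(self, name):
        """Find every kind of object stored under this name."""
        return [k for (k, n) in self.objects if n == name]

    def needs_saving(self):
        """Which objects a save operation would have to write out."""
        return sorted(self.changed)

db = FileDatabase()
db.define("FNS", "FACT", "(LAMBDA (N) ...)")
db.define("VARS", "*BASE*", "10")
print(db.whereis("FACT"))       # ['FNS']
print(db.needs_saving())        # [('FNS', 'FACT'), ('VARS', '*BASE*')]
```

Keying the database by (kind, name) is what lets one name carry a function definition, a variable value, and so on at the same time, with each tool asking only for the kinds it cares about.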
A final final note: the guy down the hall from me has used S. for some
years and now has to learn X. He isn't complaining too much. I hope he'll
post his own remarks soon, but I've got to relate one story. I wanted to
show him something, and of course when I went to run it it didn't work
right. As I spent a minute or two eradicating the bug, he was impressed
by the use of display-oriented tools on the Dandelion. He said, "Symbolics
can't even come close."
Steven J. Clark, Siemens RTL
{ihnp4!princeton or topaz}!siemens!steve
------------------------------
End of AIList Digest
********************
∂21-Sep-86 0022 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #192
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 21 Sep 86 00:22:26 PDT
Date: Sat 20 Sep 1986 22:04-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #192
To: AIList@SRI-STRIPE
AIList Digest Sunday, 21 Sep 1986 Volume 4 : Issue 192
Today's Topics:
Conferences - AI and Law &
Logic in Computer Science &
SIGIR R&D in Information Retrieval &
Logical Solutions to the Frame Problem &
CSCW '86 Program
----------------------------------------------------------------------
Date: 13 Aug 86 20:36:33 EDT
From: MCCARTY@RED.RUTGERS.EDU
Subject: Conference - AI and Law
CALL FOR PAPERS:
First International Conference on
ARTIFICIAL INTELLIGENCE AND LAW
May 27-29, 1987
Northeastern University
Boston, Massachusetts, USA
In recent years there has been an increased interest in the applications of
artificial intelligence to law. Some of this interest is due to the potential
practical applications: A number of researchers are developing legal expert
systems, intended as an aid to lawyers and judges; other researchers are
developing conceptual legal retrieval systems, intended as a complement to the
existing full-text legal retrieval systems. But the problems in this field are
very difficult. The natural language of the law is exceedingly complex, and it
is grounded in the fundamental patterns of human common sense reasoning. Thus,
many researchers have also adopted the law as an ideal problem domain in which
to tackle some of the basic theoretical issues in AI: the representation of
common sense concepts; the process of reasoning with concrete examples; the
construction and use of analogies; etc. There is reason to believe that a
thorough interdisciplinary approach to these problems will have significance
for both fields, with both practical and theoretical benefits.
The purpose of this First International Conference on Artificial Intelligence
and Law is to stimulate further collaboration between AI researchers and
lawyers, and to provide a forum for the latest research results in the field.
The conference is sponsored by the Center for Law and Computer Science at
Northeastern University. The General Chair is: Carole D. Hafner, College of
Computer Science, Northeastern University, 360 Huntington Avenue, Boston MA
02115, USA; (617) 437-5116 or (617) 437-2462; hafner.northeastern@csnet-relay.
Authors are invited to contribute papers on the following topics:
- Legal Expert Systems
- Conceptual Legal Retrieval Systems
- Automatic Processing of Natural Legal Texts
- Computational Models of Legal Reasoning
In addition, papers on the relevant theoretical issues in AI are also invited,
if the relationship to the law can be clearly demonstrated. It is important
that authors identify the original contributions presented in their papers, and
that they include a comparison with previous work. Each submission will be
reviewed by at least three members of the Program Committee (listed below), and
judged as to its originality, quality and significance.
Authors should submit six (6) copies of an Extended Abstract (6 to 8 pages) by
January 15, 1987, to the Program Chair: L. Thorne McCarty, Department of
Computer Science, Rutgers University, New Brunswick NJ 08903, USA; (201)
932-2657; mccarty@rutgers.arpa. Notification of acceptance or rejection will
be sent out by March 1, 1987. Final camera-ready copy of the complete paper
(up to 15 pages) will be due by April 15, 1987.
Conference Chair: Carole D. Hafner Northeastern University
Program Chair: L. Thorne McCarty Rutgers University
Program Committee: Donald H. Berman Northeastern University
Michael G. Dyer UCLA
Edwina L. Rissland University of Massachusetts
Marek J. Sergot Imperial College, London
Donald A. Waterman The RAND Corporation
------------------------------
Date: Tue, 9 Sep 86 09:26:57 PDT
From: Moshe Vardi <vardi@navajo.stanford.edu>
Subject: Conference - Logic in Computer Science
CALL FOR PAPERS
SECOND ANNUAL SYMPOSIUM ON
LOGIC IN COMPUTER SCIENCE
22 - 25 June 1987
Cornell University, Ithaca, New York, USA
THE SYMPOSIUM will cover a wide range of theoretical and practical
issues in Computer Science that relate to logic in a broad sense,
including algebraic and topological approaches.
Suggested (but not exclusive) topics of interest include: abstract
data types, computer theorem proving, verification, concurrency, type
theory and constructive mathematics, data base theory, foundations of
logic programming, program logics and semantics, knowledge and belief,
software specifications, logic-based programming languages, logic in
complexity theory.
Organizing Committee
K. Barwise E. Engeler A. Meyer
W. Bledsoe J. Goguen R. Parikh
A. Chandra (chair) D. Kozen G. Plotkin
E. Dijkstra Z. Manna D. Scott
Program Committee
S. Brookes D. Gries (chair) J.-P. Jouannaud A. Nerode
L. Cardelli J. Goguen R. Ladner G. Plotkin
R. Constable Y. Gurevich V. Lifschitz A. Pnueli
M. Fitting D. Harel G. Longo P. Scott
PAPER SUBMISSION. Authors should send 16 copies of a detailed abstract
(not a full paper) by 9 DECEMBER 1986 to the program chairman:
David Gries -- LICS (607) 255-9207
Department of Computer Science gries@gvax.cs.cornell.edu
Cornell University
Ithaca, New York 14853
Abstracts must be clearly written and provide sufficient detail to allow the
program committee to assess the merits of the paper. References and
comparisons with related work should be included where appropriate. Abstracts
must be no more than 2500 words. Late abstracts or abstracts departing
significantly from these guidelines run a high risk of not being considered.
If a copier is not available to the author, a single copy of the abstract
will be accepted.
Authors will be notified of acceptance or rejection by 30 JANUARY 1987.
Accepted papers, typed on special forms for inclusion in the symposium
proceedings, will be due 30 MARCH 1987.
The symposium is sponsored by the IEEE Computer Society, Technical
Committee on Mathematical Foundations of Computing and Cornell
University, in cooperation with ACM SIGACT, ASL, and EATCS.
GENERAL CHAIRMAN LOCAL ARRANGEMENTS
Ashok K. Chandra Dexter C. Kozen
IBM Thomas J. Watson Research Center Department of Computer Science
P.O. Box 218 Cornell University
Yorktown Heights, New York 10598 Ithaca, New York 14853
(914) 945-1752 (607) 255-9209
ashok@ibm.com kozen@gvax.cs.cornell.edu
------------------------------
Date: Tue, 12 Aug 86 16:16:01 cdt
From: Don <kraft%lsu.csnet@CSNET-RELAY.ARPA>
Subject: Conference - SIGIR Conf. on R&D in Information Retrieval
Association for Computing Machinery (ACM)
Special Interest Group on Information Retrieval (SIGIR)
1987 International Conference on Research and Development
in Information Retrieval
June 3-5, 1987 Monteleone Hotel (in the French Quarter)
New Orleans, Louisiana USA
CALL FOR PAPERS
Papers are invited on theory, methodology, and applications
of information retrieval. Emerging areas related to infor-
mation retrieval, such as office automation, computer
hardware technology, and artificial intelligence and natural
language processing are welcome.
Topics include, but are not limited to:
retrieval system modeling user interfaces
retrieval in office environments mathematical models
system development and evaluation natural language processing
knowledge representation linguistic models
hardware development complexity problems
multimedia retrieval storage and search techniques
cognitive and semantic models retrieval system performance
information retrieval and database management
Submitted papers can be either full length papers of approx-
imately twenty to twenty-five pages or extended abstracts of
no more than ten pages. All papers should contain the
authors' contributions in comparison to existing solutions
to the same or to similar problems.
Important Dates
Submission Deadline December 15, 1986
Acceptance Notification February 15, 1987
Final Copy Due March 20, 1987
Conference June 3-5, 1987
Four copies of each paper should be submitted. Papers sub-
mitted from North America can be sent to Clement T. Yu; sub-
missions from outside North America should be sent to C. J.
"Keith" van Rijsbergen.
Conference Chairman:
Donald H. Kraft
Department of Computer Science
Louisiana State University
Baton Rouge, LA 70803
(504) 388-1495

Program Co-Chairmen:
Clement T. Yu
Department of Electrical Engineering and Computer Science
University of Illinois, Chicago
Chicago, IL 60680
(312) 996-2318

C. J. "Keith" van Rijsbergen
Department of Computer Science
University of Glasgow
Lilybank Gardens
Glasgow G12 8QQ
SCOTLAND
(041) 339-8855
For details, contact the Conference Chairman at kraft%lsu@csnet-relay or
Michael Stinson, the Arrangements Chairman at stinson%lsu@csnet-relay.
Don Kraft
kraft%lsu@csnet-relay
------------------------------
Date: Fri, 19 Sep 86 16:04:04 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Conference - Logical Solutions to the Frame Problem
CALL FOR PAPERS
WORKSHOP ON LOGICAL SOLUTIONS TO THE FRAME PROBLEM
The American Association for Artificial Intelligence (AAAI) is
sponsoring this workshop in Lawrence, Kansas, from March 23 to March 25, 1987.
The frame problem is one of the most fundamental problems in
Artificial Intelligence and essentially is the problem of describing in
a computationally reasonable manner what properties persist and what
properties change as actions are performed. The intrinsic problem lies in
the fact that we cannot expect to be able to exhaustively list for every
possible action (or combination of concurrent actions) and for every
possible state of the world how that action (or concurrent actions) change
the truth or falsity of each individual fact. We can only list the obvious
results of the action and hope that our basic inferential system will be
able to deduce the truth or falsity of the other less obvious facts.
In recent years there have been a number of approaches to constructing
new kinds of logical systems such as non-monotonic logics, default logics,
circumscription logics, modal reflexive logics, and persistence logics which
hopefully can be applied to solving the frame problem by allowing the missing
facts to be deduced. This workshop will attempt to bring together the
proponents of these various approaches.
Papers on logics applicable to the problem of reasoning about such
unintended consequences of actions are invited for consideration. Two
copies of either an extended abstract or a full length paper should be
sent to the workshop chairman before Nov 20, 1986. Acceptance notices will
be mailed by December 1, 1986, along with instructions for preparing the final
versions of accepted papers. The final versions are due January 12, 1987.
In order to encourage vigorous interaction and exchange of ideas
the workshop will be kept small -- about 25 participants. There will
be individual presentations and ample time for technical discussions.
An attempt will be made to define the current state of the art and future
research needs.
Partial travel support (from AAAI) for participants is available.
Workshop Chairman:
Dr. Frank M. Brown
Dept Computer Science
110 Strong Hall
The University of Kansas
Lawrence, Kansas
(913) 864-4482
Please send any net inquiries to: veach@ukans.csnet
------------------------------
Date: Tue 2 Sep 86 15:20:55-EDT
From: Irene Greif <GREIF@XX.LCS.MIT.EDU>
Subject: Conference - CSCW '86 Program
Following is the program for CSCW '86: the Conference on
Computer-Supported Cooperative Work . Registration material can
be obtained from Barbara Smith at MCC (basmith@mcc).
[Contact the author for the full program. -- KIL]
------------------------------
End of AIList Digest
********************
∂21-Sep-86 0150 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #193
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 21 Sep 86 01:50:47 PDT
Date: Sat 20 Sep 1986 23:11-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #193
To: AIList@SRI-STRIPE
AIList Digest Sunday, 21 Sep 1986 Volume 4 : Issue 193
Today's Topics:
Seminars - Backtrack Search for Constraint Satisfaction (SRI) &
Minisupercomputers and AI Machines (CMU) &
Equal Opportunity Interactive Systems (SU),
Seminars (Past) - AI in Communication Networks (Rutgers) &
Goal Integration in Heuristic Algorithm Design (Rutgers) &
Long-Term Planning Systems (TI) &
Learning by Understanding Analogies (SRI) &
Belief Revision (SRI) &
Factorization in Experiment Generation (SRI)
----------------------------------------------------------------------
Date: Wed 17 Sep 86 13:39:03-PDT
From: Amy Lansky <LANSKY@SRI-VENICE.ARPA>
Subject: Seminar - Backtrack Search for Constraint Satisfaction (SRI)
IMPROVING BACKTRACK SEARCH ALGORITHMS
FOR CONSTRAINT-SATISFACTION PROBLEMS
Rina Dechter (DECHTER@CS.UCLA.EDU)
Cognitive System Laboratory, Computer Science Department, U.C.L.A.
and
Artificial Intelligence Center, Hughes Aircraft Company
11:00 AM, TUESDAY, September 23
SRI International, Building E, Room EJ228
The subject of improving search efficiency has been on the agenda of
researchers in the area of Constraint-Satisfaction Problems (CSPs)
for quite some time. A recent increase of interest in this subject,
concentrating on backtrack search, can be attributed to its use as the
control strategy in PROLOG, and in Truth-Maintenance-Systems (TMS).
The terms ``intelligent backtracking'', ``selective backtracking'',
and ``dependency-directed backtracking'' describe various efforts for
producing improved dialects of backtrack search in these systems. In
this talk I will review the common features of these attempts and will
present two schemes for enhancing backtrack search in solving CSPs.
The first scheme, a version of "look-back", guides the decision of
what to do in dead-end situations. Specifically, we concentrate on
the idea of constraint recording, namely, analyzing and storing the
reasons for the dead-ends, and using them to guide future decisions,
so that the same conflicts will not arise again. We view constraint
recording as a process of learning, and examine several possible
learning schemes studying the tradeoffs between the amount of learning
and the improvement in search efficiency.
The second improvement scheme exploits the fact that CSPs whose
constraint graph is a tree can be solved easily, i.e., in linear time.
This leads to the following observation: If, in the course of a
backtrack search, the subgraph resulting from removing all nodes
corresponding to the instantiated variables is a tree, then the rest
of the search can be completed in linear time. Consequently, the aim
of ordering the variables should be to instantiate as quickly as
possible a set of variables that cut all cycles in the
constraint-graph (cycle-cutset). This use of cycle-cutsets can be
incorporated in any given "intelligent" backtrack and is guaranteed to
improve it (subject to minor qualifications).
The performance of these two schemes is evaluated both theoretically
and experimentally using randomly generated problems as well as
several "classical" problems described in the literature.
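As a rough illustration of the second scheme (a sketch of the idea, not Dechter's actual algorithm), here is a minimal backtracking solver in Python whose static variable ordering puts a cycle-cutset first. In the example, the constraint graph is a triangle x-y-z plus a leaf w, so instantiating the cutset {x} first leaves a tree:

```python
# Minimal backtracking solver for binary CSPs. Constraints map an ordered
# pair of variables to a predicate that must hold between their values.

def consistent(var, value, assignment, constraints):
    """Check value for var against all already-assigned neighbors."""
    for (a, b), pred in constraints.items():
        if a == var and b in assignment and not pred(value, assignment[b]):
            return False
        if b == var and a in assignment and not pred(assignment[a], value):
            return False
    return True

def backtrack(order, domains, constraints, assignment=None):
    """Depth-first search; 'order' fixes the variable instantiation order."""
    assignment = dict(assignment or {})
    if len(assignment) == len(order):
        return assignment
    var = order[len(assignment)]
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(order, domains, constraints, assignment)
            if result:
                return result
            del assignment[var]
    return None

# Triangle x-y-z (all different) plus a leaf w != z. The cutset {x} comes
# first in the ordering; once x is assigned, the remaining graph y-z-w is
# a tree, which is the case the abstract notes is solvable in linear time.
ne = lambda p, q: p != q
constraints = {("x", "y"): ne, ("y", "z"): ne, ("x", "z"): ne, ("z", "w"): ne}
domains = {v: [0, 1, 2] for v in "xyzw"}
print(backtrack(["x", "y", "z", "w"], domains, constraints))
# {'x': 0, 'y': 1, 'z': 2, 'w': 0}
```

A real cycle-cutset method would switch to a dedicated linear-time tree algorithm after the cutset is instantiated; the point of the sketch is only the ordering.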
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
ALSO: NOTE DAY CHANGE!!! (Tuesday -- this week only)
------------------------------
Date: 17 Sep 86 14:53:24 EDT
From: Barbara.Grandillo@n.sp.cs.cmu.edu
Subject: Seminar - Minisupercomputers and AI Machines (CMU)
Special Computer Science Seminar
Speaker: Professor Kai Hwang
University of Southern California
Title: Design Issues of Minisupercomputers and AI Machines
Date: Monday, September 22, 1986
Time: 12:00 noon
Place: Wean Hall 4605
In this seminar, Dr. Hwang will address the fundamental issues in
designing efficient multiprocessor/multicomputer minisupercomputers or
AI machines. The talk covers the systems architectural choices,
interprocessor communication mechanisms, resource allocation methods,
I/O and OS functions, mapping of parallel algorithms, and the creation
of parallel programming environments for these machines.
These design issues and their possible solutions are drawn from the
following commercial or exploratory systems: Alliant FX/8, FPS
T-Series and M64 Series, Flex/32, Encore Multimax, Elxsi 6400,
Sequent 8000, Connection Machine, BBN Butterfly, FAIM-1, Alice, Dado,
Soar, and Rediflow.
Dr. Hwang will also assess the technological basis and future trends in
low-cost supercomputing and AI processing.
------------------------------
Date: 19 Sep 86 0845 PDT
From: Rosemary Napier <RFN@SAIL.STANFORD.EDU>
Subject: Seminar - Equal Opportunity Interactive Systems (SU)
Computer Science Colloquium
Tuesday, October 7, 1986, 4:15PM, Terman Auditorium
"Equal Opportunity Interactive Systems and Innovative Design"
Harold Thimbleby
Dept. of Computer Science
University of York
Heslington, York
United Kingdom YO1 5DD
Most interactive systems distinguish between the input and output
of information. Equal opportunity is a design heuristic that
discards these distinctions; it was inspired by polymodality
in logic programming and a well-known problem solving heuristic.
The seminar makes the case for equal opportunity, and shows how
several user engineering principles, techniques and systems can
be reappraised under equal opportunity.
By way of illustration, equal opportunity is used to guide the
design of a calculator and spreadsheet. The resulting systems
have declarative user interfaces and are arguably easier to
use despite complex operational models.
About the speaker: Harold Thimbleby did his doctoral research in
user interface design. He joined the Computer Science department
at York in 1982 and is currently on sabbatical at the Knowledge
Sciences Institute, Calgary. He is currently writing a book on
the application of formal methods as heuristics for user interface
design.
------------------------------
Date: 8 Sep 86 23:50:47 EDT
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - AI in Communication Networks (Rutgers)
The first speaker of this year's Machine Learning Seminar series at Rutgers
will be Andrew Jennings of Telecom Australia, speaking on "AI in
Communication Networks". Dr. Jennings will speak in Hill-423 at 11 AM on
THURSDAY, September 18th (NB: this is NOT the standard day for the ML
series). The abstract follows:
Andrew Jennings
(Arpanet address: munnari!trlamct.oz!andrew@seismo.CSS.GOV)
Telecom Australia
AI in Communication Networks
Expert systems are gaining wide application in communication
systems, especially in the areas of maintenance, design and planning. Where
there are large bodies of existing expertise, expert systems are a useful
programming technology for capturing and making use of that expertise.
However, will AI techniques be limited to retrospective capturing of
expertise, or can they be of use for next-generation communication systems?
This talk will present several projects that aim to make use of AI
techniques in next-generation communication networks. An important aspect
of these systems is their ability to learn from experience.
This talk will discuss some of the difficulties in developing
learning in practical problem domains, and the value of addressing these
difficulties now. In particular, the problem of learning in intractable
problem domains is of great importance here, and some ongoing
work on it will be presented. The projects discussed include a
system for capacity assignment in networks, a project to develop AI systems
for routing in fast packet networks and a system for VLSI design from a high
level specification.
------------------------------
Date: 9 Sep 86 12:43:20 EDT
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Goal Integration in Heuristic Algorithm Design
(Rutgers)
Next week, on Tuesday, September 16th in Hill 423 at 11 AM, Jack
Mostow will give a talk based on his work with Kerstin Voigt, entitled
"A Case Study of Goal Integration in Heuristic Algorithm Design".
This is a joint ML/III seminar, and is a dry run for a talk being given at the
Knowledge Compilation Workshop. There's no paper for the talk, but Jack
recommends his AAAI86 article with Bill Swartout as good background reading.
The abstract follows:
Jack Mostow
Rutgers University
(Arpanet address: MOSTOW@RED.RUTGERS.EDU)
A Case Study of Goal Integration in Heuristic Algorithm Design:
A Transformational Rederivation of MYCIN's Therapy Selection Algorithm
An important but little-studied aspect of compiling knowledge into
efficient procedures has to do with integrating multiple, sometimes
conflicting goals expressed as part of that knowledge. We are
developing an artificial intelligence model of heuristic algorithm
design that makes explicit the interactions among multiple goals. The
model will represent intermediate states and goals in the design
process, transformations that get from one state to the next, and
control mechanisms that govern the selection of which transformation
to apply next. It will explicitly model the multiple goals that
motivate and are affected by each design choice.
We are currently testing and refining the model by using it to explain
the design of the algorithm used for therapy selection in the medical
expert system MYCIN. Previously we analyzed how this algorithm
derives from the informal specification "Find the set of drugs that
best satisfies the medical goals of maximizing effectiveness,
minimizing number of drugs, giving priority to treating likelier
organisms, [etcetera]." The reformulation and integration of these
goals is discussed in Mostow & Swartout's AAAI86 paper. Doctoral
student Kerstin Voigt is implementing a complete derivation that will
address additional goals important in the design of the algorithm,
such as efficient use of time, space, and experts.
------------------------------
Date: Mon 18 Aug 86 16:28:02-CDT
From: Rajini <Rajini%ti-csl.csnet@CSNET-RELAY.ARPA>
Subject: Seminar - Long-Term Planning Systems (TI)
Dr. Jim Hendler, Assistant Professor at Univ of Maryland, is giving a
special seminar at 10:00 am on August 28th. Abstract of his talk follows.
It will be held in Conference room #2, Computer Science Center, Texas
Instruments, Dallas.
--Rajini
rajini@ti-csl
(214) 995-0779
Long-term planning systems
James Hendler
Computer Science Dept.
University of Maryland
College Park, Md. 20903
Most present day planning systems work in domains where a single goal is
planned for a single user. Further, the only object changing the world is
the planner itself. The few systems that go beyond this, for example Vere's
DEVISER system, tend to work in domains where the world, although changing,
behaves according to a set of well-defined rules. In this talk we describe
on-going research directed at extending planning systems to function in the
dynamic environments necessary for such tasks as job-shop scheduling,
process control, and autonomous vehicle missions.
The talk starts by describing the inadequacies of present-day systems for
working in such tasks. We focus on two of these, the necessity of a static
domain and the inability to handle large numbers of interacting goals, and
show some of the
extensions needed to handle these systems. We describe an extension to
marker-passing, a parallel, spreading activation system, which can be used
for handling the goal interaction problems, and we discuss representational
issues necessary for handling dynamic worlds. We end by describing work on
a system which is being implemented to deal with these problems.
------------------------------
Date: Tue 19 Aug 86 19:55:33-PDT
From: Margaret Olender <OLENDER@SRI-WARBUCKS.ARPA>
Subject: Seminar - Learning by Understanding Analogies (SRI)
Russell Greiner, Toronto, will be guest speaker at the RR Group's
PlanLunch (August 20, EJ228, 11:00am).
LEARNING BY UNDERSTANDING ANALOGIES
This research describes a method for learning by analogy---i.e., for
proposing new conjectures about a target analogue based on facts known
about a source analogue. After formally defining this process, we
present heuristics which efficiently guide it to the conjectures which
can help solve a given problem. These rules are based on the view
that a useful analogy is one which provides the information needed to
solve the problem, and no more. Experimental data confirms the
effectiveness of this approach.
------------------------------
Date: Wed 20 Aug 86 16:02:46-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Belief Revision (SRI)
IS BELIEF REVISION HARDER THAN YOU THOUGHT?
Marianne Winslett (WINSLETT@SCORE)
Stanford University, Computer Science Department
11:00 AM, MONDAY, Aug. 25
SRI International, Building E, Room EJ228
Suppose one wishes to construct, use, and maintain a database of
knowledge about the real world, even though the facts about that world
are only partially known. In the AI domain, this problem arises when
an agent has a base set of extensional beliefs that reflect partial
knowledge about the world, and then tries to incorporate new, possibly
contradictory extensional knowledge into the old set of beliefs. We
choose to represent such an extensional knowledge base as a logical
theory, and view the models of the theory as possible states of the
world that are consistent with the agent's extensional beliefs.
How can new information be incorporated into the extensional knowledge
base? For example, given the new information that "b or c is true,"
how can we get rid of all outdated information about b and c, add the
new information, and yet in the process not disturb any other
extensional information in the extensional knowledge base? The burden
may be placed on the user or other omniscient authority to determine
exactly which changes in the theory will bring about the desired set
of models. But what's really needed is a way to specify the update
intensionally, by stating some well-formed formula that the state of
the world is now known to satisfy and letting internal knowledge base
mechanisms automatically figure out how to accomplish that update. In
this talk we present semantics and algorithms for an operation to add
new information to extensional knowledge bases, and demonstrate that
this action of extensional belief revision is separate from, and
in practice must occur prior to, the traditional belief revision
processes associated with truth maintenance systems.
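The "b or c" example lends itself to a small concrete sketch. The Python fragment below (the three-atom vocabulary, the predicate encoding of formulas, and the symmetric-difference notion of minimal change are assumptions made for illustration, not details of the speaker's semantics) updates each model of the old theory minimally so that the new formula holds, leaving unrelated facts such as `a` undisturbed:

```python
from itertools import product

ATOMS = ("a", "b", "c")

def models(formula):
    """All truth assignments over ATOMS that satisfy `formula` (a predicate)."""
    return [m for m in (dict(zip(ATOMS, bits))
                        for bits in product([False, True], repeat=len(ATOMS)))
            if formula(m)]

def update(old_models, new_info):
    """Possible-models update: revise each old model minimally so that
    `new_info` holds, leaving unrelated facts undisturbed."""
    candidates = models(new_info)
    result = []
    for m in old_models:
        dist = lambda c: sum(m[x] != c[x] for x in ATOMS)
        least = min(dist(c) for c in candidates)
        for c in candidates:
            if dist(c) == least and c not in result:
                result.append(c)
    return result

# Old belief state: a is true, b and c are false.
old = [{"a": True, "b": False, "c": False}]
# New information: "b or c is true".
new = update(old, lambda m: m["b"] or m["c"])
print(new)
```

Starting from the single model where only `a` holds, the update yields two models, one per minimal way of making "b or c" true, and `a` remains true in both.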
------------------------------
Date: Wed 3 Sep 86 14:51:36-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Factorization in Experiment Generation (SRI)
FACTORIZATION IN EXPERIMENT GENERATION
Devika Subramanian
Stanford University, Computer Science Department
11:00 AM, MONDAY, September 8
SRI International, Building E, Room EJ228
Experiment Generation is an important part of incremental concept
learning. One basic function of experimentation is to gather data
to refine an existing space of hypotheses. In this talk, we examine
the class of experiments that accomplish this, called discrimination
experiments, and propose factoring as a technique for generating
them efficiently.
------------------------------
End of AIList Digest
********************
∂21-Sep-86 0317 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #194
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 21 Sep 86 03:17:07 PDT
Date: Sat 20 Sep 1986 23:17-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #194
To: AIList@SRI-STRIPE
AIList Digest Sunday, 21 Sep 1986 Volume 4 : Issue 194
Today's Topics:
Seminars (Past) - Rule Induction in Computer Chess (ACM LA Chapter) &
Mechanization of Geometry (SU) &
Automatic Algorithm Designer (CMU) &
Representations and Checkerboards (CMU) &
Deriving Problem Reduction Operators (Rutgers) &
Evolution of Automata (SRI) &
Active Reduction of Uncertainty (UPenn) &
Rational Conservatism and the Will to Believe (CMU) &
BiggerTalk: An Object-Oriented Extension to Prolog (UTexas)
----------------------------------------------------------------------
Date: 21 Aug 86 12:01:50 PDT (Thu)
From: ledoux@aerospace.ARPA
Subject: Seminar - Rule Induction in Computer Chess (ACM LA Chapter)
ACM LOS ANGELES CHAPTER DINNER MEETING
WEDNESDAY, 3 SEPTEMBER 1986
STRUCTURED EXPERT RULE INDUCTION
Expert Systems and Computer Chess
Speaker: Dr. Alen Shapiro
One of the major problems with expert systems is "the knowledge
engineering bottleneck." This occurs when development is delayed
because specifications are unavailable and either the expert system
developers need time to learn the problem, or else the domain experts
who already know the problem need time to learn how to use the often
opaque expert system development languages. A promising approach to
overcoming the bottleneck is to build tools that automatically extract
knowledge from the domain experts. This talk presents an overview of
inductive knowledge acquisition and the results of experiments in
inductive rule generation in the domain of chess endgames. The system
that will be described was able to generate humanly-understandable rules
and to play correct chess endgames. This research has significant
implications for the design of expert system languages and rule
induction programs. The talk is also an interesting look into the world
of computer chess.
Dr. Shapiro, a Fellow of the Turing Institute since its inception in
1983, received his Ph.D. in Machine Intelligence from the University of
Edinburgh in 1983. From 1979 to 1986 he was associated with Intelligent
Terminals, Ltd., and was a member of the Rulemaster and Expert-Ease design
teams. He has served as Visiting Professor at the University of
Illinois on two occasions. His publications include articles on pattern
recognition, automatic induction of chess classification rules, and
(with David Michie), "A Self-Commenting Facility for Inductively
Synthesized Endgame Expertise."
In 1986 Dr. Shapiro joined the New Technology Department at Citicorp-TTI
in Santa Monica as a Computer Scientist concerned with the development
of inductive knowledge engineering tools for the banking industry.
PLACE
Amfac Hotel
8601 Lincoln Blvd.
corner of Lincoln & Manchester
Westchester, California
8:00 p.m.
------------------------------
Date: Mon, 18 Aug 86 11:25:38 PDT
From: coraki!pratt@Sun.COM (Vaughan Pratt)
Subject: Seminar - Mechanization of Geometry (SU)
SPEAKER Professor Wu Wen-tsun
TITLE Mechanization of Geometry
DATE Thursday, August 21
TIME 2:00 pm
PLACE Margaret Jacks Hall, room 352
A mechanical method of geometry based on Ritt's characteristic set
theory will be described which has a variety of applications including
mechanical geometry theorem proving in particular. The method has been
implemented on computers by several researchers and turns out to be
efficient for many applications.
BACKGROUND
Professor Wu received his doctorate in France in the 1950's, and was a
member of the Bourbaki group. In the first National Science and
Technology Awards in China in 1956, Professor Wu was one of three
people awarded a first prize for their contributions to science and
technology. He is currently the president of the Chinese Mathematical
Society.
In 1977, Wu extended classical algebraic geometry work of Ritt to an
algorithm for proving theorems of elementary geometry. The method has
recently become well-known in the Automated Theorem Proving community;
at the University of Texas it has been applied to the machine proof
of more than 300 theorems of Euclidean and non-Euclidean geometry.
------------------------------
Date: 5 September 1986 1527-EDT
From: Betsy Herk@A.CS.CMU.EDU
Subject: Seminar - Automatic Algorithm Designer (CMU)
Speaker: David Steier
Date: Friday, Sept. 12
Place: 5409 Wean Hall
Time: 3:30 p.m.
Title: Integrating multiple sources of knowledge in an
automatic algorithm designer
One of the reasons that designing algorithms is so difficult is the
large amount of knowledge needed to guide the design process. In this
proposal, I identify nine sources of such knowledge within four
general areas: general problem-solving, algorithm design and
implementation techniques, knowledge of the application domain,
and methods for learning from experience. To understand how
knowledge from these sources can be represented and integrated, I
propose to build a system that automatically designs algorithms.
An implementation of the system, Designer-Soar, uses several
of the knowledge sources described in the proposal to design several
very simple algorithms. The goal of the thesis is to extend
Designer-Soar to design moderately complex algorithms in a domain
such as graph theory or computational geometry.
------------------------------
Date: 10 September 1986 1019-EDT
From: Elaine Atkinson@A.CS.CMU.EDU
Subject: Seminar - Representations and Checkerboards (CMU)
SPEAKER: Craig Kaplan, CMU, Psychology Department
TITLE: "Representations and Checkerboards"
DATE: Thursday, September 11
TIME: 4:00 p.m.
PLACE: Adamson Wing, BH
Given the right representation, tricky "insight" problems
often become trivial to solve. How do people arrive at the right
representations? What factors affect people's ability to shift
representations, and how can understanding these factors help us
understand why insight problems are so difficult?
Evidence from studies using the Mutilated Checkerboard
Problem points to Heuristic Search as a powerful way of addressing
these questions.  Specifically, it suggests that the quality of
the match between people's readily available search heuristics
and problem characteristics is a major determinant of problem
difficulty for some problems.
------------------------------
Date: 11 Sep 86 20:01:20 EDT
From: RIDDLE@RED.RUTGERS.EDU
Subject: Seminar - Deriving Problem Reduction Operators (Rutgers)
I am giving a practice talk of a talk I will be giving in a few weeks.
It is at 1 pm in 423 on Monday the 15th.
Everyone is invited and all comments are welcome.
The abstract follows.
This research deals with automatically shifting from one problem
representation to another representation which is more efficient, with
respect to a given problem solving method, for this problem class. I
attempt to discover general purpose primitive representation shifts
and techniques for automating them. To achieve this goal, I am
defining and automating the primitive representation shifts explored
by Amarel in the Missionaries & Cannibals problem @cite(amarel1).
The techniques for shifting representations which I have already
defined are: compiling constraints, removing irrelevant information,
removing redundant information, deriving macro-operators, deriving
problem reduction operators, and deriving macro-objects. In this
paper, I will concentrate on the technique for deriving problem
reduction operators (i.e., critical reduction) and a method for
automating this technique (i.e., invariant reduction). A set of
sufficient conditions for the applicability of this technique over a
problem class is discussed; the proofs appear in a forthcoming
Rutgers technical report.
------------------------------
Date: Wed 10 Sep 86 15:00:22-PDT
From: Amy Lansky <LANSKY@SRI-WARBUCKS.ARPA>
Subject: Seminar - Evolution of Automata (SRI)
THE EVOLUTION OF COMPUTATIONAL CAPABILITIES
IN POPULATIONS OF COMPETING AUTOMATA
Aviv Bergman (BERGMAN@SRI-AI)
SRI International
and
Michel Kerszberg
IFF der KFA Julich, W.-Germany
10:30 AM, MONDAY, September 15
SRI International, Building E, Room EJ228
The diversity of the living world has been shaped, it is believed, by
Darwinian selection acting on random mutations. In the present work,
we study the emergence of nontrivial computational capabilities in
automata competing against each other in an environment where
possession of such capabilities is an advantage. The automata are
simple cellular computers with a certain number of parameters -
characterizing the "Statistical Distribution" of the connections -
initially set at random. Each generation of machines is subjected to a
test requiring some computational task to be performed, e.g.,
recognizing whether two presented patterns are translated
versions of each other. "Adaptive Selection" is used during the task
in order to "Eliminate" redundant connections. According to its
grade, each machine either dies or "reproduces", i.e. it creates an
additional machine with parameters nearly identical to its own. The
population, it turns out, quickly learns to perform certain tests.
When the successful automata are "autopsied", it appears that they do
not all complete the task in the same way; certain groups of cells are
more active than others, and certain connections have grown or decayed
preferentially, but these features may vary from individual to
individual. We try to draw some general conclusions regarding the
design of artificial intelligence systems, and the understanding of
biological computation. We also contrast this approach with the usual
Monte-Carlo procedure.
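The selection loop described above can be caricatured in a few lines of Python. Here the "automata" are just bit vectors and the "test" is agreement with a fixed target pattern; both simplifications, along with the population size and mutation rate, are illustrative assumptions only:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]        # stand-in for passing the test

def fitness(genome):
    """Grade a machine: how much of the task does it get right?"""
    return sum(int(g == t) for g, t in zip(genome, TARGET))

def reproduce(genome, rate=0.1):
    """An offspring with parameters nearly identical to its parent's."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
best_start = max(fitness(g) for g in population)

for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]            # the rest "die"
    population = survivors + [reproduce(g) for g in survivors]

best = max(population, key=fitness)
print(best_start, fitness(best))
```

Because the top half survives unchanged each generation, the best grade never decreases; with enough generations the population typically learns the whole pattern.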
------------------------------
Date: Wed, 13 Aug 86 08:51 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Active Reduction of Uncertainty (UPenn)
Active Reduction of Uncertainty in Multi-sensor Systems
Ph.D. Thesis Proposal
Greg Hager
(greg@upenn-grasp)
General Robots and Active Sensory Perception Laboratory
University of Pennsylvania
Department of Computer and Information Sciences
Philadelphia, PA 19104
10:00 AM, August 15, 1986
Room 554 Moore
If robots are to perform tasks in unconstrained environments, they will have
to rely on sensor information to make decisions. In general, sensor
information has some uncertainty associated with it. The uncertainty can be
conceptually divided into two types: statistical uncertainty due to signal
noise, and incompleteness of information due to limitations of sensor scope.
Inevitably, the information needed for proper action will be uncertain. In
these cases, the robot will need to take action explicitly devoted to
reducing uncertainty.
The problem of reducing uncertainty can be studied within the theoretical
framework of team decision theory. Team decision theory considers a number
of decision makers observing the world via information structures, and
taking action dictated by decision rules. Decision rules are evaluated
relative to team and individual utility considerations. In this vocabulary,
sensors are considered as controllable information structures whose behavior
is determined by individual and group utilities. For the problem of
reducing uncertainty, these utilities are based on the information expected
as the result of taking action.
In general, a robot does not only consider direct sensor observations, but
also evaluates and combines that data over time relative to some model of
the observed environment. In this proposal, information aggregation is
modeled via belief systems as studied in philosophy. Reducing uncertainty
corresponds to driving the belief system into one of a set of information
states. Within this context, the issues that will be addressed are the
specification of utilities in terms of belief states, the organization of a
sensor system, and the evaluation of decision rules. These questions will
first be studied through theory and simulation, and finally applied to an
existing multi-sensor system.
Advisor: Dr. Max Mintz
Committee: Dr. Ruzena Bajcsy (Chairperson)
Dr. Zolton Domotor (Philosophy Dept.)
Dr. Richard Paul
Dr. Stanley Rosenschein (SRI International and CSLI)
------------------------------
Date: 10 Sep 1986 0848-EDT
From: Lydia Defilippo <DEFILIPPO@C.CS.CMU.EDU>
Subject: Seminar - Rational Conservatism and the Will to Believe (CMU)
CMU
PHILOSOPHY COLLOQUIUM
JON DOYLE
RATIONAL CONSERVATISM AND THE WILL TO BELIEVE
DATE: MONDAY SEPTEMBER 15
TIME: 4:00 P.M.
PLACE: PORTER HALL, RM 223d
* Much of the reasoning automated in artificial intelligence is either
mindless deductive inference or is intentionally non-deductive. The common
explanations of these techniques, when given, are not very satisfactory, for
the real explanations involve the notion of bounded rationality, while over
time the notion of rationality has been largely dropped from the vocabulary of
artificial intelligence. We present the notion of rational self-government, in
which the agent rationally guides its own limited reasoning to whatever degree
is possible, via the examples of rational conservatism and rationally adopted
assumptions. These ideas offer improvements on the practice of mindless
deductive inference and explanations of some of the usual non-deductive
inferences.
------------------------------
Date: Mon 15 Sep 86 10:35:02-CDT
From: ICS.BROWNE@R20.UTEXAS.EDU
Subject: Seminar - BiggerTalk: An Object-Oriented Extension to Prolog (UTexas)
Object-Oriented Programming Meeting
Friday, September 19
2:00-3:00 p.m.
Taylor 3.128
BiggerTalk:
An Object-Oriented Extension to Prolog
Speaker: Eric Gullichsen
MCC Software Technology Program
BiggerTalk is a system of Prolog routines which provide a capability for
object-oriented programming in Prolog. When compiled into a standard
Prolog environment, the BiggerTalk system permits programming in the
object-oriented style of message passing between objects, themselves
defined as components of a poset (the 'inheritance structure')
created through other BiggerTalk commands. Multiple inheritance of
methods and instance variables is provided dynamically. The full functional
capability of Prolog is retained, and Prolog predicates can be invoked
from within BiggerTalk methods.
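For readers unfamiliar with the style, the core mechanism, message passing with method lookup over an inheritance poset, can be sketched in Python (class names, selectors, and the depth-first lookup order here are invented for illustration; BiggerTalk itself is Prolog-based):

```python
class BTObject:
    """Toy object with message passing over an inheritance poset."""
    def __init__(self, name, parents=(), methods=None):
        self.name = name
        self.parents = list(parents)      # multiple inheritance allowed
        self.methods = dict(methods or {})

    def lookup(self, selector):
        """Depth-first search of the poset for a method handler."""
        if selector in self.methods:
            return self.methods[selector]
        for parent in self.parents:
            found = parent.lookup(selector)
            if found is not None:
                return found
        return None

    def send(self, selector, *args):
        handler = self.lookup(selector)
        if handler is None:
            raise AttributeError(f"{self.name} does not understand {selector}")
        return handler(self, *args)

# Multiple inheritance: 'amphibian' inherits from both 'car' and 'boat'.
car = BTObject("car", methods={"drive": lambda self: f"{self.name} drives"})
boat = BTObject("boat", methods={"sail": lambda self: f"{self.name} sails"})
amphibian = BTObject("amphibian", parents=[car, boat])

print(amphibian.send("drive"))  # amphibian drives
print(amphibian.send("sail"))   # amphibian sails
```

A message not handled by the object itself is searched for among its ancestors, so `amphibian` answers both `drive` and `sail`.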
A provision exists for storage of BiggerTalk objects in the MCC-STP
Object Server, a shared permanent object repository. The common external
form for objects in the Server permits (restricted) sharing of objects
between BiggerTalk and Zetalisp Flavors, the two languages currently
supported by the Server. Concurrent access to permanent objects is
mediated by the server.
This talk will discuss a number of theoretical and pragmatic issues of
concern to BiggerTalk and its interface to the Object Server. Some
acquaintance with the concepts of logic programming and object-oriented
programming will be assumed.
------------------------------
End of AIList Digest
********************
∂25-Sep-86 0011 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #195
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 25 Sep 86 00:11:22 PDT
Date: Wed 24 Sep 1986 21:20-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #195
To: AIList@SRI-STRIPE
AIList Digest Thursday, 25 Sep 1986 Volume 4 : Issue 195
Today's Topics:
Queries - Public-Domain Ops5 & XLisp & Lsmalltalk &
Kyoto Common Lisp & LISP-to-FORTRAN Conversion &
Cognitive Science Schools,
AI Tools - OPS5 & OPSx & UNIX Tools,
Expert Systems - Literature Resources & Implementation Styles
----------------------------------------------------------------------
Date: 20 Sep 86 15:32:14 GMT
From: ritcv!eer@ROCHESTER.ARPA (Ed Reed)
Subject: Public domain Ops5 in any language
I'm looking for one of the versions of OPS5 in lisp (or ?)
that's in the public domain. I've heard that there are pd versions
running around, but haven't found any here, yet.
If in lisp (as I expect) I can use FranzLisp, DecCommonLisp, and
xlisp, and maybe InterLisp on a Xerox Dlion.
Thanks for the help..
Ed Reed
Rochester Inst. Technology,
Rochester, NY
....seismo!rochester!ritcv
Delphi: eertest
GEnie: SQA.INC
------------------------------
Date: 19 Sep 1986 21:30-EDT
From: cross@wpafb-afita
Subject: xlisp query
Would appreciate a pointer to where I could download the source code for
xlisp 1.6 and any demonstratable programs written in xlisp. I'm aware
of the stuff published in AI Expert and have downloaded it, but cannot
find the source code. Thanks in advance.
Steve Cross
------------------------------
Date: 24 Sep 86 03:50:21 GMT
From: booter@lll-crg.arpa (Elaine Richards)
Subject: Lsmalltalk and XLisp
I spaced out on my friend's login name. He is at Cal State University
Hayward, which has no news feed. He is a fanatic for smalltalk and
LISP and I hope you folks out there can assist. Please no flamage, this
guy is not a regular netter and he really would love some contacts.
Here is what he asked me to post.
*****************************************************
* e-mail responses to *
* {seismo,ihnp4,pyramid}!lll-crg!csuh!jeff *
* -or- *
* hplabs!qantel!csuh!jeff *
*****************************************************
#1
To all people,places, and things who possess some
knowledge about Lsmalltalk:
I am just getting into Lsmalltalk and I am interested
in communicating with others who have some experience with it. I
am using Smalltalk 'blue' as my map of the Lsmalltalk system; can
anyone suggest a way around class-variables and methods ( is the
class Smalltalk the only way?). Is there anyone who has done some
interesting applications they would like to share?
jeff
#2
The young and struggling C.S. department of the
Calif. State University of Hayward would like to get Xlisp.
If somebody out there knows where we can get it, could you please
pass that information along?
jeff
------------------------------
Date: 23 Sep 86 01:00:29 GMT
From: zeus!stiber@locus.ucla.edu (Michael D Stiber)
Subject: Kyoto Common Lisp
Does anyone have experience using this Lisp, or have any information
about it?  I am specifically interested in comments on Ibuki Lisp, an
implementation of Kyoto Common Lisp that runs on the IBM RT.
Michael Stiber
ARPANET: stiber@ucla-locus.arpa
USENET: ...{ucbvax,ihpn4}!ucla-cs!stiber
Born too late to be a yuppy -- and proud of it!
------------------------------
Date: Wed, 24 Sep 86 08:32:13 edt
From: jlynch@nswc-wo.ARPA
Subject: LISP Conversion
I am gathering information concerning the conversion or
translation of programs written in LISP to FORTRAN. Would
appreciate comments from anyone who has tried to do this and the
likelihood of success. Interested in both manual methods as well
as conversion routines or programs. I will summarize replies
for the AILIST. Thanks, Jim Lynch (jlynch@nswc-wo.arpa)
------------------------------
Date: Mon, 22 Sep 86 11:03:20 -0500
From: schwamb@mitre.ARPA
Subject: Cognitive Science Schools
Well, now that some folks have commented on the best AI schools in
the country, could we also hear about the best Cognitive Science
programs? Cog Sci has been providing a lot of fuel for thought to
the AI community and I'd like to know where one might specialize
in this.
Thanks, Karl (schwamb@mitre)
------------------------------
Date: 18 Sep 86 13:38:33 GMT
From: gilbh%cunyvm.bitnet@ucbvax.Berkeley.EDU
Subject: Re: AI Grad Schools
One might consider CUNY (City University of New York) too.
------------------------------
Date: Sat, 20 Sep 86 07:39:34 MDT
From: halff@utah-cs.arpa (Henry M. Halff)
Subject: Re: Any OPS5 in PC ?
In article <8609181610.AA08808@ucbvax.Berkeley.EDU>,
EDMUNDSY%northeastern.edu@RELAY.CS.NET writes:
> Does anyone know whether there is any OPS5 software package availiable in PC?
> I would like to know where I can find it. Thanks!!!
Contact
Computer*Thought
1721 West Plano Parkway
Suite 125
Plano, TX 75075
214/424-3511
ctvax!mark.UUCP
Disclaimer: I know people at Computer*Thought, but I don't know anything about
their OPS-5. I don't know how well it works. I don't even know if I would
know how to tell how well it works.
------------------------------
Date: Fri, 19 Sep 86 06:15:37 cdt
From: mlw@ncsc.ARPA (Williams)
Subject: OPSx for PCs
For parties seeking OPS5 on PCs...an implementation of OPS/83 is being
marketed by Production Systems Technologies, Inc.
642 Gettysburg Street
Pittsburgh, PA 15206
(412)362-3117
I have no comparison information relating OPS5 to OPS83 other than the
fact that OPS83 is compiled and is supposed to provide better performance
in production on micros than is possible with OPS5. I'd be glad to see
more information on the topic in this forum.
Usual disclaimers...
Mark L. Williams
(mlw @ncsc.arpa)
------------------------------
Date: 18 Sep 86 19:21:50 GMT
From: ssc-vax!bcsaic!pamp@uw-beaver.arpa (wagener)
Subject: Re: Info on UNIX based AI Tools/applications (2nd req)
In article <1657@ptsfa.UUCP> jeg@ptsfa.UUCP (John Girard) writes:
>
>This is a second request for information on Artificial Intelligence
>tools and applications available in the unix environment.
>
> Expert System Shells
> Working AI applications (academic and commercial)
I can recommend at least one good comprehensive listing of
tools, languages, and companies:
The International Directory of Artificial Intelligence
Companies, 2nd edition, 1986, Artificial
Intelligence Software S.R.L., Via A. Mario 12/A,
45100 ROVIGO, Italy.  Della Jane Hallpike, ed.
Ph. (0425) 27151
It mainly looks at the companies, but it does have descriptions
of their products.
Also look into D. A. Waterman's book, A Guide to Expert Systems,
Addison-Wesley Pub. Co., 1985.
I also recommend you check out the expert system magazines:
1) Expert Systems - The International Journal of
Knowledge Engineering; Learned Information Ltd.
(This is an English publication.  Its US office
address is:
Learned Information Co.
143 Old Marlton Pike
Medford, NJ 08055
PH. (609) 654-6266
Subscription Price: $79
2) Expert Systems User; Expert Systems User Ltd.
Cromwell House,
20 Bride Lane
London EC4 8DX
PH.01-353 7400
Telex: 23862
Subscription Price: $210
3) IEEE Expert - Intelligent Systems and their Applications
IEEE Computer Society
IEEE Headquarters
345 East 47th Street
New York,NY 10017
IEEE Computer Society West Coast Office
10662 Los Vaqueros Circle
Los Alamitos, CA 90720
Subscription Price (IEEE Members): $12/yr
4) AI Expert
AI Expert
P.O.Box 10952
Palo Alto, CA 94303-0968
Subscription Price: $39/yr $69/2yr $99/3yr
There are some good product description sections and articles
in these (especially the British ones which are the older
publications). There are quite a number of systems out there.
Good luck.
Pam Pincha-Wagener
------------------------------
Date: 20 Sep 86 15:44:00 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: queries about expert systems
Date: Thu, 18 Sep 1986 17:10 EDT
From: LIN@XX.LCS.MIT.EDU
1. Production systems are the implementation of many expert systems.
In what other forms are "expert systems" implemented?
[I use the term "expert system" to describe the codification of any
process that people use to reason, plan, or make decisions as a set of
computer rules, involving a detailed description of the precise
thought processes used. If you have a better description, please
share it.]
``Expert System'' denotes a level of performance, not a technology.
The particularly important aspirations are generality and robustness.
Every program strives for some degree of generality and robustness, of
course, but calling a program an expert system means it's supposed to
be able to do the right thing even in situations that haven't been
explicitly anticipated, where ``the right thing'' might just be to
gracefully say ``I dunno'' when, indeed, the program doesn't have the
knowledge needed to solve the problem posed.
Production systems, or, more accurately, programs that work by running
a simple interpreter over a body of knowledge represented as IF-THEN
rules, ease the construction of simple expert systems because it's
possible to encode the knowledge without having to commit to a
particular order or context of using that knowledge. The interpreter
determines what rule to apply next at runtime, and so long as you
don't include contradictory rules or assume a particular order of
application, such systems are easy to construct and work pretty well,
i.e. can be general (solve a wide variety of problem instances) and
robust (degrade gracefully by saying ``i dunno'' (no rules, or only
very general rules apply) in unusual situations, rather than trapping
out with an error).
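A production-system interpreter of the simple kind described here fits in a dozen lines. The sketch below (rule content and fact names are invented; real systems add conflict resolution, pattern variables, and working-memory structure) fires any rule whose IF-part is satisfied until nothing new can be added, and, when no rule matches, simply does nothing:

```python
def run(rules, facts):
    """Forward-chain: fire rules until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (IF-part as a set of facts, THEN-part as a single new fact).
rules = [
    ({"fever", "rash"}, "measles-possible"),
    ({"measles-possible", "unvaccinated"}, "suspect-measles"),
]

print(run(rules, {"fever", "rash", "unvaccinated"}))
print(run(rules, {"cough"}))   # no rule matches: nothing happens
```

The second call shows the degenerate case: an unanticipated situation triggers no rule, and the fact set comes back unchanged.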
That may not have seemed like an answer to question #1, so let me
return to it explicitly. Production systems are not the only
technology for building expert systems, but pattern-directed
invocation is a theme common to all expert systems, whatever
technology is used. Let me explain. Another popular technology for
expert systems (in the medical domain, especially) might be called
Frames and Demons. Facts are organized in a specialization hierarchy,
and attached to each fact may be a bunch of procedures (demons) that
are run when the fact is asserted, or denied, when the program needs
to figure out whether the fact is true or not, etc. Running a demon
may trigger other demons, or add new facts, or new demons, and so the
system grinds away. The underlying principle is the same as in
production systems: there is a large body of domain specific
knowledge, plus a simple interpreter that makes no initial commitment
to the order or context in which the facts are going to be used. The
name of the game is pattern-directed invocation: the next action to
take is selected from among the ``rules'' or ``methods'' or ``demons''
that are relevant to the current situation. This characteristic is
not unique to expert systems, but (I think) every program that has
ever been called an expert system has this characteristic in common,
and moreover it has been central to each such program's behavior.
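The frames-and-demons style can likewise be caricatured in a few lines of Python (the store, the `on_assert` hook, and the medical facts are all invented for illustration; real systems attach demons to slots of frames and support if-needed and if-removed demons as well):

```python
class FactBase:
    """Toy frames-and-demons store: procedures attached to facts run
    when the fact is asserted (an if-added demon)."""
    def __init__(self):
        self.facts = set()
        self.demons = {}          # fact -> list of attached procedures

    def on_assert(self, fact, demon):
        self.demons.setdefault(fact, []).append(demon)

    def assert_fact(self, fact):
        if fact in self.facts:
            return
        self.facts.add(fact)
        for demon in self.demons.get(fact, []):
            demon(self)           # a demon may assert further facts

kb = FactBase()
kb.on_assert("fever", lambda kb: kb.assert_fact("possible-infection"))
kb.on_assert("possible-infection", lambda kb: kb.assert_fact("order-blood-test"))
kb.assert_fact("fever")
print(sorted(kb.facts))  # ['fever', 'order-blood-test', 'possible-infection']
```

Asserting one fact triggers a demon that asserts another, whose own demon fires in turn; as in production systems, the interpreter makes no initial commitment to the order in which the knowledge will be used.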
2. A production system is in essence a set of rules that state that
"IF X occurs, THEN take action Y." System designers must anticipate
the set of "X" that can occur. What if something happens that is not
anticipated in the specified set of "X"? I assert that the most
common result in such cases is that nothing happens. Am I right,
wrong, or off the map?
In most implementations of production systems, if the current
situation is such that no rules match it, nothing happens (maybe the
program prints out the atom 'DONE :-). If the system is working in a
goal-directed fashion (e.g. it's trying to find out under what
circumstances it can take action Y (action Y might be "conclude that Z
has occurred")) and there aren't any rules that tell it anything about
Y, again, nothing happens: it can't conclude Z. In practice, there
are always very general rules that apply when nothing else does.
Being general, they're probably not very helpful: "IF () THEN SAY
Take-Two-Aspirin-And-Call-Me-In-The-Morning." The same applies to any
brand of pattern-directed invocation.
However, it's getting on the hairy edge of matters to say "System
designers must anticipate the set of X that can occur." The reason is
that productions (methods, demons) are supposed to be modular;
independent of other productions; typically written to trigger on only
a handful of the possibly thousands of features of the current
situation. So in fact I don't need to anticipate all the situations
that occur, but rather ``just'' figure out all the relevant features
of the space of situations, and then write rules that deal with
certain combinations of those features. It's like a grammar: I don't
have to anticipate every valid sentence, except in the sense that I
need to figure out what all the word categories are and what local
combinations of words are legal.
Now, to hone your observation a bit, I suggest focusing on the notion
of ``figuring out all the relevant features of the space of
situations.'' That's what's difficult. Experts (including
carbon-based ones) make mistakes when they ignore (or are unaware of)
features of the situation that modify or overrule the conclusions made
from other features. The fundamental problem in building an expert
system that deals with the real world is not entirely in cramming
enough of the right rules into it (although that's hard); it's
encoding all the exceptions, or, more to the point, remembering to
include in the program's model of the world all the features that
might be relevant to producing exceptions.
End of overly long flame.
Walter Hamscher
P.S. I am not an AI guru, rather, a mere neophyte disciple of the bona
fide gurus on my thesis committee.
------------------------------
Date: Tue Sep 23 11:33:13 GMT+1:00 1986
From: mcvax!lasso!ralph@seismo.CSS.GOV (Ralph P. Sobek)
Subject: Re: queries about expert systems (Vol 4, no. 187)
Herb,
>1. Production systems are the implementation of many expert systems.
>In what other forms are "expert systems" implemented?
I recommend the book "A Guide to Expert Systems," by Donald
Waterman.  It describes many expert systems that fall more or less
under your definition, and notes what each is implemented in.  Production
Systems (PSs) can basically be divided into forward-chaining (R1/XCON) and
backward-chaining (EMYCIN); mixed systems that do both also exist.  Other
representations include frame-based (SRL), semantic nets (KAS), object-
oriented, and logic-based. The representation used often depends on what
is available in the underlying Expert System Tool. Tools now exist which
provide an integrated package of representation structures for the expert
system builder to use, e.g., KEE and LOOPS. Expert systems are also written
in standard procedural languages such as Lisp, C, Pascal, and even Fortran.
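The forward/backward distinction mentioned above is easy to make concrete. A forward chainer fires rules from the data toward conclusions; a backward chainer, sketched below in Python (toy rules and facts, invented for illustration), starts from a goal and recursively tries to establish the conditions of some rule that concludes it:

```python
def backchain(rules, facts, goal):
    """Prove `goal`: it is a known fact, or some rule concludes it and
    all of that rule's conditions can themselves be proved."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backchain(rules, facts, c) for c in conditions)
               for conditions, conclusion in rules)

# Each rule: (set of conditions, conclusion).
rules = [
    ({"rain"}, "wet-grass"),
    ({"clouds", "low-pressure"}, "rain"),
]

print(backchain(rules, {"clouds", "low-pressure"}, "wet-grass"))  # True
print(backchain(rules, {"clouds"}, "wet-grass"))                  # False
```

EMYCIN-style systems add certainty factors and ask the user when no rule applies; this sketch shows only the control regime.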
>2. A production system is in essence a set of rules that state that
>"IF X occurs, THEN take action Y." System designers must anticipate
>the set of "X" that can occur. What if something happens that is not
>anticipated in the specified set of "X"? I assert that the most
>common result in such cases is that nothing happens.
In both forward-chaining and backward-chaining PSs, nothing happens.
If the PS itself produces "X", we can at least verify whether "X" is
ever used. In the general case, if "X" comes from some arbitrary outside
source, there is no guarantee that the PS (or any other system) will
even see the datum.
Ralph P. Sobek
UUCP: mcvax!inria!lasso!ralph@SEISMO.CSS.GOV
ARPA: sobek@ucbernie.Berkeley.EDU (automatic forwarding)
BITNET: SOBEK@FRMOP11
------------------------------
End of AIList Digest
********************
∂25-Sep-86 0243 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #196
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 25 Sep 86 02:43:15 PDT
Date: Wed 24 Sep 1986 21:34-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #196
To: AIList@SRI-STRIPE
AIList Digest Thursday, 25 Sep 1986 Volume 4 : Issue 196
Today's Topics:
Linguistics - NL Generation,
Logic - TMS, DDB and Infinite Loops,
AI Tools - Turbo Prolog & Xerox vs Symbolics,
Philosophy - Associations & Intelligent Machines
----------------------------------------------------------------------
Date: Mon, 22 Sep 86 10:31:23 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Reply-to: rapaport@sunybcs.UUCP (William J. Rapaport)
Subject: followup on NL generation
In article <MS.lb0q.0.hatfield.284.1@andrew.cmu.edu>
lb0q@ANDREW.CMU.EDU (Leslie Burkholder) writes:
>Has work been done on the problem of generating relatively idiomatic English
>from sentences written in a language for first-order predicate logic?
>Any pointers would be appreciated.
>
>Leslie Burkholder
>lb0q@andrew.cmu.edu
We do some work on NL generation from SNePS, which can easily be translated
into pred. logic. See:
Shapiro, Stuart C. (1982), ``Generalized Augmented Transition Network
Grammars For Generation From Semantic Networks,'' American Journal of
Computational Linguistics 8: 12-25.
William J. Rapaport
Assistant Professor
Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260
(716) 636-3193, 3180
uucp: ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet
------------------------------
Date: 20 Sep 86 15:41:26 edt
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: TMS, DDB and infinite loops
Date: Mon, 08 Sep 86 16:48:15 -0800
From: Don Rose <drose@CIP.UCI.EDU>
Does anyone know whether the standard algorithms for belief revision
(e.g. dependency-directed backtracking in TMS-like systems) are
guaranteed to halt? That is, is it possible for certain belief networks
to be arranged such that no set of mutually consistent beliefs can be found
(without outside influence)?
I think these are two different questions. The answer to the
second question depends less on the algorithm than on whether
the underlying logic is two-valued or three-valued. The answer
to the first question is that halting is only a problem when the
logic is two-valued and the space of beliefs isn't fixed during
belief revision [satisfiability in the propositional calculus
is decidable (though NP-complete)]. Doyle's TMS goes into
infinite loops. McAllester's won't. de Kleer's ATMS won't loop
either, but that's because it finds all the consistent
labelings, and there just might not be any. Etc, etc; depends
on what you consider ``standard.''
Walter Hamscher
------------------------------
Date: Sat, 20 Sep 86 15:02 PDT
From: dekleer.pa@Xerox.COM
Subject: TMS, DDB and infinite loops question.
Does anyone know whether the standard algorithms for belief revision
(e.g. dependency-directed backtracking in TMS-like systems) are
guaranteed to halt?
It depends on what you consider the standard algorithms and what you
consider a guarantee. Typically a Doyle-style TMS (NMTMS) comes in two
versions: (1) guaranteed to halt, and (2) guaranteed to halt if there
are no "odd loops". Version (2) is always more efficient and is
commonly used. The McAllester-style (LTMS) or my style (ATMS) always
halts. I don't know if anyone has actually proved these results.
That is, is it possible for certain belief networks
to be arranged such that no set of mutually consistent beliefs
can be found (without outside influence)?
Sure, it's called a contradiction. However, the issue of what to do
about odd loops remains somewhat unresolved. By an odd loop I mean a node
which depends on its own disbelief an odd number of times, the most
trivial example being to give A a non-monotonic justification with an
empty inlist and an outlist of (A).
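[The odd loop just described can be shown concretely. The sketch below
is an editorial illustration using brute-force enumeration, not Doyle's
actual labeling algorithm; the `supported`/`stable_labelings` names are
invented for the example.]

```python
# A justification (inlist, outlist) supports a node when every inlist
# node is IN (believed) and every outlist node is OUT (disbelieved).
from itertools import product

def supported(justification, labels):
    inlist, outlist = justification
    return (all(labels.get(n) == "IN" for n in inlist) and
            all(labels.get(n) == "OUT" for n in outlist))

def stable_labelings(nodes, justifications):
    """Enumerate label assignments where a node is IN iff some
    justification supports it -- brute force, fine for a sketch."""
    stable = []
    for combo in product(["IN", "OUT"], repeat=len(nodes)):
        labels = dict(zip(nodes, combo))
        if all((labels[n] == "IN") ==
               any(supported(j, labels) for j in justifications.get(n, []))
               for n in nodes):
            stable.append(labels)
    return stable

# The odd loop: A's only justification is an empty inlist and an
# outlist of (A), i.e., A is believed exactly when A is disbelieved.
justifications = {"A": [((), ("A",))]}

# Neither labeling is stable -- A IN requires A OUT and vice versa --
# which is why a Doyle-style TMS can loop forever on such networks.
assert stable_labelings(["A"], justifications) == []
```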
------------------------------
Date: Tue 23 Sep 86 14:39:47-CDT
From: Larry Van Sickle <cs.vansickle@r20.utexas.edu>
Reply-to: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Money back on Turbo Prolog
Borland will refund the purchase price of Turbo Prolog
for sixty days after purchase. The person I talked to
at Borland was courteous, did not argue, just said to
send the receipt and software.
Larry Van Sickle
U of Texas at Austin
cs.vansickle@r20.utexas.edu 512-471-9589
------------------------------
Date: Tue 23 Sep 86 13:54:29-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Turbo Prolog
For another review of Turbo Prolog see the premier issue of AI Expert.
Darryl Rubin discusses several weaknesses relative to Clocksin-and-Mellish
prologs, but is enthusiastic about the package for users who have no
experience with (i.e., preconceptions from) other prologs. The Turbo
version is very fast, quite compact, well documented, comes with a
lengthy library of example programs, and interfaces to a sophisticated
window system and other special tools. It could be an excellent system
for database retrieval and other straightforward tasks. His chief
reservation was about the "subroutine call" syntax that requires
all legal arities and argument types to be predeclared and does not
permit use of comma as a reduction operator.
-- Ken Laws
------------------------------
Date: 19 Sep 86 14:27:15 GMT
From: sdcrdcf!darrelj@hplabs.hp.com (Darrel VanBuer)
Subject: Re: Dandelion vs Symbolics
A slight echo on the Interlisp file package (partly a response to an
earlier note on problems using MAKEFILE and losing a bunch of
user-entered properties).
Rule 1. Users never call MAKEFILE (in 9 years of Interlisp hacking, I've
probably called it half a dozen times).
So how do you make files? I mainly use two functions:
CLEANUP() or CLEANUP(file1 file2 ...) The former processes all files
containing modifications, the latter only the named files. The first
thing CLEANUP does is call UPDATEFILES, which is also called by:
FILES?() Reports the files which need action to have up-to-date
source, compiled, and hardcopy versions; it also calls UPDATEFILES, which
will engage you in a dialog asking the location of every "new" object.
Most of the ways to define or modify objects are "noticed" by the file
package (e.g. the structure editor [DF, EF, DV ...], SETQ, PUTPROP, etc which
YOU type at top level). When an object is noticed as modified, either the
file(s) containing it are marked as needing a remake, or it gets noted as
something to ask you about later. You can write functions which play the
game by calling MARKASCHANGED as appropriate.
Two global variables interact with details of the process:
RECOMPILEDEFAULT usually EXPRS or CHANGES. I prefer the former, but CHANGES
has been the default in Interlisp-D because EXPRS didn't work before
Intermezzo.
CLEANUPOPTIONS My setting is usually (RC STF LIST) which means as part of
cleanup, recompile, with compiler flags STF (F means forget source from in
core, filepkg will automagically retrieve it if you edit, etc), and make a
new hardcopy LISTing.
For real fun with filepkg and integration with other tools, try
MASTERSCOPE(ANALYZE ALL ON file1)
MASTERSCOPE(EDIT WHERE ANY CALLS FOO)
CLEANUP()
--
Darrel J. Van Buer, PhD
System Development Corp.
2525 Colorado Ave
Santa Monica, CA 90406
(213)820-4111 x5449
...{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
!sdcrdcf!darrelj
VANBUER@USC-ECL.ARPA
------------------------------
Date: Sat, 20 Sep 86 10:23:18 PDT
From: larus@kim.Berkeley.EDU (James Larus)
Subject: Symbolics v. Xerox
OK, here are my comments on the Great Symbolics-Xerox debate. [As
background, I was an experienced Lisp programmer and emacs user before
trying a Symbolics.] I think that the user interface on the Symbolics
is one of the poorest pieces of software that I have ever had the
misfortune of using. Despite having a bit-mapped display, Symbolics
forces you into a one-window-at-a-time paradigm. Not
only are the default windows too large, but some of them (e.g. the
document examiner) take over the whole screen (didn't anyone at
Symbolics think that someone might want to make use of the
documentation without taking notes on paper?). Resizing the windows
(a painful process involving a half-dozen mouse-clicks) results in
unreadable messages and lost information since the windows don't
scroll (to be fixed in Genera 7). I cannot understand how this
interface was designed (was it?) or why people swear by it (instead of
at it).
The rest of the system is better. Their Common Lisp is pretty solid
and avoids some subtle bugs in other implementations. Their debugger
is pretty weak. I can't understand why a debugger that shows the
machine's bytecodes (which aren't even documented for the 3600
series!) is considered acceptable in a Lisp environment. Even C has
symbolic debuggers these days! Their machine coexists pretty well
with other types of systems on an internet. Their local filesystem is
impressively slow.
The documentation is pretty bad, but is getting better. It reminds me
of the earlier days of Unix, where most of the important stuff wasn't
written down. If you had an office next to a Unix guru, you probably
thought Unix was great. If you just got a tape from Bell, then you
probably thought Unix sucked. There appears to be a large amount of
information about the Symbolics that is not written down and is common
knowledge at places like MIT that successfully use the machines.
(Perhaps Symbolics should ship a MIT graduate with their machines.)
We have had a lot of difficulty setting up our machines. Symbolics
has not been very helpful at all.
/Jim
------------------------------
Date: Tue Sep 23 12:31:35 GMT+1:00 1986
From: mcvax!lasso!ralph@seismo.CSS.GOV (Ralph P. Sobek)
Subject: Re: Xerox 11xx vs. Symbolics 36xx vs. ...
I enjoyed all the discussion on the pluses and minuses of these and other
lisp machines. I, myself, am an Interlisp user. Those who know a
particular system well will prefer it over another. All these lisp systems
are quite complex and require a long time, a year or so, before one achieves
proficiency. And as with any language, human or otherwise, one's perception
of one's environment depends upon the tools/semantics that the language
provides.
I have always found Interlisp much more homogeneous than Zetalisp. The
packages are structured so as to interface easily. I find the written
documentation also much more structured, and smaller, than the number of
volumes that come with a Symbolics. Maybe Symbolics users only use the
online documentation and thus avoid the pain of trying to find something
in the written documentation. The last time I tried to find something in
the Symbolics manuals -- I gave up, frustrated! :-)
The future generation of lisp machines, after Common Lisp, will be
interesting to see.
Ralph P. Sobek
UUCP: mcvax!inria!lasso!ralph@SEISMO.CSS.GOV
ARPA: sobek@ucbernie.Berkeley.EDU (automatic forwarding)
BITNET: SOBEK@FRMOP11
------------------------------
Date: 22 Sep 86 12:28:00 MST
From: fritts@afotec
Reply-to: <fritts@afotec>
Subject: Associations -- Comment on AIList Digest V4 #186
The remark has been made on AIList, I think, and elsewhere that computers
do not "think" at all like people do. Problems are formally stated and
stepped through sequentially to reach a solution. Humans find this very
difficult to do. Instead, we seem to think in a series of observations
and associations. Our observations are provided by our senses, but how
these are associated with stored memory of other observations is seemingly
the key to how humans "think". I think that this process of sensory
observation and association runs more or less continuously and that we
are not consciously aware of much of it. What I'd like to know is how
the decision is made to associate one observation with another: what
rules of association are used, and are they highly individualized or is
there a more general pattern? How is it that we acquire large bodies of
apparently diverse
observations under simple labels and then make complex decisions using
these simple labels rather than stepping laboriously through a logical
sequence to achieve the same end? There must be some logic to our
associative process or we could not be discussing this subject at all.
Steve Fritts
FRITTS@AFOTEC
------------------------------
Date: 22 Sep 86 09:01:50 PDT (Monday)
From: "charles←kalish.EdServices"@Xerox.COM
Subject: Re: intelligent machines
In his message, Peter Pirron sets out what he believes to be necessary
attributes of a machine that would deserve to be called intelligent.
From my experience, I think that his intuitions about what it would take
for a machine to be intelligent are, by and large, pretty widely
shared and, as far as I'm concerned, pretty accurate. Where we differ,
though, is in how these intuitions apply to designing and demonstrating
machine intelligence.
Pirron writes: "There is the phenomenon of intentionality and motivation
in man that finds no direct correspondent phenomenon in the computer." I
think it's true that we wouldn't call anything intelligent that we didn't
believe had intentions (after all, "intelligent" is an intentional
ascription). But I think that Dennett (see "Brainstorms") is right in
that intentions are something we ascribe to systems and not something
that is built in or a part of that system. The problem then becomes
justifying the use
of intentional descriptions for a machine; i.e. how can I justify my
claim that "the computer wants to take the opponent's queen" when the
skeptic responds that all that is happening is that the X procedure has
returned a value which causes the Y procedure to move piece A to board
position Q?
I think the crucial issue in this question is how much (or whether) the
computer understands. The problem with systems now is that it is too
easy to say that the computer doesn't understand anything, it's just
manipulating markers. That is, any understanding is merely
conventional -- we pretend that variable A means the Red Queen, but it
only means that to us (the observers), not to the computer. How then could
we ever get something to mean anything to a computer? Some people (I'm
thinking of Searle) would say you can't, computers can't have semantics
for the symbols they process. I found this issue in Pirron's message
where he says:
"Real "understanding" of natural language however needs not only
linguistic competence but also sensory processing and recognition
abilities (visual, acoustical). Language normally refers to objects
which we first experience by sensory input and then name it." The
idea is that you want to ground the computer's use of symbols in some
non-symbolic experience.
Unfortunately, the solution proposed by Pirron:
"The constructivistic theory of human learning of language by Paul
Lorenzen und O. Schwemmer (Erlanger Schule) assumes a "demonstration
act" (Zeigehandlung) constituting a fundamental element of man (child)
learning language. Without this empirical fundament of language you
will never leave the hermeneutic circle, which drove former philosphers into
despair." ( having not read these people, I presume the mean something
like pointing at a rabbit and saying "rabbit") has been demonstrated by
Quine (see "Word and Object") to keep you well within the circle. But
these arguments are about people, not computers, and we do at least
feel that the symbols we use and communicate with are rooted in
something non-symbolic. I can see two directions from this.
One is looking for pre-symbolic, biological constraints; Something like
Rosch's theory of basic levels of conceptualization. Biologically
relevant, innate concepts, like mother, food, emotions, etc. would
provide the grounding for complex concepts. Unfortunately for a
computer, it doesn't have an evolutionary history which would generate
innate concepts-- everything it's got is symbolic. We'd have to say
that no matter how good a computer got it wouldn't really understand.
The other point is that maybe we do have to stay within this symbolic
"prison-house" after all; even the biological concepts are still
represented, not actual (no food in the brain, just neuron firings). The
thing here is that, even though you could look into a person's brain
and, say, pick out the neural representation of a horse, to the person
with the open skull that's not a representation; it constitutes a horse,
it is a horse (from the point of view of the neural system). And that's
what's different about people and computers. We credit people with a
point of view and from that point of view, the symbols used in
processing are not symbolic at all, but real. Why do people have a
point of view and not computers? Computers can make reports of their
internal states probably better than we can. I think that Nagel has hit it
on the head (in "What Is It Like to Be a Bat?"; I saw this article in "The
Mind's I") with his notion of "it is (or is not) like something to be
that thing." So it is like something to be a person and presumably is
not like something to be a computer. For a machine to be intelligent
and truly understand it must be like something to be that machine. Only
then can we credit that machine with a point of view and stop looking at
the symbols it uses as "mere" symbols. Those symbols will have content
from the machine's point of view. Now, how does it get to be like
something to be a machine? I don't know but I know it has a lot more to
do with the Turing test than what kind of memory organization or search
algorithms the machine uses.
Sorry if this is incoherent, but it's not a paper so I'm not going to
proof it. I'd also like to comment on the claim that:
" I would claim, that the conviction mentioned above {that machines
can't equal humans} however philosphical or sophisticated it may be
justified, is only the "RATIONALIZATION".. of understandable but
irrational and normally unconscious existential fears and need of human
beings" but this message is too long anyway. Suffice it too say that
one can find a nasty Freudian interpretation of any point.
I'd appreciate hearing any comments on the above ramblings.
-Chuck
ARPA: chuck.edservices@Xerox.COM
------------------------------
End of AIList Digest
********************
∂25-Sep-86 0528 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #197
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 25 Sep 86 05:27:59 PDT
Date: Wed 24 Sep 1986 21:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #197
To: AIList@SRI-STRIPE
AIList Digest Thursday, 25 Sep 1986 Volume 4 : Issue 197
Today's Topics:
AI Tools - University of Rochester HORNE System &
Parallel Inference System at Maryland,
Conferences - Upcoming Conference Programs (FJCC, COMPSAC, OIS,
Info. and Software Sciences, Chautaqua)
----------------------------------------------------------------------
Date: Thu, 11 Sep 86 13:30 EDT
From: Brad Miller <miller@UR-ACORN.ARPA>
Subject: New University of Rochester HORNE system available
The University of Rochester HORNE reasoning system has just been rereleased in
common-lisp form, currently running on a symbolics (though any common-lisp
system should be able to run it with minor porting).
Features:
Horn clause resolution prover (similar to PROLOG) with typed
unification and a specialized reasoner for equalities (e.g., A and B can be
asserted to be equal, and so will unify). Equalities can be asserted between
any ground forms including functions with ground terms. A forward chaining
proof mechanism, and an interface between this system and arbitrary
common-lisp forms are also provided.
As part of the same release we are providing REP, a frame-like
knowledge representation system built on top of the theorem prover, which uses
structured types to represent sets of objects. A structured type may have
relations (or "roles") between its set of objects and other sets. Arbitrary
instances of an object may be asserted to be equal to another instance, which
will utilize the underlying HORNE equality mechanisms.
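[The equality-aware unification described above can be sketched with a
union-find structure. This is an editorial illustration of the general
idea only, not HORNE's actual implementation; all names here
(`assert_equal`, `unify`, the "?"-variable convention) are invented.]

```python
# Sketch: asserted equalities (union-find) let distinct ground terms
# unify, as in "A and B can be asserted to be equal, and so will unify."

parent = {}

def find(t):
    """Union-find representative lookup with path compression."""
    parent.setdefault(t, t)
    if parent[t] != t:
        parent[t] = find(parent[t])
    return parent[t]

def assert_equal(a, b):
    """Assert that two ground terms denote the same object."""
    parent.setdefault(a, a); parent.setdefault(b, b)
    parent[find(a)] = find(b)

def unify(a, b, subst):
    """Tiny unifier: variables are strings starting with '?';
    ground terms unify when equal modulo asserted equalities."""
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if find(a) == find(b):
        return subst
    return None

assert unify("A", "B", {}) is None   # distinct constants don't unify...
assert_equal("A", "B")
assert unify("A", "B", {}) == {}     # ...until asserted equal
```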
HORNE is the product of several years of R&D in the Natural Language
Understanding and Knowledge Representation projects supervised by Prof. James
Allen at the University of Rochester, and forms the basis for much of our
current implementation work.
A tutorial introduction and manual, TR 126 "The HORNE reasoning system in
Common-Lisp" by Allen and Miller is available for $2.50 from the following
address:
Ms. Peg Meeker
Technical Reports Administrator
Department of Computer Science
617 Hylan Building
University of Rochester
River Campus
Rochester, NY 14627
In addition a DC300XL cartridge tape in Symbolics distribution format, or
Symbolics carry-tape format (also suitable for TI Explorers), or a 1/2"
1600bpi reel in 4.2BSD TAR format (other formats are not available) is
available from the above address for a charge of $100.00 which will include
one copy of the TR. This charge is made to defray the cost of the tape,
postage, and handling. The software itself is in the public domain. Larger
contributions are, of course, welcome. Please specify which format tape you
wish to receive. By default, we will send the Symbolics distribution format.
All checks should be made payable to "University of Rochester, Computer
Science Department". POs from other Universities are also acceptable. Refunds
for any reason are not available.
DISCLAIMER: The software is supplied "as-is" without any implied warranties of
merchantability or fitness for a particular purpose. We are not responsible
for any consequential damages as the result of using this software. We are
happy to accept bug reports, but promise to fix nothing. Updates are not
included; future releases (if any) will probably be made available under a
similar arrangement to this one, but need not be. In other words, what you get
is what you get.
Brad Miller
Computer Science Department
University of Rochester
miller@rochester.arpa
miller@ur-acorn.arpa
------------------------------
Date: Thu, 11 Sep 86 17:08:33 EDT
From: Jack Minker <minker@mimsy.umd.edu>
Subject: Parallel Inference System at Maryland
[Excerpted from the Prolog digest by Laws@SRI-STRIPE.]
AI and Database Research Laboratory
at the
University of Maryland
Jack Minker - Director
The AI and Database Research Laboratory at the University of Maryland is
pleased to announce that a parallel logic programming system (PRISM) is
now operational on the McMOB multiprocessor. The system uses up to
sixteen processors to exploit medium-grained parallelism in logic
programs. The underlying ideas behind PRISM appeared in
[Eisinger et al., 1982] and [Kasif et al., 1983].
[...]
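[The medium-grained parallelism mentioned above is typically
OR-parallelism: alternative clauses for the same goal are tried
concurrently. The sketch below is an editorial toy model of that idea
only, not PRISM's actual McMOB implementation; the clause functions and
`or_parallel_prove` are invented for illustration.]

```python
# Toy OR-parallelism: each "clause" tries to prove the goal one way;
# workers run concurrently and any success suffices.
from concurrent.futures import ThreadPoolExecutor

def clause_via_rule1(goal):
    return goal == "path(a,c)"      # pretend rule 1 proves this goal

def clause_via_rule2(goal):
    return False                    # pretend rule 2 fails

def or_parallel_prove(goal, clauses):
    with ThreadPoolExecutor(max_workers=len(clauses)) as pool:
        results = pool.map(lambda c: c(goal), clauses)
    return any(results)

assert or_parallel_prove("path(a,c)", [clause_via_rule1, clause_via_rule2])
```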
If you would like further information on PRISM, please contact
MINKER@MARYLAND or MADHUR@MARYLAND. We would also be very interested in
hearing from people who may have problems we could run on PRISM.
References:
1. Eisinger, N., Kasif, S., and Minker, J., "Logic Programming: A
   Parallel Approach", in Proceedings of the First International Logic
   Programming Conference, Marseilles, France, 1982.
2. Kasif, S., Kohli, M., and Minker, J., "PRISM - A Parallel Inference
   System for Problem Solving", in IJCAI-83, Karlsruhe, Germany, 1983.
3. Rieger, C., Bane, J., and Trigg, R., "ZMOB: A Highly Parallel
   Multiprocessor", University of Maryland, TR-911, May 1980.
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: ****** AI AT UPCOMING CONFERENCES ******
AI papers at November 2-6, 1986 FJCC, Dallas Texas
Professional Education Program Items
John D. McGregor, Building Expert Systems Workshop
Lois Boggess and Julia Hodges, Knowledge-Based Expert Systems
Benjamin Wah, Architectures for AI Applications
Michael Lebowitz, Natural Language Processing
Michael Lebowitz, Machine Learning
Paul Bamberg, Speech Recognition: From Isolated Digits to Natural
Language Dictation
John Kender and Takeo Kanade, Computer Vision from an AI Perspective
Douglas DeGroot, Prolog and Knowledge Info Processing
Harland H. Black, AI Programming and Environments
Paper Sessions
AI-1, November 4, 1:30 PM to 3:30 PM
Panel Session on "Design Issues and Practice in AI Programming"
AI-2 Session 1, November 5, 10:00 am to noon, Computer Vision
Generic Surface Interpretation Inference Rules and Quasi-Invariants
Thomas Binford, Stanford U.
An Overview of Computation of Structure and Motion From Images
J. K. Aggarwal, University of Texas at Austin
Industrial World Vision
Robert Haralick, Machine Vision International
AI-2 Session 2, November 5, 1:30 PM - 3:30 PM
Survey of Image Quality Measurements
I. Abdou and N. Dusaussoy, University of Delaware
A Spatial Knowledge Structure for Image Information Systems Using
Symbolic Projects
S. K. Chang, U. of Pittsburgh, E. Jungert, FFV Elektronic A. B.
Document Image Understanding
S. N. Srihari, SUNY at Buffalo
AI-3 Session 1, November 5, 3:45 PM - 5:15 PM, Robotics
Living in a Dynamic World
Russell A. Andersson, AT&T Bell Labs
Error Modeling in Stereo Navigation
L. Matthies and S. A. Shafer, Carnegie Mellon U.
CMU Sidewalk Navigation System
Y. Goto, et al., Carnegie Mellon U.
AI-3 Session 2, November 6, 10 AM - noon
Automatic Grasp Planning: An Operation Space Approach
M. T. Mason and R. C. Brost, Carnegie Mellon U.
Planning Stable Grasps for Multi-fingered Hands
V. Nguyen, MIT
Off-line Planning for On-line Object Localization
T. Lozano-Perez, W. E. Grimson, MIT
AI-3 Session 3, November 6, 1:30 PM - 3:30 PM
AMLX: A Manufacturing Language/Extended
L. Nackman, et al., IBM T. J. Watson Research Center
SATYR and the NYMPH: Software Design in a Multiprocessor
for Control Systems
J. B. Chen, et al., Stanford University
The Meglos User Interface
R. Gaglianello and H. Katseff, AT&T Bell Laboratories
A Robot Force and Motion Server
R. Paul and H. Zhang, University of Pennsylvania
AI-4, Session 1, November 5, 1:30 PM - 3:30 PM, Rule-Based
Systems
The AI-ADA Interface
Dr. Jorge Diaz-Herrera, George Mason University
The AI-LISP Environment
Dr. Harry Tennant, Texas Instruments
The AI PROLOG Environment: SIMPOS - Sequential Inference
Machine Programming
Drs. H. Ishibashi, T. Chikayama, H. Sato, M. Sato and
S. Uchida, ICOT Research Center
Software Engineering for Rule-Based Systems
R. J. K. Jacob and J. N. Froscher, Naval Research
Laboratory
Session 2: Knowledge Engineering Panel, November 5,
3:45 PM - 5:15 PM
Dr. Richard Wexelblat, Philips Lab, Chair
Dr. Paul Benjamin, Philips Laboratories
Dr. Christina Jette, Schlumberger Well Services
Dr. Steve Pollit, Digital Equipment
Session 3, November 6, 1:30 PM - 3:30 PM
"An Organizational Framework for Building Adaptive Artificial
Intelligence Systems"
T. Blaxton and B. Kushner, BDM Corporation
"An Object/Task Modelling Approach"
Q. Chen, Beijing Research Institute of Surveying and Mapping
"A Plant Intelligent Supervisory Control Expert System"
M. Ali and E. Washington, University of Tennessee
"Knowledge-Based Layout Design System for Industrial Plants"
K. Yoshida, et al., Hitachi Ltd.
Session 4: Prolog and Frame based Methods, November 6, 3:45 pm to 5:15 pm
"A Logic-Programming Approach to Frame-Based Language Design"
H. H. Chen, I. P. Lin and C. P. Wu, National Taiwan University
"Interfacing Prolog to Pascal"
K. Magel, North Dakota State University
"Knowledge-Based Optimization in Prolog Compiler"
N. Tamura, Japan Science Institute, IBM Japan
Natural Language Processing, Session 1, Nov. 4 10AM - noon
"Communication with Expert Systems"
Kathleen R. McKeown, Columbia University
"Language Analysis in Not-So-Limited Domains"
Dr. Paul S. Jacobs, General Electric, R&D
"Providing Expert Systems with INtegrated Natural Language and Graphical
Interfaces"
Dr. Philip J. Hayes, Carnegie Group Inc.
"Pragmatic Processes in a Portable NL System"
Dr. Paul Martin, SRI←AI Center
Session 2: Nov 4, 1:30 - 3:30 PM
"Uses of Structured Knowledge Representation Systems in Natural Language
Processing"
N. Sondheimer, University of Southern California
"Unifying Lexical, Syntactic and Semantic Text Processing"
K. Eiselt, University of California at Irvine
"Robustness in Natural Language Interfaces"
R. Cullingford, Georgia Tech
"Connectionist Approaches to Natural Language Processing"
G. Cottrell, UC San Diego
Panel: Problems and Prospects of NLP, November 4, 3:45 PM - 5:15 PM
Chair: Dr. Philip J. Hayes
Gene Charniak, Brown University; Dave Waltz, Thinking Machines;
Robert Wilensky, UC Berkeley; Gary Hendrix, Symantec; Jerry Hobbs, SRI
"Parallel Processing for AI" Tuesday November 4 10am - 12noon
"Parallel Prodcessing of a Knowledge-Based Vision System"
D. I. Moldovan and C. I. Wu, USC
"A Fault Tolerant, Bit-Parallel, Cellular Array Processor"
S. Morton, ITt-Advanced Technology Center
"Implementation of Parallel Prolog onTree Machines"
M. Imai, Toyohashi University of Technology
"Optimal Granularity of Parallel Evaluation of AND-Trees"
G. J. Li and B. W. Wah, University of Illinois at Urbana
(some of the following sessions contain non-AI papers that are not listed)
Session 2: New Directions in Optical Computing" November 4 1:30pm - 3:30 pm
"Optical Symbolic Computing" Dr. John Neff, DARPA/DSO andB. Kushner, BDM Co.
VLSI Design and Test: Theory and Practice, Nov 4 10AM - 12 noon
A Knowledge-Based TDM Selection System
M. E. Breuer and X. Zhu, USC
Expert Systems for Design and Test Thursday, November 6, 10AM - 12 noon
DEFT, A Design for Testability Expert System
J. A. B. Fortes and M. A. Samad
Experiences in Prolog DFT Rule Checking
G. Cabodi, P. Camurati and P. Prinetto, Politecnico di Torino
Object-Oriented Software, Tuesday, November 4 1:30pm - 3:30 pm
"Some Problems with Is-A: Why Properties are Objects"
Prof. Stan Zdonik, Brown University
Computer Chess Techniques
"Phased State Space Search" T. A. Marsland, University of Alberta and
N. Srimani, Southern Illinois U.
"A MultiprocessorChess Program" J. Schaeffer, University of Alberta
Panel Discussion
Tony Marsland, U. of Alberta, Hans Berliner, CMU, Ken Thompson, AT&T Bell Labs
Prof. Monroe Newborn, McGill University, David Levy, IntelligentSoftware,
Prof. Robert Hyatt,U. of Southern Mississippi
Searching, Nov 6, 10AM - 12 noon
"Combining Symmetry and Searching",
L. Finkelstein, et al. Northeastern University
Fifth Generation Computers I: Language Arch, Nov 5, 10AM - 12 noon
Knowledge-Based Expert System for Hardware Logic Design
T. Mano, et al., Fujitsu
Research Activities on Natural Language Processing of the FGCS Project
H. Miyhoshi, et al., ICOT
ARGOS/V: A System for Verification of Prolog Programs
H. Fujita, et al., Mitsubishi Electric
Session 4: "Supercomputing Systems" November 6 10:00am - noon
The IX Supercomputer for Knowledge Based Systems
T. Higuchi, et al. ETL
(There are positions as volunteers available for which you get to attend
the conference and get a copy of the proceedings in exchange for helping
out one day. If interested call, 409-845-8981. The program is oriented
towards graduate students and seniors.)
__________________________________________________________________
Compsac 86, Conference October 7-10, 1986, Americana Congress Hotel, Chicago, Ill.
Tutorial: October 6, 1986, 9AM - 5PM
Doug DeGroot, Prolog and Knowledge Information Processing
October 8 11:00 AM - 12:30 PM
Modularized OPS-Based Expert Systems Using Unix Tools
Pamela T. Surko, AT&T Bell Labs
Space Shuttle Main Engine Test Analysis: A Case Study for Inductive Knowledge
Based systems for Very Large Databases
Djamshid Asgari, Rockwell International
Kenneth L. Modesitt, California State University
A Knowledge Based Software Maintenance Environment
Steven S. Yau, Sying-Syang Liu, Northwestern University
October 8 2:00PM - 3:30 PM
An Evaluation of Two New Inference Control Methods
Y. H. Chin, W. L. Peng, National Tsing Hua University, Taiwan
Learning Dominance Relations in Combinatorial Search Problems
Chee-Fen Yu, Benjamin Wah of University of Illinois at Urbana-Champaign, USA
Fuzzy Reasoning Based on Lambda-LH Resolution
Xu-Hua Liu, Carl K. Chang and Jing-Pha Tsai, University of Illinois at Chicago
4:00-5:30PM Panel Discussion on the Impact of Knowledge-Based Technology
Chair: Carl K. Chang, University of Illinois at Chicago
Panelists: Don McNamara GE Corporate Research, Kiyoh Nakamura, Fujitsu (Japan),
Wider Yu, AT&T Bell Labs, R. C. T. Lee, National Tsing Hua University, Taiwan
Thursday, October 9, 1986, 10:30 - 12:00 PM
Special Purpose Computer Systems for Supporting AI Applications
Minireview by Benjamin Wah, University of Illinois at Urbana-Champaign
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
ACM Conference on Office Information Systems, October 6-8 1986, Providence
Rhode Island
October 6, 1986 2:45 - 4:5PM
Adaptive Interface Design: A Symmetric Model and a
Knowledge-Based Implementation
Sherman W. Tyler, Siegfried Treu, University of Pittsburgh
Automating Review of Forms for International Trade Transactions: A Natural
Language Processing Approach
V. Dhar, P. Ranganathan
October 8, 1986 9:00 - 10:15 AM
Panel on "AI in the Office", Chair Gerald Barber
October 8, 1986 10:30 AM - 12:00 noon Organizational Analysis: Organizational Ecology
Modelling Due Process in the Workplace
Elihu M. Gerson, Susan L. Star, Tremont Research Institute
An Empirical Study of the Integration of Computing into Routine Work
Les Gasser, University of Southern California
Offices are Open Systems
Carl Hewitt, MIT Artificial Intelligence Lab
October 8, 1986 1:00 - 2:30PM
Handling Shared Resources in a Temporal Data Base Management System
Thomas L. Dean, Brown University
Language Constructs for Programming by Example
Robert V. Rubin, Brown University
Providing Intelligent Assistance in Distributed Office Environments
Sergei Nirenburg, Victor Lesser, Colgate University/University of Massachusetts
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Fourth Symposium on Empirical Foundations of Information and Software Sciences
October 22-24 Atlanta Georgia
October 22, 1:30-3:15 PM
Expert Systems for Knowledge Engineering: Modes of Development
Glynn Harmon, University of Texas, Austin
October 23, 10:45 AM - 12:30 PM
Face to Machine Interaction in Natural Language: Empirical Results of Field
Studies with an English and German Interface
Juergen Krause, Universitaet Regensburg, F. R. Germany
October 24, 9:00 - 10:30AM
Evaluating Natural Language Interfaces to Expert Systems
Ralph M. Weischedel BBN, Cambridge MA
Counting Leaves: An Evaluation of Ada, LISP and Prolog
Jagdish C. Agrawal, Embry-Riddle Aeronautical University, Daytona Beach, FL
Shan Manicam, Western Carolina University, Cullowhee, NC
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
The Fourth Chautauqua, October 25-29, 1986, Coronado, California
Session 9 Knowledge Based Systems 10:30-12:30PM
Knowledge-based Systems Development of CNC Software,
Roy Tsui, Software R&D Engineer, Pneumo Precision, Inc., Allied Company
Towards Resident Expertise in Systems Design
Dr. Medhat Karima, CAD/CAM Consultant, Ontario CAD/CAM Center
The Engineer as an Expert System Builder
Dr. Richard Rosen, Vice President, Product Development, Silogic Inc.
An Overview of Knowledge-Based Systems for Design and Manufacturing
Dr. Larry G. Richards, Director, Master's Program, University of Virginia
------------------------------
End of AIList Digest
********************
∂26-Sep-86 1659 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #198
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 26 Sep 86 16:59:24 PDT
Date: Fri 26 Sep 1986 10:45-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #198
To: AIList@SRI-STRIPE
AIList Digest Friday, 26 Sep 1986 Volume 4 : Issue 198
Today's Topics:
Correction - Learned Information Address,
Queries - Computers and Writing & Prospector Shell for IBM-PC &
Learning via ES Rule Refinement & Character Recognition,
AI Tools - OPS5 on the PC & Turbo Prolog &
Xerox vs Symbolics Storage Reclamation,
Review - Spang Robinson Summary, August 1986
----------------------------------------------------------------------
Date: Thu, 25 Sep 86 03:49:19 EDT
From: Marty Lyons <MARTY%ORION.BITNET@WISCVM.WISC.EDU>
Subject: Address correction for ref. in Vol 4, Issue 195
Just in case someone might have problems with USPS, Medfor
should read Medford below. (Actually, mail to them should get
there anyway, as long as you remember the zip, but just in case...)
> AIList Digest Thursday, 25 Sep 1986 Volume 4 : Issue 195
>
>Date: 18 Sep 86 19:21:50 GMT
>From: ssc-vax!bcsaic!pamp@uw-beaver.arpa (wagener)
>Subject: Re: Info on UNIX based AI Tools/applications (2nd req)
> 1) Expert Systems - The International Journal of
> Knowledge Engineering; Learned Information Ltd.,
> (This is an English Publication. It's US office
> address is;
> Learned information Co.
> 143 Old Marlton Pike
> Medfor,NJ 08055
*** Typo... ****** This should be Medford
------------------------------
Date: Thu, 25 Sep 86 09:59 EDT
From: Hirshfield@RADC-MULTICS.ARPA
Subject: Computers and Writing - A Solicitation
I am soliciting contributions for a volume entitled Computers and
Writing: Theory and Research to be published as part of Ablex
Publishing's Writing Research Series. As the title implies, the volume
will be devoted to research and theoretical investigations of the
interactions of computing and writing and will focus on long- range
prospects. Potential contributors include Richard Mayer, Colette
Daiute, Cynthia Selfe and Jim Levin.
I would be pleased to hear of any papers or any ongoing studies that
relate to this exciting topic. Please respond asap by net to Hirshfield
at RADC-multics, or write directly to Stuart Hirshfield, Department of
Mathematics and Computer Science, Hamilton College, Clinton, NY 13323.
------------------------------
Date: 25 Sep 1986 17:48 (Thursday)
From: munnari!nswitgould.oz!wray@seismo.CSS.GOV (Wray Buntine)
Subject: Prospector ESs for IBM-PC
OK, I've seen the recent list of IBM-PC Expert System Shells,
but which PROSPECTOR-type shells have the ability to link in
external routines? (E.g., we have some C code that provides
answers for some leaf nodes.) I'd be grateful for any pointers
re reliability and backup as well.
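[The hook Buntine asks for -- leaf nodes answered by linked-in external
routines rather than user prompts -- can be sketched in a few lines. The
sketch below is hypothetical illustration only (the rule table, `register`,
and `prove` are invented names, not the API of any actual PROSPECTOR-derived
shell), and Python stands in for the shell's host language:

```python
# Hypothetical sketch: a tiny backward-chaining shell where leaf
# goals may be answered by registered external routines (standing
# in for the C code Buntine mentions) instead of user prompts.

RULES = {
    # goal -> list of subgoals that must all hold
    "ore_deposit": ["favorable_rock", "sulfide_present"],
}

EXTERNAL = {}  # leaf name -> callable returning True/False

def register(leaf, fn):
    """Attach an external routine to a leaf node."""
    EXTERNAL[leaf] = fn

def prove(goal):
    if goal in RULES:                 # internal node: chain on subgoals
        return all(prove(g) for g in RULES[goal])
    if goal in EXTERNAL:              # leaf answered by external routine
        return EXTERNAL[goal]()
    raise LookupError(f"no source of evidence for {goal!r}")

# External "C" routines simulated here as Python callables:
register("favorable_rock", lambda: True)
register("sulfide_present", lambda: True)

print(prove("ore_deposit"))           # True
```

In a real shell the callables would be compiled routines bound in at link
time, and would typically return certainty values rather than booleans. - LEFF]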
Wray Buntine
wray@nswitgould.oz.au@seismo
seismo!munnari!nswitgould.oz!wray
Computing Science
NSW Inst. of Tech.
PO Box 123, Broadway, 2007
Australia
------------------------------
Date: 26 Sep 1986 11:08-EDT
From: Hans.Tallis@ml.ri.cmu.edu
Subject: Learning via ES Rule Refinement?
I am working in learning by refining a given set of
expert system rules. Ideally the learning cycle will involve no
humans in the loop. I am familiar with Politakis's SEEK work already, but
pointers to other programs would be greatly appreciated.
--tallis@ml.ri.cmu.edu
------------------------------
Date: Thu, 25 Sep 86 11:10:16 edt
From: philabs!micomvax!peters@tezcatlipoca.CSS.GOV
Reply-to: micomva!peters@tezcatlipoca.CSS.GOV (peter srulovicz)
Subject: character recognition
We are starting a project that will involve a fair amount of character
recognition, both typed and handwritten. If anyone out there has information
about public domain software or software that can be purchased please let me
hear from you.
email: !philabs!micomvax!peters
mail: Peter Srulovicz
Philips Information Systems
600 Dr. Philips Blvd
St. Laurent Quebec
Canada H4M-2S9
------------------------------
Date: 26 Sep 1986 11:13:45 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: OPS5 on the PC
There is an OPS5 called TOPSI available for the IBM PC from
Dynamic Master Systems, Inc (404)565-0771
------------------------------
Date: Thu, 25 Sep 86 12:09:16 GMT
From: Gordon Joly <XTSY13%syse.surrey.ac.uk@Cs.Ucl.AC.UK>
Subject: Re: What's wrong with Turbo Prolog
Was Clocksin and Mellish handed down on tablets of stone? And which PROLOG
can claim to fulfill all the theoretical goals, e.g. be truly declarative?
Gordon Joly.
INET: joly%surrey.ac.uk@cs.ucl.ac.uk
EARN: joly%uk.ac.surrey@AC.UK
------------------------------
Date: 25 Sep 1986 14:45:40 EDT (Thu)
From: Dan Hoey <hoey@nrl-aic.ARPA>
Subject: Xerox vs Symbolics -- Reference counts vs Garbage collection
In AIList Digest V4 #191, Steven J. Clark responds to the statement
that ``Garbage collection is much more sophisticated on Symbolics''
with his belief that ``To my knowledge this is absolutely false. S.
talks about their garbage collection more, but X's is better.''
Let me first deplore the abuse of language by which it is claimed that
Xerox has a garbage collector at all. In the language of computer
science, Xerox reclaims storage using a ``reference counter''
technique, rather than a ``garbage collector.'' This terminology
appears in Knuth's 1973 *Art of Computer Programming* and originated in
papers published in 1960. I remain undecided as to whether Xerox's
misuse of the term stems from an attempt at conciseness, ignorance of
standard terminology, or a conscious act of deceit.
The question remains of whether Interlisp-D or Zetalisp has the more
effective storage reclamation technique. I suspect the answer depends
on the programmer. If we are to believe Xerox, the reference counter
technique is fundamentally faster, and reclaims acceptable amounts of
storage. However, it is apparent that reference counters will never
reclaim circular list structure. As a frequent user of circular list
structure (doubly-linked lists, anyone?), I find the lack tantamount to
a failure to reclaim storage. Apparently Xerox's programmers perform
their own careful deallocation of circular structures (opening the
cycles before dropping the references to the structures). If I wanted
to do that, I would write my programs in C.
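[Hoey's point about cycles can be made concrete in any system that layers a
tracing collector over reference counting. The sketch below uses Python purely
as a stand-in for Interlisp/Zetalisp (CPython happens to combine reference
counting with an optional cycle collector, which is exactly the hybrid Hoey
advocates): a two-element doubly-linked cycle survives the loss of all
external references until a tracing collector runs.

```python
import gc
import weakref

class Node:
    """A doubly-linked node; each link is a counted reference."""
    def __init__(self):
        self.next = None
        self.prev = None

gc.disable()               # leave only reference counting active

a, b = Node(), Node()
a.next, b.prev = b, a      # cycle: a -> b -> a
probe = weakref.ref(a)     # dies only when 'a' is truly reclaimed

del a, b                   # drop all external references...
print(probe() is None)     # False: cycle keeps refcounts nonzero

gc.enable()
gc.collect()               # tracing collector finds and breaks the cycle
print(probe() is None)     # True
```

Pure reference counting alone would leak the two nodes forever, which is the
"failure to reclaim storage" complained of above. - LEFF]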
I have never understood why Xerox continues to neglect to write a
garbage collector. It is not necessary to stop using reference counts,
but simply to have a garbage collector available for those putatively
rare occasions when they run out of memory.
Dan Hoey
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Summary, August 1986
Spang Robinson Report Summary, August 1986, Volume 2 No. 8
23 Artificial Intelligence Application Products are out and are being used by
customers.
Spang Robinson tracked down 92 specific applications in 56 different
companies, agencies or institutions that are being used by someone
other than the developers. 24 of these are in diagnostics, 22 in
manufacturing, 14 in computers, 6 in geology, 6 in chemistry, 5 in
military, 4 in agriculture, 4 in medicine and 7 in "other".
DEC has 20 expert systems in use with 50 under development. IBM has
six in use and 64 in development.
TSA Associates estimates that there are 1000 applications fielded on microcomputers.
Dataquest claims that revenues from shell products will reach 44
million in 1986, up from 22 million in 1985. The majority of this is
for product training as opposed to the actual price of the product. They
estimate that expert systems applications will reach ten million.
AIC has sold 500 copies of Intellect, a high-end natural language
package and will receive 6 to 8 million dollars of revenue in 1986.
Symantec has sold 17,000 copies of Q&A, a [micro - LEFF] product
with embedded natural language.
There are 24 to 30 companies with viable commercial speech recognition
products with market growth between 20 and 30 percent. The 1986
market will be 20 million up from 16 million.
There are 100 companies in machine vision. 1985 market is estimated
at 150 million dollars. General Motors bought 50 million dollars' worth of
these products.
Also, there is a discussion of estimates of how many working expert
systems there are for each expert-shell product.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Micro Trends
Teknowledge has 2500 run-time systems. Level 5 has 50 completed applications
with 200 run-time systems sold. One of these has 3000 rules spread
across nine knowledge bases. Exsys has 200 applications with
2100 run-times.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
List of commercially available expert systems
Bravo: VLSI circuit design and layout (Applicon)
Equinox: sheet metal design (Applicon)
Mechanical Advantage 1000: MCAE with intelligent sketchpad (Cognition)
Manufacturing and Operations Management and Financial Advisor (Palladian)
Expert Manufacturing Planning Systems (Tipnis, Inc.)
PlanPower: financial planning system (Applied Expert System)
Planman and Database: financial planning and report writer (Sterling
Wentworth Corp.)
Profit Tool: financial services sales aid (Prophecy Development Corp)
Stock Portfolio Analysis and Futures Price Indexing (Athena Group, NY)
Newspaper Layout System (Composition Systems)
CEREBRAL MANAGER: manages document release process (KODAK)
ICAD: production design system (ICAD, Inc.)
MORE: direct marketing advisor and evaluation of mailing lists
ULTRAMAX: a self-learning expert system to optimize operations (Ultramax Corp.)
TRANSFORM/IMS: applications generator in COBOL (Transform Logic, Inc.)
TIMM TUNER: tuning for DEC VAXs (General Research Corporation)
HYPERCALC: an intelligent spreadsheet for LISP machines (Chaparral Dallas)
REFINE: knowledge based software development environment (Reasoning
systems, Inc.)
XMP: Expert Project Manager (XSP Corporation)
LEXAN: diagnostics for injection-molded plastic parts (GE)
Internally developed expert systems
Computers and electronics
XCON, XSEL, XSITE: configure VAX orders, check them for accuracy and plan site
layout (DEC)
CALLISTRO: assists in managing resources for chip designers (DEC)
DAS-LOGIC: assists logic designers
COMPASS analyzes maintenance records for telephone switching system
and suggests maintenance actions
???? - System for design of digital circuits (Hughes)
CSS: aids in planning relocation, reinstallation and rearrangement of
IBM mainframes (IBM)
PINE: guides people writing reports on analysis of software problems (IBM)
QMF Advisor: used by customer advisors to help customers access IMS
databases (IBM)
Capital Assets Movements: helps move capital assets quickly
OCEAN: checks orders for computer systems (NCR)
Diagnostic and/or preventive maintenance systems, internal use
AI-Spear: tape drives (DEC)
NTC: Ethernet and DECNET networks (DEC)
PIES circuit fabrication line (Fairchild)
Photolithography advisor: photolithography steps (Hewlett-Packard)
DIG Voltage Tester: digital voltage sources in testing lab (Lockheed)
BDS: baseband distribution system of communications hardware (Lockheed)
ACE: telephone lines (Southwest Bell)
DIAG8100 DP equipment (Travelers Insurance)
????: soup cookers (Campbell Soups)
Engine Cooling Advisor: engine cooling system (DELCO Products)
???? - peripherals (Hewlett-Packard)
PDS: machine processes (Westinghouse)
DOC: hardware and software bug analysis for Prime 750 (Prime)
???: hardware (NCR)
TITAN: TI 990 Minicomputer (Radian/TI)
Radar Tracking: object tracking software for radar
(Arthur D. Little/Defense Contractor)
????: circuit board (Hughes)
XMAN: aircraft engines (Systems Control Technology/Air Force Logistics Command)
????: circuit fault (Martin Marietta)
????: power system diagnosis (NASA)
Manufacturing or design, internal developed
????: brushes and springs for small electric motors (Delco)
ISA: schedules orders for manufacturing and delivery (DEC)
DISPATCHER: schedules dispatching of parts for robots (DEC)
ISI: schedules manufacturing steps in job shop (Westinghouse)
CELL DESIGNERS: reconfigures factories for group technologies (Arthur Anderson)
WELDSELECTOR: welding engineering (Colorado School of Mines and TI)
????: configures aircraft electrical system components (Westinghouse)
CASE: electrical connector assembly (BOEING)
FACTORY LAYOUT: ADL
TEST FLOW DESIGN: quality test and rework sequencing (ADL for defense
contractor)
PTRANS: planning computer systems (DEC/CMU)
PROCESS CONTROL: monitors alkylation plant (ADL)
TEST FOR STORAGE SUBSYSTEM HARDWARE: IBM
???: Capacity Planning for System 38 (IBM)
??? optimization of chemical plant for EXXON
???: manage and predict weather conditions TEXACO
???: manufacturing simulation BADGER CO.
???: expert system connected to robot HERMES (Oak Ridge National Lab)
???: nuclear fuel enhancement (Westinghouse)
???: dry dock loading (General Dynamics)
Medicine, internal development
????: serum protein analysis: Helena Labs
PUFF: pulmonary function test interpretation: Pacific Medical Center
ONCOCIN: cancer therapy manager: Stanford Oncology Clinic
CORY: diagnoses invasive cardiac tests: Cedars-Sinai Medical Center
TQMSTUNE: tunes triple quadrupole mass spectrometer
(Lawrence Livermore National Labs)
DENDRAL: Molecular Design, Ltd.
Synchem: plans chemical synthesis tests: SUNY-Stonybrook
THEORISTS: polymer properties (3M)
???: organic chemical analysis (Hewlett-Packard)
APPL: real time control of chemical processes related to aircraft parts
(Lockheed-Georgia)
Geology Internally Developed Systems
SECOFOR: drill bit sticking problems (Elf-Aquitaine)
GEOX: identifies earth minerals from remotely sensed hyperspectral image data
(NASA)
MUDMAN: diagnoses drilling mud problems (NL Industries)
ONIX and DIPMETER ADVISOR: oil well logging data related systems (Schlumberger)
TOGA: analyzes power transformer conditions (Radian, for Hartford
Steam Boiler Inspection and Insurance Co.)
Agriculture Internally Developed Systems
WHEAT COUNSELOR: disease control (ICI)
POMME: apple orchard management (VA Poly Inst.)
PLANT/cd and PLANT/ds: soybean diseases (University of Illinois)
GRAIN MARKETING ADVISOR: (Purdue University and TI)
Military
AALPS: cargo planning for aircraft (US Army)
RNTDS: design command and control programs for ships (Sperry)
SONAR DOME TESTING: analysis of trials of sonar systems (ADL for defense
contractor)
NAVEX: assistant to shuttle operations (NASA)
IMAGE INTERPRETATION: analyzes aerial reconnaissance photos (ADL for defense
contractor)
Other
INFORMART ADVISOR: Advises shoppers on computer purchases
TVX: Teaches VMS operating systems (DEC)
DECGUIDE: teaches rules for design checking (Lockheed)
SEMACS: monitors Securities Industry Automation Corporation network (SIAC/Sperry)
Financial Statement Analyser: Arthur Anderson
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Neuron Data plans to have NEXPERT running on the PC/AT, and the MICRO VAX.
The new system will have frames, object hierarchies and the ability to
move data among concurrently running programs which will allow them to
do blackboarding.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Paine Webber has downgraded Symbolics from "Buy" to "Attractive" due
to "marketplace confusion caused by Symbolics' imminent transition to
gate-array-based."
IntelliCorp got a "neutral" rating from Paine Webber due to the fact that
it runs "unacceptably slowly" and that "rapid expansion and redeployment
of talent may strain IntelliCorp's sales force's ability to produce."
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Symbolics prices
The 3620 will sell for $49,900 and the 3650 for $65,900. Symbolics
has introduced a product to allow developers to prevent users from accidentally
accessing underlying software utilities.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Ibuki has announced Kyoto Common Lisp. It takes 1.4MB with the kernel
in C. It costs $700.00 and runs on AT&T 3B2, Integrated Solutions,
Ultrix, Suns, and 4BSD.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Integrated Inference Machines has announced the SM45000 symbolic
machine. It is microcodable for various languages and costs from $39,000
to $44,000. The company claims more performance than a Symbolics.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Reviews of Wendy B. Rauch-Hindin's two-volume Artificial Intelligence in
Business, Science and Industry; Artificial Intelligence Enters the Marketplace
by Larry Harris and Dwight Davis; and Who's Who in Artificial Intelligence.
The latter contains 399 individual biographies as well as other info.
------------------------------
End of AIList Digest
********************
∂26-Sep-86 2251 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #199
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 26 Sep 86 22:51:29 PDT
Date: Fri 26 Sep 1986 10:54-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #199
To: AIList@SRI-STRIPE
AIList Digest Friday, 26 Sep 1986 Volume 4 : Issue 199
Today's Topics:
Review - Canadian Artificial Intelligence, June 1986,
Philosophy - Intelligence, Consciousness, and Intensionality
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Canadian Artificial Intelligence, June 1986
Summary:
Report of Outgoing and Ingoing Presidents
Interact R&D Starts AI division
Review of 1986 Canadian AI Conference at Montreal. It had 375
people registered. Best paper was by James Delgrande of Simon
Fraser University.
The Canadian Society for Computational Studies of Intelligence is
now up to 800 members, from 250 two years ago. (This was prior to including
people who became members upon paying non-member fees at the Canadian
AI conference).
Proceedings of the 1986 Conference cost $30.00.
Contents
Why Kids Should Learn to Program,
Elliot Soloway, Yale University
Generative Structure in Enumerative Learning Systems
Robert C. Holte, Brunel University,
R. Michael Warton, York University
Detecting Analogous Learning
Ken Wellsch, Marlene Junes of University of Waterloo
GUMS: A General User Modeling System
Tim Finin, University of Pennsylvania
Dave Drager, Arity Corporation
An Efficient Tableau-Based Theorem Prover
Franz Oppacher, Ed Suen of Carleton University
Domain Circumscription Revisited
David Etherington, University of British Columbia
Robert Mercer, University of Western Ontario
A Propositional Logic for Natural Kinds
James Delgrande, Simon Fraser University
Fagin and Halpern on Logical Omniscience: A Critique with an Alternative
Robert F. Hadley Simon Fraser University
Representing Contextual Dependencies in Discourse
Tomek Strzalkowski, Simon Fraser University
A Domain-Independent Natural Language Database Interface
Yawar Ali, Raymond Aubin, Barry Hall, Bell Northern Research
Natural Language Report Synthesis: An Application to Marine Weather Forecasts
R. Kittredge, A. Polguere of Universite de Montreal
E. Goldberg Environment Canada
What's in an Answer: A Theoretical Perspective on Deductive Question Answering
Lenhart Schubert, L. Watanabe of University of Alberta
A New Implementation for Generalized Phrase Structure Grammar
Philip Harrison, Michael Maxwell Boeing Artificial Intelligence Center
TRACK: Toward a Robust Natural Language Interface
Sandra Carberry, University of Delaware
Representation of Negative and Incomplete Information in Prolog
Kowk Hung Chan, University of Western Ontario
On the Logic of Representing Dependencies by Graphs,
Judea Pearl of University of California
Azaria Paz Technion, Israel Institute of Technology
A proposal of Modal Logic Programming (Extended Abstract)
Seiki Akama, Fujitsu ltd., Japan
Classical Equality and Prolog
E. W. Elcock and P. Hoddinott of University of Western Ontario
Diagnosis of Non-Syntactic Programming Errors in the Scent Advisor
Gordon McCalla, Richard B. Bunt, Janelle J. Harms of University of
Saskatchewan
Using Relative Velocity Information to Constrain the Motion Correspondence
Problem
Michael Dawson and Zenon Pylyshyn, University of Western Ontario
Device Representation Using Instantiation Rules and Structural Templates
Mingruey R. Taie, Sargur N. Srihari, James Geller, Stuart C. Shapiro
of State University of New York at Buffalo
Machine Translation Between Chinese and English
Wanying Jin, University of Texas at Austin
Interword Constraints in Visual Word Recognition
Jonathan J. Hull, State University of New York at Buffalo
Sensitivity to Corners in Flow Patterns
Norah K. Link and Steve Zucker, McGill University
Stable Surface Estimation
Peter T. Sander, Steve Zucker, McGill University
Measuring Motion in Dynamic Images: A Clustering Approach
Amit Bandopadhay and R. Dutta, University of Rochester
Determining the 3-D Motion of a Rigid Surface Patch without Correspondence,
Under Perspective Projection
John Aloimonos and Isidore Rigoutsos, University of Rochester
Active Navigation
Amit Bandopadhay, Barun Chandra and Dana H. Ballard, University of Rochester
Combining Visual and Tactile Perception for Robotics
J. C. Rodger and Roger A. Browse, Queens University
Observation on the Role of Constraints in Problem Solving
Mark Fox of Carnegie-Mellon University
Rule Interaction in Expert System Knowledge Bases
Stan Raatz, University of Pennsylvania
George Drastal, Rutgers University
Towards User specific Explanations from Expert Systems
Peter van Beek and Robin Cohen, University of Waterloo
DIALECT: An Expert Assistant for Information Retrieval
Jean-Claude Bassano, Universite de Paris-Sud
Subdivision of Knowledge for Igneous Rock Identification
Brian W. Otis, MIT Lincoln Lab
Eugene Freuder, University of New Hampshire
A Hybrid, Decidable, Logic-Based Knowledge Representation System
Peter Patel-Schneider, Schlumberger Palo Alto Research
The Generalized-Concept Formalism: A Frames and Logic Based Representation
Model
Mira Balaban, State University of New York at Albany
Knowledge Modules vs Knowledge-Bases: A Structure for Representing the
Granularity of Real-World Knowledge
Diego Lo Giudice and Piero Scaruffi, Olivetti Artificial Intelligence Center,
Italy
Reasoning in a Hierarchy of Deontic Defaults
Frank M. Brown, University of Kansas
Belief Revision in SNePS
Joao P. Martins, Instituto Superior Tecnico, Portugal
Stuart C. Shapiro, State University of New York at Buffalo
GENIAL: Un Generateur d'Interface en Langue Naturelle
Bertrand Pelletier et Jean Vaucher, Universite de Montreal
Towards a Domain-Independent Method of Comparing Search Algorithm Run-times
H. W. Davis, R. B. Polack, D. J. Golden of Wright State University
Properties of Greedily Optimized Ordering Problems
Rina Dechter, Avi Dechter, University of California, Los Angeles
Mechanisms in ISFI: A Technical Overview (Short Form)
Gary A. Cleveland, The MITRE Corp.
Un Systeme Formel de Caracterisation de L'Evolution des Connaissances
Eugene Chouraqui, Centre National de la Recherche Scientifique
Une Experience de l'Ingenierie de la Connaissance: CODIAPSY Developpe avec
HAMEX
Michel Maury, A. M. Massote, Henri Betaille, J. C. Penochet et Michelle
Negre of CRIME et GRIP, Montpellier, France
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Report on University of Waterloo Research on Logic Mediated Knowledge
Based Personal Information Systems
They received a 3-year, $450,000 grant. They will prototype Theorist, a PROLOG-
based system, in which they will implement a diagnostic system with a natural
language interface for complex systems and a system to diagnose children's
reading disabilities. They will also develop a new Prolog in which
to write Theorist.
This group has already implemented DLOG, a "logic-based knowledge representation
system", two Prologs (one of which will be distributed by University of
Waterloo's Computer System Group), designed Theorist, implemented an expert
system for diagnosing reading disabilities (which will be redone in Theorist),
designed a new architecture for Prolog, and implemented Concurrent Prolog.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Reviews of John Haugeland's "Artificial Intelligence: The Very Idea",
"The Connection Machine" by W. Daniel Hillis, and "Models of the Visual
Cortex" by David Rose and Vernon G. Dobson.
------------------------------
Date: 25 Sep 86 08:12:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Intelligence and Representation
This is in response to some points raised by Charles Kalish -
Allow a somewhat lengthy re-quotation to set the stage:
I think that Dennett (see "Brainstorms") is right in that intentions
are something we ascribe to systems and not something that is built in
or a part of that system. The problem then becomes justifying the use
of intentional descriptions for a machine; i.e. how can I justify my
claim that "the computer wants to take the opponent's queen" when the
skeptic responds that all that is happening is that the X procedure
has returned a value which causes the Y procedure to move piece A to
board position Q?...
I think the crucial issue in this question is how much (or whether)
the computer understands. The problem with systems now is that it is
too easy to say that the computer doesn't understand anything, it's
just manipulating markers. That is that any understanding is just
conventional -- we pretend that variable A means the Red Queen, but it
only means that to us (observers) not to the computer. ...
[Pirron's] idea is that you want to ground the computer's use of
symbols in some non-symbolic experience....
One is looking for pre-symbolic, biological constraints; Something
like Rosch's theory of basic levels of conceptualization. ....
The other point is that maybe we do have to stay within this symbolic
"prison-house" after all; even the biological concepts are still
represented, not actual (no food in the brain, just neuron firings).
The thing here is that, even though you could look into a person's
brain and, say, pick out the neural representation of a horse, to the
person with the open skull that's not a representation, it constitutes
a horse, it is a horse (from the point of view of the neural system).
And that's what's different about people and computers. ...
These seem to me the right sorts of questions to be asking - here's a stab
at a partial answer.
We should start with a clear notion of "representation" - what does it mean
to say that the word "rock" represents a rock, or that a picture of a rock
represents a rock, or that a Lisp symbol represents a chess piece?
I think Dennett would agree that X represents Y only relative to some
contextual language (very broadly construed as any halfway-coherent
set of correspondence rules), hopefully with the presence of
an interpreter. E.g., "rock" means rock in English to English-speakers.
opp-queen means opponent's queen in the mini-language set up by the
chess-playing program, as understood by the author. To see the point a
bit more, consider the word "rock" neatly typed out on a piece of paper
in a universe in which the English language does not and never will exist.
Or consider a computer running a chess-playing program (maybe against
another machine, if you like) in a universe devoid of conscious entities.
I would contend that such entities do not represent anything.
So, roughly, representation is a 4-place relation:
R(representer, represented, language, interpreter)
  "rock"            a rock    English              people
  picture of rock   a rock    visual similarity    people, maybe
                                                   some animals
...
and so on.
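The 4-place relation is easy to sketch in code. The following Python model is illustrative only (the tuples and the `represents` helper are not from the original posting); its point is Cugini's: a representer represents something only when the language and interpreter places are also filled.

```python
# A minimal sketch (not from the original posting) of the 4-place
# representation relation R(representer, represented, language, interpreter).
from typing import NamedTuple

class Representation(NamedTuple):
    representer: str
    represented: str
    language: str
    interpreter: str

R = [
    Representation('"rock"', "a rock", "English", "English speakers"),
    Representation("picture of a rock", "a rock", "visual similarity",
                   "people, maybe some animals"),
    Representation("opp-queen", "opponent's queen",
                   "chess program's mini-language", "the program's author"),
]

def represents(representer, represented):
    """True only when some language and interpreter fill the other places."""
    return any(r.representer == representer and r.represented == represented
               for r in R)

# In a universe with no English and no interpreters, R is empty and
# '"rock"' represents nothing -- which is exactly the contention above.
```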
Now... what seems to me to be different about people and computers is that
in the case of computers, meaning is derivative and conventional, whereas
for people it seems intrinsic and natural. (Huh?) ie, Searle's point is
well taken that even after we get the chess-playing program running, it
is still we who must be around to impute meaning to the opp-queen Lisp
symbol. And furthermore, the symbol could just as easily have been
queen-of-opponent. So for the four places of the representation relation
to get filled out, to ground the flying symbols, we still need people
to "watch" the two machines. By contrast two humans can have a perfectly
valid game of chess all by themselves, even if they're Adam and Eve.
Now people certainly make use of conventional as well as natural
symbol systems (like English, for instance). But other representers in
our heads (like the perception of a horse, however encoded neurally)
seem to *intrinsically* represent. I.e., for the representation
relation, if "my perception of a horse" is the representer, and the
horse out there in the field is the represented thing, the language
seems to be a "special", natural one, namely the-language-of-normal-
veridical-perception. (BTW, it's *not* the case, as stated in
Charles's original posting, that the perception simply is the horse -
we are *not* different from computers with respect to
the-use-of-internal-things-to-represent-external-things.)
Further, it doesn't seem to make much sense at all to speak of an
"interpreter". If *I* see a horse, it seems a bit schizophrenic to
think of another part of myself as having to interpret that
perception. In any event, note that this is self-interpretation.
So people seem to be autonomous interpreters in a way that computers
are not (at least not yet). In Dennett's terminology, it seems that
I (and you) have the authority to adopt an intentional stance towards
various things (chess-playing machines, ailist readers, etc.),
*including* ourselves - certainly computers do not yet have this
"authority" to designate other things, much less themselves,
as intentional subjects.
Please treat the above as speculation, not as some kind of air-tight
argument (no danger of that anyway, right?)
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: Thu 25 Sep 86 10:24:01-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Emergent Consciousness
Recent philosophical discussions on consciousness and intentionality
have made me wonder about the analogy between Man and Bureaucracy.
Imagine a large corporation. Without knowing the full internal chain of
command, an external observer could still deduce many of the following
characteristics.
1) The corporation is composed of hundreds of nearly identical units
(known as personnel), most of whom perform material-handling
or information-handling tasks. Although the tasks differ, the
processing units are essentially interchangeable.
2) The "intelligence" of this system is distributed -- proper functioning
of the organization requires cooperative action by many rational agents.
Many tasks can be carried out by small cliques of personnel without
coming to the attention of the rest of the system. Other tasks require
the cooperation of all elements.
3) Despite the similarity of the personnel, some are more "central" or
important than others. A reporter trying to discover what the
organization is "doing" or "planning" would not be content to talk
with a janitor or receptionist. Even the internal personnel recognize
this, and most would pass important queries or problems to more central
personnel rather than presume to discuss or set policy themselves.
4) The official corporate spokesman may be in contact with the most
central elements, but is not himself central. The spokesman is only
an output channel for decisions that occur much deeper or perhaps in a
distributed manner. Many other personnel seem to function as inputs or
effectors rather than decision makers.
5) The chief executive officer (CEO) or perhaps the chairman of the board
may regard the corporation as a personal extension. This individual
seems to be the most central, the "consciousness" of the organization.
To paraphrase Louis XIV, "I am the state."
It seems, therefore, that the organization has not only a distributed
intelligence but a localized consciousness. Certain processing elements
and their own thought processes control the overall behavior of the
bureaucracy in a special way, even though these elements (e.g., the CEO)
are physiologically indistinguishable from other personnel. They are
regarded as the seat of corporate consciousness by outsiders, insiders,
and themselves.
Consciousness is thus related to organizational function and information
flow rather than to personal function and characteristics. By analogy,
it is quite possible that the human brain contains a cluster of simple
neural "circuits" that constitute the seat of consciousness, even though
these circuits are indistinguishable in form and individual functioning
from all the other circuits in the brain. This central core, because of
its monitoring and control of the whole organism, has the right to
consider itself the sole autonomous agent. Other portions of the brain
would reject their own autonomy if they were equipped to even consider
the matter.
I thus regard consciousness as a natural emergent property of hierarchical
systems (and perhaps of other distributed systems). There is no need to
postulate a mind/body dualism or a separate soul. I can't explain how
this consciousness arises, nor am I comfortable with the paradox. But I
know that it does arise in any hierarchical organization of cooperating
rational agents, and I suspect that it can also arise in similar organizations
of nonrational agents such as neural nets or computer circuitry.
-- Ken Laws
------------------------------
Date: 25 Sep 1986 1626-EDT
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: semantic knowledge
howdy.
i think there was a discussion on searle that i missed a month or so
ago, so this may be rehash. i disagree with the basic conjecture on
which searle bases all of his logic, namely that since computers represent
everything in terms of 1's and 0's they are by definition storing
knowledge syntactically and not semantically. this seems wrong to me.
as a simple counterexample, consider any old integer stored within a
computer. it may be stored as a string of bits, but the program
implicitly has the "semantic" knowledge that it is an integer.
similarly, the stored activation levels and connection strengths in a
connectionist model simulator (or better, in a true hardware
implementation) may be stored as a bunch of numerical values, but the
software (ie, the model, not the simulator) semantically "knows" what
each value is just as the brain knows the meaning of activation patterns
over neurons and synapses (or so goes the theory).
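the integer example can be made concrete: the very same four bytes are an integer or a floating-point number depending only on how the program reads them. (a small illustrative sketch, not part of the original post.)

```python
# the same bit pattern, read two ways -- the "semantics" is in the
# program's use of the bits, not in the bits themselves.
# (illustrative sketch, not from the original post.)
import struct

bits = struct.pack(">I", 1078530011)     # one fixed 4-byte pattern

as_int = struct.unpack(">I", bits)[0]    # read as an unsigned integer
as_float = struct.unpack(">f", bits)[0]  # read as an IEEE-754 float

print(as_int)    # 1078530011
print(as_float)  # about 3.1415927 -- same bits, different "meaning"
```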
i think the same can be said for data stored in a more conventional AI
program. in response to a recent post, i don't think that there is a
fundamental difference between a human's knowledge of a horse and a
computer's manipulation of the symbol it uses to represent one. the
only differences are the inherently associative nature of the brain and
the amount of knowledge stored in the brain. i think that it is these
two things that give us a "feel" for what a horse is when we think of
one, while most computer systems would make a small fraction of the
associations and would have much less knowledge and experience to
associate with. these are both computational differences, not
fundamental ones.
none of this is to say that we are close or getting close to a seriously
"intelligent" computer system. i just don't think that there are
fundamental philosophical barriers in our way.
bruce krulwich
arpa: krulwich@c.cs.cmu.edu
bitnet: krulwich%c.cs.cmu.edu@cmccvma
------------------------------
End of AIList Digest
********************
∂29-Sep-86 0011 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #200
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 29 Sep 86 00:11:33 PDT
Date: Sun 28 Sep 1986 21:38-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #200
To: AIList@SRI-STRIPE
AIList Digest Monday, 29 Sep 1986 Volume 4 : Issue 200
Today's Topics:
Seminars - Chemical Structure Generation (SU) &
Fuzzy Relational Databases (SMU) &
General Logic (MIT) &
Generic Tasks in Knowledge-Based Reasoning (MIT),
Conference - Workshop on Qualitative Physics
----------------------------------------------------------------------
Date: Mon 22 Sep 86 23:39:33-PDT
From: Olivier Lichtarge <LICHTARGE@SUMEX-AIM.ARPA>
Subject: Seminar - Chemical Structure Generation (SU)
I will be presenting my thesis defense in biophysics Thursday
September 25 in the chemistry Gazebo, starting at 2:15.
Solution Structure Determination of Beta-endorphin by NMR
and
Validation of Protean: a Structure Generation Expert System
Solution structure determination of proteins by Nuclear Magnetic
Resonance involves two steps. First, the collection and interpretation
of data, from which the secondary structure of a protein is
characterized and a set of constraints on its tertiary structure
identified. Secondly, the generation of 3-dimensional models of the
protein which satisfy these constraints. This thesis presents work in
both these areas: one and two-dimensional NMR techniques are applied
to study the conformation of @g(b)-endorphin; and Protean, a new
structure generation expert system is introduced and validated by
testing its performance on myoglobin.
It will be shown that @g(b)-endorphin is a random coil in water. In
a 60% methanol and 40% water mixed solvent the following changes take
place: an @g(a)-helix is induced between residues 14 and 27, and a
salt bridge forms between Lysine28 and Glutamate31; however, there
is still no strong evidence for the presence of tertiary structure.
The validation of Protean establishes it as an unbiased and accurate
method of generating a representative sampling of all the possible
conformations which satisfy the experimental data. At the solid level,
the precision is good enough to clearly define the topology of the
protein. An analysis of Protean's performance using data sets of
dismal to ideal quality permits us to define the limits of the
precision with which a structure can be determined by this method.
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Seminar - Fuzzy Relational Databases (SMU)
Design of Similarity-Based (Fuzzy) Relational Databases
Speaker: Bill P. Buckles, University of Texas, Arlington
Location: 315 SIC, Southern Methodist University
Time: 2:00 PM
While the core of an expert system is its inference mechanism, a
common component is a database or other form of knowledge
representation. The authors have developed a variation of the
relational database model in which data that is linguistic or
inherently uncertain may be represented. The keystone concept of
this representation is the replacement of the relationship " is
equivalent to" with the relationship "is similar to". Similarity is
defined in fuzzy set theory as an $n sup 2$ relationship over a
domain D, |D| = n, such that
i.   s(x,x) = 1;                                  x member D
ii.  s(x,y) = s(y,x);                             x, y member D
iii. s(x,z) >= max over y [min(s(x,y), s(y,z))];  x, y, z member D
Beginning with a universal relation, a method is given for developing
the domain sets, similarity relationships and base relations for a
similarity-based relational database. The universal relation itself
enumerates all domains. The domain sets may be numeric (in which case
no further design is needed) or scalar (in which case the selection of
a comprehensive scalar set is needed). The similarity relationship
contains $n sup 2$ values where n is the number of scalars in a domain
set. A method is described for developing a set of consistent values
when initially given n-1 values. The base relations are derived using
fuzzy functional dependencies. This step also requires the
identification of candidate keys.
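The three defining properties of a similarity relation (reflexivity, symmetry, max-min transitivity) can be checked mechanically. A minimal Python sketch follows; the similarity matrix is an illustrative hand-made example, not data from the talk.

```python
# Check the three defining properties of a fuzzy similarity relation.
# (Illustrative sketch; the matrix below is made up, not from the talk.)

def is_similarity(s):
    """s maps (x, y) pairs over a finite domain to values in [0, 1]."""
    domain = {x for x, _ in s}
    reflexive = all(s[(x, x)] == 1 for x in domain)
    symmetric = all(s[(x, y)] == s[(y, x)]
                    for x in domain for y in domain)
    # max-min transitivity: s(x,z) >= max over y of min(s(x,y), s(y,z))
    transitive = all(
        s[(x, z)] >= max(min(s[(x, y)], s[(y, z)]) for y in domain)
        for x in domain for z in domain)
    return reflexive and symmetric and transitive

s = {("a", "a"): 1, ("b", "b"): 1, ("c", "c"): 1,
     ("a", "b"): 0.8, ("b", "a"): 0.8,
     ("a", "c"): 0.8, ("c", "a"): 0.8,
     ("b", "c"): 0.9, ("c", "b"): 0.9}

print(is_similarity(s))  # True
```

Replacing equivalence with similarity in this way is what lets a tuple match a query "approximately", with the similarity value acting as the match threshold.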
------------------------------
Date: Fri 26 Sep 86 10:47:21-EDT
From: Lisa F. Melcher <LISA@XX.LCS.MIT.EDU>
Subject: Seminar - General Logic (MIT)
Date: Thursday, October 2, 1986
Time: 1:45 p.m......Refreshments
Time: 2:00 p.m......Lecture
Place: NE43 - 512A
" GENERAL LOGIC "
Gordon Plotkin
Department of Computer Science
University of Edinburgh, Scotland
A good many logics have been proposed for use in Computer Science.
Implementing them involves repeating a great deal of work. We propose a
general account of logics as regards both their syntax and inference rules.
As an immediate target we envision a system to which one inputs a logic,
obtaining a simple proof-checker. The ideas build on work in logic by
Paulson, Martin-Lof, and Schroeder-Heister, and in the typed lambda-calculus
of Huet and Coquand and of Meyer and Reinhold. The slogan is: Judgements are
Types. For example the judgement that a proposition is true is identified
with its type of proofs; general and hypothetical judgements are identified
with dependent product types. This gives one account of Natural Deduction.
It would be interesting to extend the work to consider (two-sided) sequent
calculi for classical and modal logics.
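The "Judgements are Types" slogan can be glossed with a small example in a modern proof assistant (Lean 4 here; these are generic propositions-as-types facts, not Plotkin's actual system):

```lean
-- The judgement "P is true" is identified with the type of proofs of P:
-- a term of type P ∧ Q → Q ∧ P *is* a proof of that proposition.
example (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun h => ⟨h.2, h.1⟩

-- A general judgement "for all x, P x" is a dependent product type,
-- and a hypothetical judgement is an ordinary function type.
example (α : Type) (P : α → Prop) (h : ∀ x, P x) (a : α) : P a := h a
```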
Sponsored by TOC, Laboratory for Computer Science
Albert Meyer, Host
------------------------------
Date: Fri 26 Sep 86 14:47:36-EDT
From: Rosemary B. Hegg <ROSIE@XX.LCS.MIT.EDU>
Subject: Seminar - Generic Tasks in Knowledge-Based Reasoning (MIT)
Date: Wednesday, October 1, 1986
Time: 2.45 pm....Refreshments
3.00 pm....Lecture
Place: NE43-512A
GENERIC TASKS IN KNOWLEDGE-BASED REASONING:
CHARACTERIZING AND DESIGNING EXPERT SYSTEMS AT THE
``RIGHT'' LEVEL OF ABSTRACTION
B. CHANDRASEKARAN
Laboratory for Artificial Intelligence Research
Department of Computer and Information Science
The Ohio State University
Columbus, Ohio 43210
We outline the elements of a framework for expert system design that
we have been developing in our research group over the last several years.
This framework is based on the claim that complex knowledge-based reasoning
tasks can often be decomposed into a number of @i(generic tasks, each
with associated types of knowledge and families of control regimes). At
different stages in reasoning, the system will typically engage in
one of the tasks, depending upon the knowledge available and the state
of problem solving. The advantages of this point of view are manifold:
(i) Since typically the generic tasks are at a much higher level of abstraction
than those associated with first generation expert system languages,
knowledge can be acquired and represented directly at the level appropriate to
the information processing task. (ii) Since each of the generic tasks
has an appropriate control regime, problem solving behavior may be
more perspicuously encoded. (iii) Because of a richer generic vocabulary
in terms of which knowledge and control are represented, explanation of
problem solving behavior is also more perspicuous. We briefly
describe six generic tasks that we have found very useful in our
work on knowledge-based reasoning: classification, state abstraction,
knowledge-directed retrieval, object synthesis by plan selection and
refinement,
hypothesis matching, and assembly of compound hypotheses for
abduction.
Host: Prof. Peter Szolovits
------------------------------
Date: Fri, 26 Sep 86 12:41:26 CDT
From: forbus@p.cs.uiuc.edu (Kenneth Forbus)
Subject: Conference - Workshop on Qualitative Physics
Call for Participation
Workshop on Qualitative Physics
May 27-29, 1987
Urbana, Illinois
Sponsored by:
the American Association for Artificial Intelligence
and
Qualitative Reasoning Group
University of Illinois at Urbana-Champaign
Organizing Committee:
Ken Forbus (University of Illinois)
Johan de Kleer (Xerox PARC)
Jeff Shrager (Xerox PARC)
Dan Weld (MIT AI Lab)
Objectives:
Qualitative Physics, the subarea of artificial intelligence concerned with
formalizing reasoning about the physical world, has become an important and
rapidly expanding topic of research. The goal of this workshop is to
provide an opportunity for researchers in the area to communicate results
and exchange ideas. Relevant topics of discussion include:
-- Foundational research in qualitative physics
-- Implementation techniques
-- Applications of qualitative physics
-- Connections with other areas of AI
(e.g., machine learning, robotics)
Attendance: Attendance at the workshop will be limited in order to maximize
interaction. Consequently, attendance will be by invitation only. If you
are interested in attending, please submit an extended abstract (no more
than six pages) describing the work you wish to present. The extended
abstracts will be reviewed by the organizing committee. No proceedings will
be published; however, a selected subset of attendees will be invited to
contribute papers to a special issue of the International Journal of
Artificial Intelligence in Engineering. There will be financial assistance
for graduate students who are invited to attend.
Requirements:
The deadline for submitting extended abstracts is February 10th. On-line
submissions are not allowed; hard copy only please. Any submission over 6
pages or rendered unreadable due to poor printer quality or microscopic font
size will not be reviewed. Since no proceedings will be produced, abstracts
describing papers submitted to AAAI-87 are acceptable. Invitations will be
sent out on March 1st. Please send 6 copies of your extended abstracts to:
Kenneth D. Forbus
Qualitative Reasoning Group
University of Illinois
1304 W. Springfield Avenue
Urbana, Illinois, 61801
------------------------------
End of AIList Digest
********************
∂29-Sep-86 0153 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #201
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 29 Sep 86 01:53:43 PDT
Date: Sun 28 Sep 1986 22:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #201
To: AIList@SRI-STRIPE
AIList Digest Monday, 29 Sep 1986 Volume 4 : Issue 201
Today's Topics:
Bibliography - Definitions & Recent Articles in Robotics and Vision
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Defs for ai.bib35, new keyword code for editorials
D MAG39 Computer Aided Design\
%V 18\
%N 3\
%D APR 1986
D MAG40 Automation and Remote Control\
%V 46\
%N 9 Part 2\
%D SEP 1985
D MAG41 IEEE Transactions on Industrial Electronics\
%V 33\
%N 2\
%D MAY 1986
D MAG42 Soviet Journal of Computer and Systems Sciences\
%V 23\
%N 6\
%D NOV-DEC 1985
D MAG43 Journal of Symbolic Computation\
%V 2\
%N 1\
%D MARCH 1986
D MAG44 Image and Vision Computing\
%V 3\
%N 4\
%D NOV 1985
D BOOK42 Second Conference on Software Development Tools, Techniques and Altern
atives\
%I IEEE Computer Society Press\
%C Washington\
%D 1985
D BOOK43 Fundamentals of Computation Theory (Cottbus)\
%S Lecture Notes in Computer Science\
%V 199\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK44 Robot Sensors, Volume 1 (Vision)\
%I IFS Publications\
%C Bedford\
%D 1986
D BOOK45 Robot Sensors, Volume 2 (Tactile and Non-Vision)\
%I IFS Publications\
%C Bedford\
%D 1986
D MAG45 Journal of Logic Programming\
%V 2\
%D 1985\
%N 3
D BOOK46 Advances in Cryptology\
%S Lecture Notes in Computer Science\
%V 196\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK47 Mathematical Foundations of Software Development V 1\
%S Lecture Notes in Computer Science\
%V 185\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG46 Proceedings of the 44th Session of the International
Statistical Institute\
%V 1\
%D 1983
D BOOK48 Seminar Notes on Concurrency\
%S Lecture Notes in Computer Science\
%V 197\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG47 Proceedings of the Conference "Algebra and Logic"\
%D 1984\
%C Zagreb
D MAG48 Pattern Recognition\
%V 19\
%N 2\
%D 1986
D MAG49 IEEE Transactions on Geoscience and Remote Sensing\
%V 24\
%N 3\
%D MAY 1986
D MAG50 Information and Control\
%V 67\
%N 1-3\
%D OCT-DEC 1985
D MAG51 Kybernetes\
%V 15\
%N 2\
%D 1986
D MAG52 Data Processing\
%V 28\
%N 3\
%D APR 1986
D MAG53 J. Tsinghua Univ.\
%V 25\
%D 1985\
%N 2
D MAG54 Logique et. Anal (n. S.)\
%V 28\
%D 1985\
%N 110-111
D MAG55 Werkstattstechnik wt Zeitschrift fur Industrielle Fertigung\
%V 76\
%N 5\
%D MAY 1986
D MAG56 Robotica\
%V 4\
%D APR 1986
D MAG57 International Journal of Man Machine Studies\
%V 24\
%N 1\
%D JAN 1986
D MAG58 Computer Vision, Graphics and Image Processing\
%V 34\
%N 1\
%D APR 1986
D BOOK49 Flexible Manufacturing Systems: Methods and Studies\
%S Studies in Management Science and Systems\
%V 12\
%I North Holland Publishing Company\
%C Amsterdam\
%D 1986
D MAG59 International Journal for Robotics Research\
%V 5\
%N 1\
%D Spring 1986
D BOOK50 International Symposium on Logic Programming\
%D 1984
D MAG61 Proceedings of the 1986 Symposium on Symbolic and\
Algebriaic Computation\
%D JUL 21-23 1986
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
A new keyword code for article types has been added, AT22, which is for
editorials.
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Articles in Robotics and Vision
%A Kunwoo Lee
%A Daniel A. Tortorelli
%T Computer-aided Design of Robotic Manipulators
%J MAG39
%P 139-146
%K AI07
%A Ho Bin
%T Inputting Constructive Solid Geometry Representations Directly from 2D
Orthographic Engineering Drawings
%J MAG39
%P 147-155
%K AA05
%A T. H. Richards
%A G. C. Onwubolu
%T Automatic Interpretation of Engineering Drawings for 3D Surface
Representation in CAD
%J MAG39
%P 156-160
%K AA05
%A J. S. Arora
%A G. Baenziger
%T Uses of Artificial Intelligence in Design Optimization
%J Computer Methods in Mechanics and Engineering
%V 54
%N 3
%D MAR 1986
%P 303-324
%K AA05
%A V. N. Burkov
%A V. V. Tayganov
%T Adaptive Functioning Mechanisms of Active Systems. I. Active
Identification and Progressive Mechanisms
%J MAG40
%P 1141-1146
%K AA20 AI09 AI04 AI08 AI13
%A A. A. Zlatopolskii
%T Image Segmentation along Discontinuous Boundaries
%J MAG40
%P 1160-1167
%K AI06
%A E. B. Yanovskaya
%T Axiomatic Characterization of the Maxmin and the Lexicographic Maxmin
Solution in Bargaining Schemes
%J MAG40
%P 1177-1185
%K AI02 AI03 AA11
%A Yu. V. Malyshenko
%T Estimating and Minimizing Diagnostic Information when Troubleshooting an
Analog Device
%J MAG40
%P 1192-1195
%K AA04 AA21
%A G. Hirzinger
%T Robot Systems Completely Based on Sensory Feedback
%J MAG41
%P 105-109
%K AI07 AI06
%A Y. Y. Hung
%A S. K. Cheng
%A N. K. Loh
%T A Computer Vision Technique for Surface Curvature Gaging with
Projected Grating
%J MAG41
%P 158-161
%K AI07 AI06
%A Zvi Galil
%T Optimal Parallel Algorithms for String Matching
%J Information and Control
%V 67
%N 1-3
%D 1985
%P 144-157
%K O06
%A E. Tanic
%T Urban Planning and Artificial Intelligence - The Urbys System
%J Computers, Environment and Urban Systems
%V 10
%N 3-4
%D 1986
%P 135-146
%K AA11
%A B. M. Shtilman
%T A Formal Linguistic Model for Solving Discrete Optimization Problems. II.
The Language of Zones, Translations and the Boundary Problem
%J MAG42
%P 17-28
%K AI02 AA05
%A V. A. Abramov
%A A. I. Piskunov
%A Yu. T. Rubanik
%T A Modification to the Bellman-Zadeh Multistep Procedure for Decision Making
under Fuzzy Conditions for Microelectronic Systems
%J MAG42
%P 143-151
%K AI13 O05
%A James L. Eilbert
%A Richard M. Salter
%T Modeling Neural Networks in Scheme
%J Simulation
%V 46
%D 1986
%N 5
%P 193
%K AI12 T01
%A E. A. Shingareva
%T Semiotic Basis of the Pragmatic Approach to Recognition of the Text Meaning
%J Nauchno-Tekhnicheskaya Informatsiya, Seriya II- Informatisionnye Protessy I
Sistemy
%N 3
%D 1986
%K AI02
%A T. Kim
%A K. Chwa
%T Parallel Algorithms for a Depth First Search and a Breadth First Search
%J International Journal of Computer Mathematics
%V 19
%N 1
%D 1986
%P 39-56
%K AI03 H03
%A Hsu-Pin Wang
%A Richard A. Wysk
%T An Expert System for Machining Data Selection
%J Computers and Industrial Engineering
%V 10
%N 2
%D 1986
%K AA26 AI01
%A L. R. Rabiner
%A F. K. Soong
%T Single-Frame Vowel Recognition Using Vector Quantization with Several
Distance Measures
%J AT&T Technical Journal
%V 64
%N 10
%D DEC 1985
%P 2319-2330
%K AI05
%A A. Pasztor
%T Non-Standard Algorithmic and Dynamic Logics
%J MAG43
%P 59-82
%A Alex P. Pentland
%T On Describing Complex Surface Shapes
%J MAG44
%P 153-162
%K AI06 AI16
%A B. F. Buxton
%A D. W. Murray
%T Optic Flow Segmentation as an Ill-posed and Maximum Likelihood Problem
%J MAG44
%P 163-169
%K AI06
%A M. C. Ibison
%A L. Zapalowski
%A C. G. Harris
%T Direct Surface Reconstruction from a Moving Sensor
%J MAG44
%P 170-176
%K AI06
%A S. A. Lloyd
%T Binary Stereo Algorithm Based on the Disparity-Gradient Limit and Using
Optimization Theory
%J MAG44
%P 177-182
%K AI06
%A Andrew Blake
%A Andrew Zimmerman
%A Greg Knowles
%T Surface Descriptions from Stereo and Shading
%J MAG44
%P 183-196
%K AI06
%A G. D. Sullivan
%A K. D. Baker
%A J. A. D. W. Anderson
%T Use of Multiple Difference-of-Gaussian Filters to Verify Geometric
Models
%J MAG44
%P 192-197
%K AI06
%A J. Hyde
%A J. A. Fullwood
%A D. R. Corrall
%T An Approach to Knowledge Driven Segmentation
%J MAG44
%P 198-205
%K AI06
%A J. Kittler
%A J. Illingworth
%T Relaxation Labelling Algorithm - A Review
%J MAG44
%P 206-216
%K AI06 AT08
%A R. T. Ritchings
%A A. C. F. Colchester
%A H. Q. Wang
%T Knowledge Based Analysis of Carotid Angiograms
%J MAG44
%P 217
%K AI06 AA01
%A W. L. Mcknight
%T Use of Grammar Templates for Software Engineering Environments
%J BOOK42
%P 56-66
%K AA08
%A M. T. Harandi
%A M. D. Lubars
%T A Knowledge Based Design Aid for Software Systems
%J BOOK42
%P 67-74
%K AA08
%A Y. Takefuji
%T AI Based General Purpose Cross Assembler
%J BOOK42
%P 75-85
%K AA08
%A R. N. Cronk
%A D. V. Zelinski
%T ES/AG System Generation Environment for Intelligent Application Software
%J BOOK42
%P 96-100
%K AA08
%A B. Friman
%T X - A Tool for Prototyping Through Examples
%J BOOK42
%P 141-148
%K AA08
%A D. Hammerslag
%A S. N. Kamin
%A R. H. Campbell
%T Tree-Oriented Interactive Processing with an Application to Theorem-Proving
%J BOOK42
%P 199-206
%K AA08 AI11
%A Gudmund Frandsen
%T Logic Programming and Substitutions
%B BOOK43
%P 146-158
%K AI10
%A H. J. Cho
%A C. K. Un
%T On Reducing Computational Complexity in Connected Digit Recognition by the
Frame Labeling Method
%J Proceedings of the IEEE
%V 74
%N 4
%D APR 1986
%P 614-615
%K AI06
%A Vijay Gehlot
%A Y. N. Srikant
%T An Interpreter for SLIPS - An Applicative Language Based on Lambda-Calculus
%J Computer Languages
%V 11
%N 1
%P 1-14
%D 1986
%A Sharon D. Stewart
%T Expert System Invades Military
%J Simulation
%V 46
%N 2
%D FEB 1986
%P 69
%K AI01 AA18
%A F. C. Hadipriono
%A H. S. Toh
%T Approximate Reasoning Models for Consequences on Structural Component Due to
Failure Events
%J Civil Engineering Pract Design Engineering
%V 5
%N 3
%D 1986
%P 155-170
%K AA05 AA21 O04
%A J. Tymowski
%T Industrial Robots
%J Mechanik
%V 58
%N 10
%D 1985
%P 493-496
%K AI07
%X (in Polish with English, Polish, Russian and German summaries)
%A Dieter Schutt
%T Expert Systems - Forerunners of a New Technology
%J Siemens Review
%V 55
%N 1
%D JAN- FEB 1986
%P 30
%K AI01
%A H. Kasamatu
%A S. Omatu
%T Edge-Preserving Restoration of Noisy Images
%J International Journal of Systems Sciences
%V 17
%N 6
%D JUN 1985
%P 833-842
%K AI06
%A A. Pugh
%T Robot Sensors - A Personal View
%B BOOK44
%P 3-14
%K AI07
%A L. J. Pinson
%T Robot Vision - An Evaluation of Imaging Sensors
%B BOOK44
%P 15-66
%K AI07 AI06
%A D. G. Whitehead
%A I. Mitchell
%A P. V. Mellor
%T A Low-Resolution Vision Sensor
%B BOOK44
%P 67-74
%K AI06
%A J. E. Orrock
%A J. H. Garfunkel
%A B. A. Owen
%T An Integrated Vision/Range Sensor
%B BOOK44
%P 75-84
%K AI06
%A S. Baird
%A M. Lurie
%T Precise Robotic Assembly Using Vision in the Hand
%B BOOK44
%P 85-94
%K AI06 AI07 AA26
%A C. Loughlin
%A J. Morris
%T Line, Edge and Contour Following with Eye-in-Hand Vision
%B BOOK44
%P 95-102
%K AI06 AI07
%A P. P. L. Regtien
%A R. F. Wolffenbuttel
%T A Novel Solid-State Colour Sensor Suitable for Robotic Applications
%B BOOK44
%P 103-114
%K AI06 AI07
%A A. Agrawal
%A M. Epstein
%T Robot Eye-in-Hand Using Fibre Optics
%B BOOK44
%P 115-126
%K AI06 AI07
%A P. A. Fehrenbach
%T Optical Alignment of Dual-in-Line Components for Assembly
%B BOOK44
%P 127-138
%K AI06 AI07 AA26 AA04
%A Da Fa Li
%T Semantically Positive Unit Resolution for Horn Sets
%J MAG53
%P 88-91
%K AI10
%X Chinese with English Summary
%A V. S. Neiman
%T Proof Search without Repeated Examination of Subgoals
%J Dokl. Akad. Nauk SSSR
%V 286
%D 1986
%N 5
%P 1065-1068
%K AI11
%X Russian
%A A. Colmerauer
%T About Natural Logic. Automated Reasoning in Nonclassical Logic
%J MAG54
%P 209-231
%K AI11
%A Ulf Grenander
%T Pictures as Complex Systems
%B Complexity, Language and Life: Mathematical Approaches
%S Biomathematics
%V 16
%I Spring
%C Berlin-Heidelberg-New York
%D 1986
%P 62-87
%K AI06
%A G. E. Mints
%T Resolution Calculi for Nonclassical Logics
%J Semiotics and Information Science
%V 25
%P 120-135
%D 1985
%K AI11
%X Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekn. Inform., Moscow
(in Russian)
%A Charles G. Morgan
%T Autologic. Automated Reasoning in Nonclassical Logic
%J MAG54
%P 257-282
%K AI11
%A B. M. Shtilman
%T A Formal Linguistic Model for Solving Discrete Optimization Problems I.
Optimization tools. Language of Trajectories
%J Soviet J. Computer Systems Science
%V 23
%D 1985
%N 5
%P 53-64
%A David Lee
%T Optimal Algorithms for Image Understanding: Current Status and Future Plans
%J J. Complexity
%V 1
%D 1985
%N 1
%P 138-146
%K AI06
%A Douglas B. West
%A Prithviraj Banerjee
%T Partial Matching in Degree-Restricted Bipartite Graphs
%J Proceedings of the Sixteenth Southeastern International Conference on
Combinatorics, Graph Theory and Computing
%P 259-266
%D 1985
%K O06
%A Kyota Aoki
%A N. Mugibayashi
%T Cellular Automata and Coupled Chaos Developed in Lattice Chain of N
Equivalent Switching Elements
%J Phys. Lett. A
%V 114
%D 1986
%N 8-9
%P 425-429
%K AI12
%A R. J. R. Back
%T A Computational Interpretation of Truth Logic
%J Synthese
%V 66
%D 1986
%N 1
%P 15-34
%A Max Michel
%T Computation of Temporal Operators: Automated Reasoning in Nonclassical Logic
%J MAG54
%P 137-152
%K AI11
%A H. J. Warnecke
%A A. Altenhein
%T 2-1/2D Geometry Representation for Collision Avoidance of Industrial
Robots
%J MAG55
%P 269-272
%K AI07
%A W. Jacobi
%T Industrial Robots - Already Sufficiently Flexible for the User
%J MAG55
%P 273-277
%K AI07
%A H. J. Warnecke
%A G. Schiele
%T Measurement Methods for the Determination of Industrial Robot Characteristics
%J MAG55
%P 278-280
%K AI07
%A H. H. Raab
%T Assembly of Multipolar Plug Bonding Boxes in a Programmable Assembly
Cell
%J MAG55
%P 281-283
%K AA26
%A M. Schwiezer
%A E. M. Wolf
%T Strong Increase in Industrial Robot Installation
%J MAG55
%P 286
%K AT04 AI07
%A T. W. Stacey
%A A. E. Middleditch
%T The Geometry of Machining for Computer-aided Manufacture
%J MAG56
%P 83-92
%K AA26
%A S. S. Iyengar
%A C. L. Jorgensen
%A S. U. N. Rao
%A C. R. Weisbin
%T Robot Navigation Algorithms Using Learned Spatial Graphs
%J MAG56
%P 93-100
%K AI07
%A Guy Jamarie
%T On the Use of Time-Varying Inertia Links to Increase the Versatility of
Manipulators
%J MAG56
%P 101-106
%K AI07
%A Eugeny Krustev
%A Ljubomir Lilov
%T Kinematic Path Control of Robot Arms
%J MAG56
%P 107-116
%K AI07
%A Tony Owen
%T Robotics: The Strategic Issues
%J MAG56
%P 117
%K AI07
%A C. H. Cho
%T Expert Systems, Intelligent Devices, Plantwide Control and Self Tuning
Algorithms: An Update on the ISA/86 Technical Program
%J MAG56
%P 69
%K AA20 AI01
%A A. Hutchinson
%T A Data Structure and Algorithm for a Self-Augmenting Heuristic Program
%J The Computer Journal
%P 135-150
%V 29
%N 2
%D APR 1986
%K AI04
%A B. Kosko
%T Fuzzy Cognitive Maps
%J MAG57
%P 65-76
%K AI08 O04
%A C. L. Borgman
%T The User's Mental Model of an Information Retrieval System - An Experiment
on a Prototype Online Catalog
%J MAG57
%P 47-64
%K AI08 AA14
%A D. R. Peachey
%A G. I. Mccalla
%T Using Planning Techniques in Intelligent Tutoring Systems
%J MAG57
%P 77
%K AA07 AI09
%A H. J. Bernstein
%T Determining the Shape of a Convex n-sided Polygon Using 2n+k
Tactile Probes
%J Information Processing Letters
%V 22
%N 5
%D APR 28, 1986
%P 255-260
%K AI07 O06
%A Fu-Nian Ku
%A Jian-Min Hu
%T A New Approach to the Restoration of an Image Blurred by a Linear
Uniform Motion
%J MAG58
%P 20-34
%K AI06
%A Charles F. Neveu
%A Charles R. Dyer
%A Roland T. Chin
%T Two-Dimensional Object Recognition Using Multiresolution Models
%J MAG58
%P 52-65
%K AI06
%A Keith E. Price
%T Hierarchical Matching Using Relaxation
%J MAG58
%P 66-75
%K AI06
%A Angela Y. Wu
%A S. K. Bhaskar
%A Azriel Rosenfeld
%T Computation of Geometric Properties from the Medial Axis Transform in
O(n log n) Time
%J MAG58
%P 76-92
%K AI06 O06
%A H. B. Bidasaria
%T A Method for Almost Exact Histogram Matching for Two Digitized Images
%J MAG58
%P 93-98
%K AI06 O06
%A Azriel Rosenfeld
%T "Expert" Vision Systems: Some Issues
%J MAG58
%P 99-101
%K AI06 AI01
%A John R. Kender
%T Vision Expert Systems Demand Challenging Expert Interactions
%J MAG58
%P 102-103
%K AI06 AI01
%A Makoto Nagao
%T Comment on the Position Paper \*QExpert Vision Systems\*U
%J MAG58
%P 104
%K AI06 AI01
%A Leonard Uhr
%T Workshop on Goal Directed \*QExpert\*U Vision Systems: My Positions
and Comments
%J MAG58
%P 105-108
%K AI06 AI01
%A William B. Thompson
%T Comments on "Expert" Vision Systems: Some Issues
%J MAG58
%P 109-110
%K AI06 AI01
%A V. A. Kovalevsky
%T Dialog on "Expert" Vision Systems: Comments
%J MAG58
%P 111-113
%K AI06 AI01
%A David Sher
%T Expert Systems for Vision Based on Bayes Rule
%J MAG58
%P 114-115
%K AI06 AI01 O04
%A S. Tanimoto
%T The Case for Appropriate Architecture
%J MAG58
%P 116
%K AI06 AI01
%A Azriel Rosenfeld
%T Rosenfeld's Concluding Remarks
%J MAG58
%P 117
%K AI06 AI01
%A Robert M. Haralick
%T "Robot Vision" by Berthold Horn
%J MAG58
%P 118
%K AI06 AI07 AT07
%A K. Shirai
%A K. Mano
%T A Clustering Experiment of the Spectra and the Spectral Changes of Speech
to Extract Phonemic Features
%J MAG58
%P 279-290
%K AI05
%A A. K. Chakravarty
%A A. Shtub
%T Integration of Assembly Robots in a Flexible Assembly System
%B BOOK49
%P 71-88
%K AI07 AA26
%A R. C. Morey
%T Optimizing Versatility in Robotic Assembly Line Design- An Application
%B BOOK49
%P 89-98
%K AI07 AA26
%A J. Grobelny
%T The Simple Linguistic Approach to Optimization of a Plant Layout by Branch
and Bound
%B BOOK49
%P 141-150
%K AA26 AI02 AI03
%A Z. J. Czjikiewicz
%T Justification of the Robots Applications
%B BOOK49
%P 367-376
%K AI07
%A M. J. P. Shaw
%A A. B. Whinston
%T Applications of Artificial Intelligence to Planning and Scheduling in
Flexible Manufacturing
%B BOOK49
%P 223-242
%K AI07
%A S. Subramanyam
%A R. G. Askin
%T An Expert Systems Approach to Scheduling in Flexible Manufacturing Systems
%B BOOK49
%P 243-256
%K AI07
%A Michael K. Brown
%T The Extraction of Curved Surface Features with Generic Range Sensors
%J MAG59
%P 3-18
%K AI06
%A Michael Erdmann
%T Using Backprojections for Fine Motion Planning with Uncertainty
%J MAG59
%P 19-45
%K AI07 AI09 O04
%A Katsushi Ikeuchi
%A H. Keith Nishihara
%A Berthold K. P. Horn
%A Patrick Sobalvarro
%A Shigemi Nagata
%T Determining Grasp Configurations Using Photometric Stereo and the PRISM
Binocular Stereo System
%J MAG59
%P 46-65
%K AI06 AI07
%A Dragan Stokic
%A Miomir Vukobratovic
%A Dragan Hristic
%T Implementation of Force Feedback in Manipulation Robots
%J MAG59
%P 66-76
%K AI07
%A Oussama Khatib
%T Real-Time Obstacle Avoidance for Manipulators and Mobile Robots
%J MAG59
%P 90-98
%K AI07 AA19
%A R. Featherstone
%T A Geometric Investigation of Reach
%J MAG59
%P 99
%K AI07 AT07
%A Maria Virginia Aponte
%T Editing First Order Proofs: Programmed Rules vs. Derived Rules
%J BOOK50
%P 92-98
%K AI11
%A Hellfried Bottger
%T Automatic Theorem-Proving with Configurations
%J Elektron. Informationsverarb. Kybernet.
%V 21
%N 10-11
%P 523-546
%K AI11
%A D. R. Brough
%A M. H. van Emden
%T Dataflow, Flowcharts and \*QLUCID\*U style Programming in Logic
%J BOOK50
%P 252-258
%A Laurent Fribourg
%T Handling Function Definitions Through Innermost Superposition and Rewriting
%B BOOK30
%P 325-344
%A T. Gergely
%A M. Szots
%T Cuttable Formulas for Logic Programming
%J BOOK50
%P 299-310
%A N. N. Leonteva
%T Information Model of the Automatic Translation System
%J Nauchno-Tekhnicheskaya Informatsiya, Seriya II -
Informatsionnye Protsessy I Sistemy
%N 10
%D 1985
%P 22-28
%X in Russian
------------------------------
End of AIList Digest
********************
∂29-Sep-86 0351 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #202
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 29 Sep 86 03:51:45 PDT
Date: Sun 28 Sep 1986 22:39-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #202
To: AIList@SRI-STRIPE
AIList Digest Monday, 29 Sep 1986 Volume 4 : Issue 202
Today's Topics:
Bibliography - Recent Reports
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Recent Reports
%A Ru-qian Lu
%T Expert Union: United Service of Distributed Expert Systems
%R 85-3
%I University of Minnesota-Duluth
%C Duluth, Minnesota
%D June, 1985
%K H03 AI01
%X A scheme for connecting expert systems in a network called an {\nit
expert union} is described. Consultation scheduling algorithms used to
select the appropriate expert(s) to solve problems are proposed, as
are strategies for resolving contradictions.
%A J. C. F. M. Neves
%A G. F. Luger
%A L. F. Amaral
%T Integrating a User's Knowledge into a Knowledge Base Using a Logic
Based Representation
%I University of New Mexico
%R CS85-2
%K AA08 AI10
%A J. C. F. M. Neves
%A G. F. Luger
%T An Automated Reasoning System for Presupposition Analysis
%I University of New Mexico
%R CS85-3
%K AI16
%A J. C. F. M. Neves
%A G. F. Luger
%A J. M. Carvalho
%T A Formalism for Views in a Logic Data Base
%I University of New Mexico
%R CS85-4
%K AA08
%A Franz Winkler
%T A Note on Improving the Complexity of the Knuth-Bendix Completion
Algorithm
%I University of Delaware
%R 85-04
%K AI14
%A Claudio Gutierrez
%T An Integrated Office Environment Under the AI Paradigm
%I University of Delaware
%R 86-03
%K AA06
%A Amir M. Razi
%T An Empirical Study of Robust Natural Language Processing
%I University of Delaware
%R 86-08
%K AI02
%A John T. Lund
%T Multiple Cause Identification in Diagnostic Problem Solving
%I University of Delaware
%R 86-11
%K AA05 AA21
%A D. Nau
%A T.C. Chang
%T Hierarchical Representation of Problem-Solving Knowledge in a Frame-Based
Process Planning System
%I Production Automation Project, University of Rochester
%R TM-50
%C Rochester, New York
%K AA26
%T INEXACT REASONING IN PROLOG-BASED EXPERT SYSTEMS
%A Koenraad G. Lecot
%R CSD-860053
%I University of California, Los Angeles
%K AI01 O04 T02
%$ 13.75
%X Expert systems are only worthy of their name if they can cope in a
consistent and natural way with the uncertainty and vagueness that is
inherent to real world expertise. This thesis explores the current
methodologies, both in the light of their acceptability and of their
implementation in the logic programming language Prolog. We treat in depth
the subjective Bayesian approach to inexact reasoning and describe a
meta-level implementation in Prolog. This probabilistic method is compared
with an alternative theory of belief used in Mycin. We describe an
implementation of Mycin's consultation phase. We argue further that the
theory of fuzzy logic is more adequate to describe the uncertainty and
vagueness of real world situations. Fuzzy logic is put in contrast with
the probabilistic approaches and an implementation strategy is described.
%T DISTRIBUTED DIAGNOSIS IN CAUSAL MODELS WITH CONTINUOUS VARIABLES
%A Judea Pearl
%R CSD-860051
%I University of California, Los Angeles
%$ 1.50
%K O04 H03 AA21
%X We consider causal models in which the variables form a linearly coupled
hierarchy, and are subject to Gaussian sources of noise. We show that if
the number of circuits in the hierarchy is small, the impact of each new
piece of evidence can be viewed as a perturbation that propagates through a
network of processors (one per variable) by local communication. This mode
of diagnosis admits flexible control strategies and facilitates the
generation of intuitively meaningful explanations.
%T RELAXATION PROBLEM SOLVING
(with input to Chinese input problem)
%A Kam Pui Chow
%I University of California, Los Angeles
%R CSD-860058
%$ 12.00
%K AI02
%X Two fundamental problem solving techniques are introduced to help automate
the use of relaxation: multilevel frameworks and constraint generation.
They are closely related to iterative relaxation and subproblem relaxation.
.sp 1
In multilevel problem solving, the set of constraints is partitioned
vertically into different levels. Lower level constraints generate
possible solutions while higher level constraints prune the solutions to
reduce the combinatorial explosion. Subproblem relaxation at first relaxes
the high level constraints; the solution is then improved by strengthening
the relaxed constraints.
.sp 1
The constraint generation technique uses iterative relaxation to generate a
set of constraints from a given model. This set of constraints with a
constraint interpreter form an expert system. This is an improvement over
most existing expert systems which require experts to write down their
expertise in rules.
.sp 1
These principles are illustrated by applying them to the Chinese input
problem, which is to transform a phonetic spelling, without word breaks, of
a Chinese sentence into the corresponding Chinese characters. Three
fundamental issues are studied: segmentation, homophone analysis, and
dictionary organization. The problem is partitioned into the following
levels: phonetic spelling, word, and grammar. The corresponding
constraints are legal spellings, legal words, and legal syntactic
structures. Constraints for syntactic structure are generated from a
Chinese grammar.
%T RELAXATION PROCESSES: THEORY, CASE STUDIES AND APPLICATIONS
%A Ching-Tsun Chou
%R CSD-860057
%$ 6.25
%I University of California, Los Angeles
%K O02 T02 AA08
%X Relaxation is a powerful problem-solving paradigm in coping with problems
specified using constraints. In this Thesis we present a study of the
nature of relaxation processes. We begin with identifying certain typical
problems solvable by relaxation. Motivated by these concrete examples, we
develop a formal theory of relaxation processes and design the General
Relaxation Semi-Algorithm for solving general Relaxation Problems. To
strengthen the theory, we do case studies on two relaxation-solvable
problems: the Shortest-Path Problem and Prefix Inequalities. The principal
results of these studies are polynomial-time algorithms for both problems.
The practical usefulness of relaxation is demonstrated by implementing a
program called TYPEINF which employs relaxation techniques to
automatically infer types for Prolog programs. Finally we indicate some
possible directions of future research.
%A J. R. Endsor
%A A. Dickinson
%A R. L. Blumenthal
%T Describe - An Explanation Facility for an Object Based System
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K AI01 O01
%A Kai-Fu Lee
%T Incremental Network Generation in Template-Based Word Recognition
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K AI05
%A J. Quinlan
%T A Comparative Analysis of Computer Architectures for Production
System Machines
%I Carnegie Mellon Computer Science Department
%D MAY 1985
%K AI01 H03 OPS5
%A M. Boggs
%A J. Carbonell
%A M. Kee
%A I. Monarch
%T Dypar-I: Tutorial and Reference Manual
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K AI01 AI02 Franz Lisp
%A Paola Giannini
%T Type Checking and Type Deduction Techniques for Polymorphic Programming
Languages
%I Carnegie Mellon Computer Science Department
%D DEC 1985
%K O02 lambda-calculus let construct
%A M. Dyer
%A M. Flowers
%A S. Muchnick
%T Lisp/85 User's Manual
%I University of Kansas, Computer Science Department
%R 77-4
%K T01
%A M. Flowers
%A M. DYer
%A S. Muchnick
%T LISP/85 Implementation Report
%I University of Kansas, Computer Science Department
%R 78-1
%K T01
%A N. Jones
%A S. Muchnick
%T Flow Analysis and Optimization of LISP-like Structures
%I University of Kansas, Computer Science Department
%R 78-2
%K T01
%A U. Pleban
%T The Standard Semantics of a Subset of SCHEME, A Dialect of LISP
%I University of Kansas, Computer Science Department
%R 79-3
%K T01 O02
%A S. Muchnick
%A U. Pleban
%T A Semantic Comparison of LISP and SCHEME
%I University of Kansas, Computer Science Department
%R 80-3
%K T01 O02
%A M. Jones
%T The PEGO Acquisition System Implementation Report
%I University of Kansas, Computer Science Department
%R 80-4
%A Gary Borchardt
%A Z. Bavel
%T CLIP, Computer Language for Idea Processing
%I University of Kansas, Computer Science Department
%R 81-4
%A Marek Holynski
%A Brian R. Gardner
%A Rafail Ostrovsky
%T Toward an Intelligent Computer Graphics System
%I Boston University, Computer Science Department
%R BUCS Tech Report #86-003
%D JAN 1986
%K T01 AA16
%A Joyce Friedman
%A Carol Neidle
%T Phonological Analysis for French Dictation: Preliminaries to an Intelligent
Tutoring System
%I Boston University, Computer Science Department
%R BUCS Tech Report #86-004
%D APR 1986
%K AI02 AA07
%A Pawel Urzyczyn
%T Logics of Programs with Boolean Memory
%I Boston University, Computer Science Department
%R BUCS Tech Report #86-006
%D APR 1986
%K AI16
%A Chua-Huang
%A Christian Lengauer
%T The Derivation of Systolic Implementations of Programs
%R TR-86-10
%I Department of Computer Sciences, University of Texas at Austin
%D APR 1986
%K AA08 AA04 H03 H02
%A E. Allen Emerson
%A Chin-Laung Lei
%T Model Checking in the Propositional Mu-Calculus
%R TR-86-06
%I Department of Computer Sciences, University of Texas at Austin
%D FEB 1986
%K O02 AA08
%A R. D. Lins
%T On the Efficiency of Categorical Combinators as a Rewriting System
%D NOV 1985
%R No 34
%I University of Kent at Canterbury, Computing Laboratory
%K AI11 AI14
%A R. D. Lins
%T A Graph Reduction Machine for Execution of Categorical Combinators
%D NOV 1985
%R No 36
%I University of Kent at Canterbury, Computing Laboratory
%A S. J. Thompson
%T Proving Properties of Functions Defined on Lawful Types
%D MAY 1986
%R No 37
%I University of Kent at Canterbury, Computing Laboratory
%K AA08 AI11
%A V. A. Saraswat
%T Problems with Concurrent Prolog
%D JAN 1986
%I Carnegie Mellon University, Department of Computer Science
%K T02 H03
%A K. Shikano
%T Text-Independent Speaker Recognition Experiments Using Codebooks in Vector
Quantization
%D JAN 1986
%I Carnegie Mellon University
%K AI05
%A S. Nakagawa
%T Speaker Independent Phoneme Recognition in Continuous Speech by
a Statistical Method and a Stochastic Dynamic Time Warping Method
%D JAN 1986
%I Carnegie Mellon University
%K AI05
%A F. Hsu
%T Two Designs of Functional Units for VLSI Based Chess Machines
%D JAN 1986
%I Carnegie Mellon University
%K AA17 H03
%X Brute-force chess automata searching 8 plies (4 full moves) or deeper have
been dominating the computer chess scene in recent years and have reached
master-level performance. One interesting question is whether 3 or 4 additional
plies coupled with an improved evaluation scheme will bring forth world
championship level performance. Assuming an optimistic branching ratio of 5, a
speedup of at least one hundred fold over the best current chess automaton
would be necessary to reach the 11 or 12 plies per move range.
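[The speedup figure in the abstract above follows from simple exponential
growth: reaching k extra plies at an effective branching ratio b multiplies
the search work by roughly b**k. A minimal sketch of that arithmetic (the
function name is ours, not from the report):]

```python
# Back-of-the-envelope check of the abstract's claim: with branching
# ratio b, each additional ply multiplies search work by b, so k extra
# plies cost roughly b**k times more.

def extra_work(branching_ratio, extra_plies):
    """Approximate factor by which full-width search cost grows."""
    return branching_ratio ** extra_plies

# With the abstract's optimistic branching ratio of 5:
print(extra_work(5, 3))  # 8 -> 11 plies: 125x, i.e. "at least one hundred fold"
print(extra_work(5, 4))  # 8 -> 12 plies: 625x
```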
%A Y. Iwasaki
%A H. A. Simon
%T Theories of Causal Ordering: Reply to de Kleer and Brown
%D FEB 1986
%I Carnegie Mellon University
%K Causality in Device Behavior AA04
%A H. Saito
%A M. Tomita
%T On Automatic Composition of Stereotypic Documents in Foreign Languages
%D DEC 1985
%I Carnegie Mellon University
%K AI02
%A T. Imielinski
%T Query Processing in Deductive Databases with Incomplete Information
%R DCS-TR-177
%I Rutgers University, Laboratory for Computer Science Research
%K AA09 AI10 Horn Clauses Skolem functions
%A T. Imielinski
%T Abstraction in Query Processing
%R DCS-TR-178
%I Rutgers University, Laboratory for Computer Science Research
%K AA09 AI11
%A T. Imielinski
%T Results on Translating Defaults to Circumscription
%R DCS-TR-179
%I Rutgers University, Laboratory for Computer Science Research
%K AA09
%A T. Imielinski
%T Transforming Logical Rules by Relational Algebra
%R DCS-TR-180
%I Rutgers University, Laboratory for Computer Science Research
%K AA09 AI10 Horn clauses
%A T. Imielinski
%T Automated Deduction in Databases with Incomplete Information
%R DCS-TR-181
%I Rutgers University, Laboratory for Computer Science Research
%K AA09
%A B. A. Nadel
%T Representation-Selection for Constraint Satisfaction Problems: A Case
Study Using n-queens
%R DCS-TR-182
%I Rutgers University, Laboratory for Computer Science Research
%K AI03 AA17
%A B. A. Nadel
%T Theory-Based Search Order Selection for Constraint Satisfaction
Problems
%R DCS-TR-183
%I Rutgers University, Laboratory for Computer Science Research
%K AI03
%A C. V. Srinivasan
%T Problems, Challenges and Opportunities in Naval Operational Planning
%R DCS-TR-187
%I Rutgers University, Laboratory for Computer Science Research
%K AI09 AA18
%A M. A. Bienkowski
%T An Example of Structured Explanation Generation
%I Princeton University Computer Science Department
%D NOV 1985
%K O01
%A Bruce G. Buchanan
%T Some Approaches to Knowledge Acquisition
%I Stanford University Computer Science Department
%R STAN-CS-85-1076
%D JUL 1985
%$ $5.00
%K AI16
%A John McCarthy
%T Applications of Circumscription to Formalizing Common Sense Knowledge
%I Stanford University Computer Science Department
%R STAN-CS-85-1077
%D SEP 1985
%$ $5.00
%K AI15
%A Stuart Russell, Esq.
%T The Compleat Guide to MRS
%I Stanford University Computer Science Department
%R STAN-CS-85-1080
%D JUN 1985
%$ $15.00
%K AI16
%A Jeffrey S. Rosenschein
%T Rational Interaction: Cooperation among Intelligent Agents
%I Stanford University Computer Science Department
%R STAN-CS-85-1081
%D OCT 1985
%$ $15.00
%K AI16
%A Allen Van Gelder
%T A Message Passing Framework for Logical Query Evaluation
%I Stanford University Computer Science Department
%R STAN-CS-85-1088
%D DEC 1985
%$ $5.00
%K AI10 Horn Clauses relational data bases H03 AA09 acyclic database schemas
%A Jeffrey D. Ullman
%A Allen Van Gelder
%T Parallel Complexity of Logical Query Programs
%I Stanford University Computer Science Department
%R STAN-CS-85-1089
%D DEC 1985
%$ $5.00
%K AI10 H03 AA09
%A Kaizhi Yue
%T Constructing and Analyzing Specifications of Real World Systems
%I Stanford University Computer Science Department
%R STAN-CS-86-1090
%D SEP 1985
%K AI01 AA08
%X available in microfilm only
%A Li-Min Fu
%T Learning Object-Level and Meta-Level Knowledge in Expert Systems
%I Stanford University Computer Science Department
%R STAN-CS-86-1091
%D NOV 1985
%$ $15.00
%K jaundice AI04 AI01 AA01 condenser
%A Devika Subramanian
%A Bruce G. Buchanan
%T A General Reading List for Artificial Intelligence
%I Stanford University Computer Science Department
%R STAN-CS-86-1093
%D DEC 1985
%$ 10.00
%K AT21
%X bibliography for students studying for AI qualifying exam at Stanford
%A Bruce G. Buchanan
%T Expert Systems: Working Systems and the Research Literature
%I Stanford University Computer Science Department
%R STAN-CS-86-1094
%D DEC 1985
%$ 10.00
%K AT21 AI01
%A Jiawei Han
%T Pattern-Based and Knowledge-Directed Query Compilation for Recursive Data
Bases
%I The University of Wisconsin-Madison Computer Sciences Department
%R TR 629
%D JAN 1986
%$ 5.70
%K AA09 AI01 AI09
%X Abstract: Expert database systems (EDS's) comprise an interesting class of
computer systems which represent a confluence of research in artificial
intelligence, logic, and database management systems. They involve
knowledge-directed processing of large volumes of shared information and
constitute a new generation of knowledge management systems.
Our research is on the deductive augmentation of relational database
systems, especially on the efficient realization of recursion. We study
the compilation and processing of recursive rules in relational database
systems, investigating two related approaches: pattern-based recursive rule
compilation and knowledge-directed recursive rule compilation and planning.
Pattern-based recursive rule compilation is a method of compiling and processing
recursive rules based on their recursion patterns. We classify recursive rules
according to their processing complexity and develop three kinds of algorithms
for compiling and processing different classes of recursive rules: transitive
closure algorithms, SLSR wavefront algorithms, and stack-directed compilation
algorithms. These algorithms, though distinct, are closely related. The more
complex algorithms are generalizations of the simpler ones, and all apply the
heuristics of performing selection first and utilizing previous processing
results (wavefronts) in reducing query processing costs. The algorithms are
formally described and verified, and important aspects of their behavior are
analyzed and experimentally tested.
To further improve search efficiency, a knowledge-directed recursive rule
compilation and planning technique is introduced. We analyze the issues raised
for the compilation of recursive rules and propose to deal with them by
incorporating functional definitions, domain-specific knowledge, query
constants, and a planning technique. A prototype knowledge-directed relational
planner, RELPLAN, which maintains a high level user view and query interface,
has been designed and implemented, and experiments with the prototype are
reported and illustrated.
%A A. P. Anantharman
%A Sandip Dasgupta
%A Tarak S. Goradia
%A Prasanna Kaikini
%A Chun-Pui Ng
%A Murali Subbarao
%A G. A. Venkatesh
%A Sudhanshu Verma
%A Kumar A. Vora
%T Experience with Crystal, Charlotte and Lynx
%I The University of Wisconsin-Madison Computer Sciences Department
%R TR 630
%D FEB 1986
%K H03 T02 Waltz constraint-propagation
%X Abstract: This paper describes the most recent implementations of
distributed algorithms at Wisconsin that use the Crystal multicomputer, the
Charlotte operating system, and the Lynx language. This environment is an
experimental testbed for design of such algorithms. Our report is meant to
show the range of applications that we have found reasonable in such an
environment and to give some of the flavor of the algorithms that have been
developed. We do not claim that the algorithms are the best possible for
these problems, although they have been designed with some care. In
several cases they are completely new or represent significant
modifications of existing algorithms. We present distributed
implementations of B-trees, systolic arrays, Prolog tree search, the
travelling salesman problem, incremental spanning trees, nearest-neighbor
search in k-d trees, and the Waltz constraint-propagation algorithm. Our
conclusion is that the environment, although only recently available, is
already a valuable resource and will continue to grow in importance in
developing new algorithms.
%A William J. Rapaport
%T SNePS Considered as a Fully Intensional Propositional
Semantic Network
%R TR 85-15
%I Univ. at Buffalo (SUNY), Dept. of Computer Science
%D October 1985
%K Semantic Network Processing System, syntax, semantics,
intensional knowledge representation system, cognitive
modeling, database management, pattern recognition, expert
systems, belief revision, computational linguistics
aa01 ai09 ai16
%O 46 pages
%X Price: $1.00 North America, $1.50 Other
%A William J. Rapaport
%T Logic and Artificial Intelligence
%R TR 85-16
%I University at Buffalo (SUNY), Dept. of Computer Science
%D November 1985
%K logic, propositional logic, predicate logic, belief systems AA16
%O 44 pages
%X Price: $1.00 North America, $1.50 Other
%A William J. Rapaport
%T Review of "Ethical Issues in the Use of Computers"
%R TR 85-17
%I University at Buffalo, Dept. of Computer Science
%D November 1985
%K computer ethics O06
%O 6 pages
%X Price: $1.00 North America, $1.50 Other
%A Radmilo M. Bozinovic
%T Recognition of Off-line Cursive Handwriting:
a Case of Multi-level Machine Perception
%I Univ. at Buffalo (SUNY), Dept. of Computer Science
%D March 1985
%R TR 85-01
%K Cursive script recognition, artificial intelligence,
computer vision, language perception, language understanding
%O 150 pages
%X Price: $2.00 North America, $3.00 other
%A R. Hookway
%T Verification of Abstract Types Whose Representations Share Storage
%D April 1980
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-80-02
%K AA09
%$ $2.00
%A G. Ernst
%A J. K. Navlakha
%A W. F. Ogden
%T Verification of Programs with Procedure-Type Parameters
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-80-11
%D 1980
%K AA09
%$ $2.00
%A G. Ernst
%A F. T. Bradshaw
%A R. J. Hookway
%T A Note on Specifications of Concurrent Processes
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-81-01
%D FEB 1981
%K AA09
%$ $2.00
%A J. Franco
%T The Probabilistic Analysis of the Pure Literal Heuristic in Theorem
Proving
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-81-04
%D 1981
%K AI03 AI11
%$ $2.00
%A E. J. Branagan
%T An Interactive Theorem Prover Verification
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-81-09
%D AUG 1981
%K AI11
%$ $2.00
%A G. W. Ernst
%T A Method for Verifying Concurrent Processes
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-82-01
%D FEB 1982
%K AA09
%$ $2.00
%A Chang-Sheng Yang
%T A Computer Intelligent System for Understanding Chinese Homonyms
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-83-10
%D AUG 1983
%K AI02
%$ $2.00
%A G. Ernst
%T Extensions to Methods for Learning Problem Solving Strategies
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-84-02
%D MAY 1984
%K AI04
%$ $2.00
%A R. J. Hookway
%T Analysis of Asynchronous Circuits Using Temporal Logic
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-84-07
%D JUL 1984
%K AA04
%$ $2.00
%A Leon Sterling
%T Explaining Explanations Clearly
%I Case Western Reserve University, Computer Engineering and Science Department
%R CES-85-03
%D MAY 1985
%K O01
%$ $2.00
------------------------------
End of AIList Digest
********************
∂06-Oct-86 0020 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #203
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 6 Oct 86 00:20:44 PDT
Date: Sun 5 Oct 1986 21:43-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #203
To: AIList@SRI-STRIPE
AIList Digest Monday, 6 Oct 1986 Volume 4 : Issue 203
Today's Topics:
Query - Prolog Chemistry Systems & RuleMaster & AI Graduate Programs &
Expert Systems and Deep Knowledge & Textbook for ES Applications &
Communications Expert Systems & Generic Expert System &
Integrated Inference Machines & Byte Prolog & Digitalk Smalltalk,
AI Tools - Digitalk Smalltalk & Line Expert & XLISP & OPS5,
Vision - Face Recognition,
Logic Programming - TMS Loops
----------------------------------------------------------------------
Date: Sun, 28 Sep 86 10:46:15 -0200
From: Jacob Levy <jaakov%wisdom.bitnet@WISCVM.WISC.EDU>
Subject: Chemistry systems & PROLOG
Has anyone programmed or used a logic programming based system for
use in Chemistry? I am especially interested in organic synthesis planning
systems. Do you know of such systems written in other languages? Any help,
references and info will be greatly appreciated,
Thanks
Rusty Red (AKA Jacob Levy)
BITNET: jaakov@wisdom
ARPA: jaakov%wisdom.bitnet@wiscvm.ARPA
CSNET: jaakov%wisdom.bitnet@csnet-relay
UUCP: jaakov@wisdom.uucp
------------------------------
Date: Sat, 27 Sep 86 09:15:20 cdt
From: Esmail Bonakdarian <bonak%cs.uiowa.edu@CSNET-RELAY.ARPA>
Subject: RuleMaster
Anybody out there have any comments about RuleMaster? RuleMaster
(a product of the Radian Corporation) is a software tool for
supporting the development of expert systems. I would be grateful
for any information, comments from people who have used this package
(especially on a DOS machine) etc.
If there is enough interest I will collect and post all of the
responses back to AIList.
Thanks,
Esmail
------------------------------
Date: 29 Sep 86 00:29:16 GMT
From: gatech!gitpyr!krubin@seismo.css.gov (Kenny Rubin)
Subject: Differences among Grad AI programs
The following is a request for information about the
differences among the various universities that offer graduate
degrees in AI. I apologize in advance if this topic has received
prior discussion; I have been out of the country for a few months
and did not have access to the net.
The goal of all this is to compile a current profile of
the graduate AI programs at the different universities. Thus, any
information about the different programs such as particular strengths
and weaknesses would be useful. Also, a comparison and/or conclusions
drawn between the various programs would be helpful.
I am essentially interested in the areas of AI that each
university performs research in. For example research pertaining
to Knowledge Representation, Natural Language Processing, Expert
System Development, Learning, Robotics, etc...
Basically, anything that you think potential applicants to
the various universities would like to know would be helpful. Feel
free to comment about the university(ies) that you know best:
- MIT, CMU, Yale, Stanford, UC Berkeley, UCLA, etc...
Please send all response by E-mail to me to reduce net traffic.
If there is sufficient interest, I will post a compiled summary.
Kenneth S. Rubin (404) 894-2348
Center for Man-Machine Systems Research
School of Industrial and Systems Engineering
Georgia Institute of Technology
Post Office Box 35826
Atlanta, Georgia 30332
Majoring with: School of Information and Computer Science
...!{akgua,allegra,amd,hplabs,ihnp4,seismo,ut-ngp}!gatech!gitpyr!krubin
------------------------------
Date: 29 Sep 86 18:43:08 GMT
From: mcvax!kvvax4!rolfs@seismo.css.gov (Rolf Skatteboe)
Subject: Expert systems and deep knowledge
Hello:
For the time being, I'm working on my MSc thesis, whose main goal is to
investigate the combination of a knowledge-based diagnosis system and
the use of mathematical models of gas turbines. I will use these models as
deep knowledge in order to improve the results of the diagnosis system.
The models can be used both for early-warning fault detection and for sensor
verification and test. The models can also be used to evaluate changes in
machine parameters caused by engine degradation.
So far I have found some articles about diagnostic reasoning based on structure
and behavior for digital electronic hardware.
While I'm trying to find the best system structure for a demonstration system,
I would like to get hold on information (articles references, program
examples, and other people's experiences) both on using deep knowledge
in expert systems in general, and the use of mathematical models in
particular.
I hope that someone can help me.
Grethe Tangen
Kongsberg KVATRO, NORWAY
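For concreteness, the basic residual-checking idea described above might be sketched as follows. This is a toy in Python with an invented one-sensor "turbine" model, sensor name, and threshold; a real model-based diagnosis system would of course be far richer.

```python
# Model-based ("deep knowledge") fault detection sketch: compare each
# sensor reading against a mathematical model's prediction and flag
# residuals exceeding a relative threshold. Everything here is invented
# for illustration.

def diagnose(inputs, readings, model, threshold=0.05):
    """Return sensors whose reading deviates from the model prediction."""
    predicted = model(inputs)
    return {
        name: (readings[name], predicted[name])
        for name in readings
        if abs(readings[name] - predicted[name]) > threshold * abs(predicted[name])
    }

# Stand-in "gas turbine" model: exhaust temperature rises linearly with load.
def turbine_model(inputs):
    return {"exhaust_temp": 400 + 3.0 * inputs["load"]}

faults = diagnose({"load": 50}, {"exhaust_temp": 610.0}, turbine_model)
print(faults)  # {'exhaust_temp': (610.0, 550.0)} -- reading off by ~11%
```

The same residual comparison can serve both uses mentioned above: flagged deviations give early fault warnings, and a deviation confined to one sensor (when redundant sensors agree with the model) suggests sensor failure rather than engine degradation.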
------------------------------
Date: 3 Oct 1986 0904-EDT
From: Holger Sommer <SOMMER@C.CS.CMU.EDU>
Subject: Expert system Textbook For Applications
I was asked to develop a course for undergraduate seniors and beginning
graduate students in engineering: an introductory course on expert
system technology with a focus on applications. I am looking for a
suitable introductory textbook at the beginner's level which could help
me get the students familiar with AI in general and expert systems
specifically. Also, if anyone has experience teaching a course for
non-computer-science students in the AI area, I would appreciate your
comments. Please send mail to Sommer@c.cs.cmu.edu
------------------------------
Date: Mon, 29 Sep 86 10:42:27 edt
From: Lisa Meyer <lem%galbp.uucp@CSNET-RELAY.ARPA>
Subject: Request for Info on Expert Systems Development
I am a senior Info & Computer Science major at Georgia Tech. I will be
constructing an Expert System to diagnose communications setups & their
problems for my senior design project at the request of my cooperative
ed. employer. I have only had an introductory course in AI, so a large
part of this project will be spent on researching information on expert
system development.
Any information on:
- Constructing expert systems (especially for diagnostics)
- PC versions of languages suitable for building expert systems
- Public domain expert systems, ES shells, or development tools
- Good books, articles, or references on the subjects listed above
WOULD BE GREATLY APPRECIATED. As the goal of my project is to
construct a working diagnostic expert system and not to learn
everything there is to know about AI, pointers to good sources of
information, copies of applicable source, and information from those
who ARE knowledgeable in the field of AI and Expert System
Construction would be EXTREMELY HELPFUL.
THANKS IN ADVANCE,
Lisa Meyer (404-329-8022)
Atlanta, GA
=====================================================================
Lisa Meyer
Harris / Lanier
Computer R&D (Cooperative Education Program)
Information & Computer Science Major
Georgia Institute of Technology
Ga. Tech Box 30750, Atlanta Ga. 30332
{akgua,akgub,gatech}!galbp!lem
=====================================================================
------------------------------
Date: 30 Sep 86 13:34:16 GMT
From: lrl%psuvm.bitnet@ucbvax.Berkeley.EDU
Subject: Expert System Wanted
Does anyone know of a general purpose expert system available for VM/CMS?
I'm looking for one that would be used on a university campus by a variety
of researchers in different disciplines. Each researcher would feed their
own rules into it.
Also, can anyone recommend readings, conferences, etc. for someone getting
started in this field?
Thanks.
Linda Littleton phone: (814) 863-0422
214 Computer Building bitnet: LRL at PSUVM
Pennsylvania State University
University Park, PA 16802
------------------------------
Date: Tue, 30 Sep 86 0:19:04 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Address???
In a recent summary of the Spang-Robinson Report reference was made to the
company "Integrated Inference Machines". Does anyone have an address for them?
------------------------------
Date: 2 Oct 86 18:21:29 GMT
From: john@unix.macc.wisc.edu (John Jacobsen)
Subject: pd prolog
Does anyone have the public domain prolog package discussed in this month's
BYTE magazine?
John E. Jacobsen
University of Wisconsin -- Madison Academic Computing Center
------------------------------
Date: 4 Oct 86 00:32:11 GMT
From: humu!uhmanoa!todd@bass.nosc.mil (Todd Ogasawara)
Subject: Digitalk Smalltalk for the PC
If anyone out there has played with the version of Smalltalk for the
PC by Digitalk, I'd like to get your opinions. I am especially
interested in the object-oriented version of Prolog that comes with
the package. Thanks..todd
Todd Ogasawara, University of Hawaii
Dept. of Psychology & U. of Hawaii Computing Center
UUCP: {ihnp4,dual,vortex}!islenet!uhmanoa!todd
or: {backbone}!sdcsvax!noscvax!humu!uhmanoa!todd (clyde also feeds humu)
[soon to change to uhccux!todd]
ARPA: humu!uhmanoa!todd@noscvax
** I used to be: ogasawar@nosc.ARPA & ogasawar@noscvax.UUCP
------------------------------
Date: 4 Oct 86 23:56:43 GMT
From: spdcc!dyer@harvard.harvard.edu (Steve Dyer)
Subject: Re: Digitalk Smalltalk for the PC
I have it and am very impressed. Perhaps more convincing, though: I have
a friend who has been intimately involved with Smalltalk development
from the very beginning, and he was also very impressed. It's even more
remarkable because the Digitalk folks didn't license the Smalltalk-80
virtual machine from Xerox; they developed their system from the formal
and not-so-formal specifications of Smalltalk 80 available in the public
domain. Apparently, they can call their system "Smalltalk V" because
"Smalltalk" isn't a trademark of Xerox; only "Smalltalk-80" is.
I haven't played with their Prolog system written in Smalltalk.
--
Steve Dyer
dyer@harvard.HARVARD.EDU
{linus,wanginst,bbnccv,harvard,ima,ihnp4}!spdcc!dyer
------------------------------
Date: 3 Oct 1986 13:30:37 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: Communications Experts
I was curious to find out about Werner Uhrig's question (9/10) relating
to an Infoworld article from Smyrna, Ga., since Georgia is not exactly a
hotbed of AI activity. I spoke to Nat Atwell of Concept
Development Systems about Line Expert ($49.95). It is apparently
an off-line Turbo Prolog application with knowledge about data-set
interfacing, known problems, etc., including the ability to draw
schematics of cables on the screen for you. For more info,
call Nat at (404) 434-4813.
------------------------------
Date: Fri, 26 Sep 86 11:06 PDT
From: JREECE%sc.intel.com@CSNET-RELAY.ARPA
Subject: XLISP Availability
Although XLISP is available on a number of PC bulletin boards, your best bet
for the PC version would be the BIX network run by Byte magazine. It has its
own forum run by the author, David Betz, and you can turn around a message
to him in 1-2 days. Information on how to sign up has been in most of the
recent issues of Byte. Also, the latest version is 1.7, and there is talk
of a compiler coming out in the future.
John Reece
Intel
------------------------------
Date: Mon, 29 Sep 86 0:30:53 BST
From: Fitch@Cs.Ucl.AC.UK
Subject: OPS5 on small machines (re V4 #183)
There is an OPS5 for the IBM PC running UOLISP, from North West Computer
Algorithms. It is the Franz version, slightly modified.
I have run OPS5 on an Atari and on an Amiga; it does not need a very big
system to do some things.
==John Fitch
------------------------------
Date: Mon 29 Sep 86 15:40:24-CDT
From: Charles Petrie <AI.PETRIE@MCC.COM>
Reply-to: Petrie@MCC
Subject: TMS Query Response
More detail on Don Rose's TMS query:
Does anyone know whether the standard algorithms for belief revision
(e.g. dependency-directed backtracking in TMS-like systems) are
guaranteed to halt? That is, is it possible for certain belief networks
to be arranged such that no set of mutually consistent beliefs can be found
(without outside influence)?
There are at least three distinct Doyle-style algorithms. Doyle's doesn't
terminate on unsatisfiable circularities. James Goodwin's algorithm
does. This algorithm is proved correct in "An Improved Algorithm for
Non-monotonic Dependency Net Update", LITH-MAT-R-82-23, Linkoping
Institute of Technology. David Russinoff's algorithm not only halts
given an unsatisfiable circularity, but is guaranteed to find a
well-founded, consistent set of status assignments, even if there are
odd loops, if such a set is possible. There are dependency nets for
which Russinoff's algorithm will properly assign statuses and Goodwin's
may not. An example and proof of correctness for this algorithm is
given in "An Algorithm for Truth Maintenance", AI-068-85,
Microelectronics and Computer Technology Corporation. Also, Doyle made
the claim that an unsatisfiable circularity can be detected if a node is
its own ancestor after finding a valid justification with a NIL status
in the Outlist. Detection of unsatisfiable circularities turns out to be
more difficult than this. This is noted in "A Diffusing Computation for
Truth Maintenance" wherein I give a distributed computation for status
assignment (published in the Proc. 1986 Internat. Conf. on Parallel
Processing, IEEE) that halts on unsatisfiable circularities.
The term "unsatisfiable circularity" was introduced by Doyle and refers
to a dependency network that has no correct status labeling. The term
"odd loop" was introduced by Charniak, Riesbeck, and McDermott in
section 16.7 of "Artificial Intelligence Programming". An equivalent
definition is given by Goodwin. In both, an odd loop refers to a
particular circular path in a dependency net. As Goodwin notes, such
odd loops are a necessary, but not sufficient, condition for an unsatisfiable
circularity.
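The distinction just defined can be made concrete with a small brute-force status labeler. This is a toy of my own in Python; none of the algorithms cited above work this way, since they must avoid exhaustive search.

```python
# Toy labeler for Doyle-style dependency nets. Each node maps to a list
# of justifications (inlist, outlist); a justification is valid when
# every inlist node is IN and every outlist node is OUT. A labeling is
# consistent when each node is IN exactly if it has a valid
# justification. (Well-foundedness of support is a further condition,
# not checked here.)
from itertools import product

def consistent_labelings(net):
    nodes = sorted(net)
    found = []
    for bits in product([False, True], repeat=len(nodes)):
        status = dict(zip(nodes, bits))          # True = IN, False = OUT
        if all(
            status[n] == any(
                all(status[i] for i in inl) and not any(status[o] for o in outl)
                for inl, outl in net[n]
            )
            for n in nodes
        ):
            found.append(status)
    return found

# Even loop: A is IN if B is OUT, and vice versa -- satisfiable.
even_loop = {"A": [((), ("B",))], "B": [((), ("A",))]}
# Unsatisfiable circularity: A is IN exactly when A itself is OUT.
odd_loop = {"A": [((), ("A",))]}

print(len(consistent_labelings(even_loop)))  # 2
print(len(consistent_labelings(odd_loop)))   # 0
```

Here the even loop admits two consistent labelings while the self-negating odd loop admits none, matching the observation that odd loops are necessary, but not in general sufficient, for an unsatisfiable circularity.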
All of the algorithms mentioned above are for finding a proper set of
status assignments for a dependency network. A distinct issue is the
avoidance of the creation of odd loops, which may introduce
unsatisfiable circularities, by Doyle-style dependency-directed
backtracking. Examples of the creation of such odd loops, and algorithms
to avoid them, are described in my technical reports on DDB. Michael
Reinfrank's report on the KAPRI system also notes the possibility of
odd loops created by DDB. (DDB references on request to avoid an even
longer note.)
Charles Petrie
PETRIE@MCC
------------------------------
Date: 3 Oct 1986 13:36:55 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: Computer Vision
Peter Snilovicz recently asked about recognizing faces. I saw a really
interesting presentation on the subject, about Cortical Thought Theory, by
Rick Routh, ex-AFIT, now with the Army at Fort Gordon. He can be reached
at (404) 791-3011.
------------------------------
End of AIList Digest
********************
∂06-Oct-86 0210 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #204
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 6 Oct 86 02:10:03 PDT
Date: Sun 5 Oct 1986 22:18-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #204
To: AIList@SRI-STRIPE
AIList Digest Monday, 6 Oct 1986 Volume 4 : Issue 204
Today's Topics:
Seminars - Connectionist Networks (UPenn) &
Automatic Class Formation (SRI) &
Computers are not Omnipotent (CMU) &
Automating Diagnosis (CMU) &
Temporal Logic (MIT) &
Program Transformations and Parallel Lisp (SU) &
Temporal Theorem Proving (SU) &
Efficient Unification of Quantified Terms (MIT) &
Planning Simultaneous Actions (BBN) &
Cognitive Architecture (UPenn)
----------------------------------------------------------------------
Date: Mon, 29 Sep 86 14:52 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Connectionist Networks (UPenn)
CONNECTIONIST NETWORKS
Jerome A. Feldman
Computer Science Department
University of Rochester
There is a growing interest in highly interconnected networks of very simple
processing elements within artificial intelligence circles. These networks are
referred to as Connectionist Networks and are playing an increasingly important
role in artificial intelligence and cognitive science. This talk briefly
discusses the motivation behind pursuing the connectionist approach, and
discusses a connectionist model of how mammals are able to deal with visual
objects and environments. The problems addressed include perceptual
constancies, eye movements and the stable visual world, object descriptions,
perceptual generalizations, and the representation of extrapersonal space.
The development is based on an action-oriented notion of perception. The
observer is assumed to be continuously sampling the ambient light for
information of current value. The central problem of vision is taken to be
categorizing and locating objects in the environment. The critical step in
this process is the linking of visual information to symbolic object
descriptions, i.e., indexing. The treatment focuses on the different
representations of information used in the visual system. The model employs
four representation frames that capture information in the following forms:
retinotopic, head-based, symbolic, and allocentric.
The talk ends with a discussion of how connectionist models are being realized
on existing architectures such as large multiprocessors.
Thursday, October 2, 1986
Room 216 - Moore School
3:00 - 4:30 p.m.
Refreshments Available
Faculty Lounge - 2:00 - 3:00 p.m.
------------------------------
Date: Wed 1 Oct 86 11:46:40-PDT
From: Amy Lansky <LANSKY@SRI-VENICE.ARPA>
Subject: Seminar - Automatic Class Formation (SRI)
PROBABILISTIC PREDICTION THROUGH AUTOMATIC CLASS FORMATION
Peter Cheeseman (CHEESEMAN@AMES-PLUTO)
NASA Ames Research Center
11:00 AM, MONDAY, October 6
SRI International, Building E, Room EJ228
A probabilistic expert system is a set of probabilistic connections
(e.g. conditional or joint probabilities) between the known variables.
These connections can be used to make (conditional) probabilistic
predictions for variables of interest given any combination of known
variable values. Such systems suffer a major computational problem:
once the probabilistic connections form a complex interconnected
network, the cost of performing the necessary probability calculations
becomes excessive. One approach to reducing the computational
complexity is to introduce new "variables" (hidden causes or dummy
nodes) that decouple the interactions between the variables. Judea
Pearl has described an algorithm for introducing sufficient dummy
nodes to create a tree structure, provided the probabilistic
connections satisfy certain (strong) restrictions. This talk will
describe a procedure for finding only the significant "hidden causes",
one that not only leads to a computationally simple procedure but also
subsumes all the significant interactions between the variables.
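As a miniature illustration of the decoupling idea (my own example, not Cheeseman's procedure): a hidden class variable C with k values lets n conditionally independent observed variables be stored in O(nk) table entries instead of an explicit O(2^n) joint, and any joint entry is recovered by summing over the hidden cause: P(x1, x2) = sum_c P(c) P(x1|c) P(x2|c).

```python
# Hidden-cause decomposition of a joint distribution: all numbers here
# are invented. C takes values c0/c1; X1 and X2 take values t/f.
p_c = {"c0": 0.6, "c1": 0.4}
p_x1 = {"c0": {"t": 0.9, "f": 0.1}, "c1": {"t": 0.2, "f": 0.8}}
p_x2 = {"c0": {"t": 0.7, "f": 0.3}, "c1": {"t": 0.1, "f": 0.9}}

def joint(x1, x2):
    # P(x1, x2) = sum over the hidden cause c of P(c) P(x1|c) P(x2|c)
    return sum(p_c[c] * p_x1[c][x1] * p_x2[c][x2] for c in p_c)

# The decomposition is a proper distribution: the four entries sum to 1.
total = sum(joint(a, b) for a in "tf" for b in "tf")
print(round(total, 10))  # 1.0
```

With the hidden node in place, X1 and X2 never interact directly; all their correlation flows through C, which is exactly the tree-structuring effect of Pearl's dummy nodes mentioned above.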
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
------------------------------
Date: 28 Sep 1986 1228-EDT
From: David A. Evans <DAE@C.CS.CMU.EDU>
Subject: Seminar - Computers are not Omnipotent (CMU)
PHILOSOPHY COLLOQUIUM ANNOUNCEMENT:
COMPUTERS ARE NOT OMNIPOTENT
David Harel
Weizmann Institute
and
Carnegie Mellon University
Monday, October 6 4:00 p.m.
Porter Hall 223D
In April, 1984, TIME magazine quoted a computer professional as saying:
"Put the right kind of software into a computer and it will do
whatever you want it to. There may be limits on what you can
do with the machines themselves, but there are no limits on
what you can do with the software."
In the talk we shall disprove this contention outright, by exhibiting a
wide array of results obtained by mathematicians and computer scientists
between 1935 and 1983. Since the results point to inherent limitations of
any kind of computing device, even with unlimited resources, they appear
to have interesting philosophical implications concerning our own
limitations as entities with finite mass.
------------------------------
Date: 29 September 1986 2247-EDT
From: Masaru Tomita@A.CS.CMU.EDU
Subject: Seminar - Automating Diagnosis (CMU)
Date: 10/7 (Tuesday)
Time: 3:30
Place: WeH 5409
Some AI Applications at Digital
Automating Diagnosis: A case study
Neil Pundit
Kamesh Ramakrishna
Artificial Intelligence Applications Group
Digital Equipment Corporation
77 Reed Road (HLO2-3/M10)
Hudson, Massachusetts, 01749
The Artificial Intelligence Applications Group at Digital is engaged in the
development of expert systems technology in the context of many real-life
problems drawn from within the corporation and those of customers. In
addition, the group fosters basic research in AI by arrangements with
leading universities. We plan to briefly describe some interesting
applications. However, to satisfy your appetite for technical content, we
will describe in some detail our progress on Beta, a tool for automating
diagnosis.
The communication structure level is a knowledge level at which certain
kinds of diagnostic reasoning can occur. It is an intermediate level between
the level at which current expert systems are designed (using knowledge
acquired from experts) and the level at which ``deep reasoning'' systems
perform (based on knowledge of structure, function, and behavior of the
system being diagnosed). We present an example of an expert system that was
designed the old-fashioned way and the heuristics that were the basis for
recognizing the existence of the communication structure level.
Beta is a language for specifying the communication structure of a system so
that these heuristics can be compiled into a partially automatically
generated program for diagnosing system problems. The current version of
Beta can handle a specific class of communication structure that we call a
``control hierarchy'' and can analyze historical usage and error data
maintained as a log file. The compiler embeds the heuristics in a generated
mix of OPS5 and C code. We believe that Beta is a better way for designers
and programmers who are not AI experts to express their knowledge of a
system than the current rule-based or frame-based formalisms.
------------------------------
Date: Thu, 2 Oct 86 15:12:49 EDT
From: "Elisha P. Sacks" <elisha%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - Temporal Logic (MIT)
E. Taatnoon
"The Third Cybernetics and Temporal Logic"
I aim to link up the concepts of system bifurcation and system
catastrophe with temporal logic in order to show the applicability of
dialectical reasoning to metamorphic system transformations. A system
catastrophe is an innovation resulting from reorganization resulting
from a switch from positive to negative feedback or vice versa. The
subsystems would then be oscillators and the truth of any descriptive
statement is then distributive. Such oscillations would produce an
uncertainty in the temporal trajectory of the system which would
increase both towards the past and the future. This means that time
is not a scalar dimension, but a quadratic paraboloid distribution of
converging and diverging transition probabilities. A social system
composed of such oscillators would be heterarchical rather than
hierarchical.
Refreshments.
Hosts: Dennis Fogg and Boaz Ben-Zvi
Place: 8th Floor Playroom
Time: Noon
------------------------------
Date: 30 Sep 86 0947 PDT
From: Carolyn Talcott <CLT@SAIL.STANFORD.EDU>
Subject: Seminar - Program Transformations and Parallel Lisp (SU)
Speaker: James M. Boyle, Argonne National Laboratory
Time: Monday, October 6, 4pm
Place: 252 Margaret Jacks (Stanford Computer Science Dept)
Deriving Parallel Programs
from Pure LISP Specifications
by Program Transformation
Dr. James M. Boyle
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL 60439-4844
boyle@anl-mcs.arpa
How can one implement a "dusty deck" pure Lisp program on global-memory
parallel computers? Fortunately, pure Lisp programs have a declarative
interpretation, which protects their decks from becoming too dusty!
This declarative interpretation means that a pure Lisp program is not
over-specified in the direction of sequential execution. Thus there is
hope of detecting parallelism automatically in pure Lisp programs.
In this talk I shall describe a stepwise refinement of pure Lisp
programs that leads to a parallel implementation. From this point of view,
the pure Lisp program is an abstract specification, which program
transformations can refine in several steps to a parallel program. I shall
describe some of the transformations--correctness-preserving rewrite
rules--used to carry out the implementation.
An important property of a parallel program is whether it can
deadlock. I shall discuss a number of the design decisions involved in the
refinement and their role in preserving the correctness of the transformed
program, especially with regard to deadlock.
Implementing a transformational refinement often leads to interesting
insights about programming. I shall discuss some of these insights,
including one about the compilation of recursive programs, and some that
suggest ways to systematically relax the "purity" requirement on the Lisp
program being implemented.
We have used this approach to implement a moderately large pure Lisp
program (1300 lines, 42 functions) on several parallel machines, including
the Denelcor HEP (r.i.p.), the Encore Multimax, the Sequent Balance 8000,
and the Alliant FX/8. I shall discuss some measurements of the performance
of this program, which has achieved a speedup of 12.5 for 16 processors on
realistic data, and some of the optimizations used to obtain this
performance.
Oh, yes, and by the way, the transformations produce a parallel
program in FORTRAN!
------------------------------
Date: 01 Oct 86 1134 PDT
From: Martin Abadi <MA@SAIL.STANFORD.EDU>
Subject: Seminar - Temporal Theorem Proving (SU)
PhD Oral Examination
Wednesday, October 8, 2:15 PM
Margaret Jacks Hall 146
Temporal Theorem Proving
Martin Abadi
Computer Science Department
In the last few years, temporal logic has been applied in the
specification, verification, and synthesis of concurrent programs, as
well as in the synthesis of robot plans and in the verification of
hardware devices. Nevertheless, proof techniques for temporal logic
have been quite limited up to now.
This talk presents a novel proof system R for temporal logic. Proofs are
generally short and natural. The system is based on nonclausal resolution,
an attractive classical logic method, and involves a special treatment of
quantifiers and modal operators.
Unfortunately, no effective proof system for temporal logic is
complete. We examine soundness and completeness issues for R and other
systems. For example, a simple extension of our resolution system is
as powerful as Peano Arithmetic. (Fortunately, refreshments will
follow the talk.)
Like classical resolution, temporal resolution suggests an approach to
logic programming. We explore temporal logic as a programming language
and a temporal resolution theorem prover as an interpreter for
programs in this language.
Other modal logics have found a variety of uses in artificial
intelligence and in the analysis of distributed systems. We discuss
resolution systems analogous to R for the modal logics K, T, K4, S4,
S5, D, D4, and G.
------------------------------
Date: Sat, 4 Oct 86 12:30:41 EDT
From: "Steven A. Swernofsky" <SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - Efficient Unification of Quantified Terms (MIT)
From: Susan Hardy <SH at XX.LCS.MIT.EDU>
JOHN STAPLES
University of Queensland
Efficient unification of quantified terms
DATE: Tuesday, October 7, 1986
TIME: 2:45 pm. - Refreshments
3:00 pm. - Talk
PLACE: 2nd Floor Lounge
Quantifiers such as for-all, integral signs, and block headings would be
a valuable enrichment of the vocabulary of a logic programming language
or other computational logic. The basic technical prerequisite is a
suitable unification algorithm. A programme is sketched for the
development of data structures and algorithms which efficiently
support the use of quantified terms. Progress in carrying out this
programme is reviewed. Both structure-sharing and non-structure-sharing
representations of quantified terms are described, together
with a unification algorithm for each case. The efficiency of the
approach results from the techniques used to represent terms,
which enable naive substitution to implement correct substitution
for quantified terms. The work is joint with Peter J. Robinson.
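As background to the abstract, classical first-order unification (the "basic technical prerequisite" mentioned above) can be sketched briefly. This shows only the quantifier-free case; the quantified-term handling that is the talk's actual subject is far more involved, and the term encoding here is invented.

```python
# Robinson-style first-order unification. Terms are tuples
# ('f', arg, ...); variables are strings beginning with '?'.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    # Follow variable bindings in substitution s to a representative.
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    # Occurs check: does variable v appear inside term t under s?
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a substitution unifying t1 and t2, or None on failure."""
    if s is None:
        s = {}
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if is_var(t2):
        return unify(t2, t1, s)
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# Unify f(?x, g(?y)) with f(g(a), ?x): ?x -> g(a), ?y -> a.
print(unify(("f", "?x", ("g", "?y")), ("f", ("g", "a"), "?x")))
```

The efficiency techniques in the abstract address exactly what this naive version does poorly: here substitution is chased link by link, whereas the structure-sharing representations let naive substitution stand in for correct substitution even in the presence of quantifiers.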
HOST: Arvind
------------------------------
Date: Sat, 4 Oct 86 13:04:33 EDT
From: "Steven A. Swernofsky" <SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - Planning Simultaneous Actions (BBN)
From: Brad Goodman <BGOODMAN at BBNG.ARPA>
BBN Laboratories
Science Development Program
AI/Education Seminar
Speaker: Professor James Allen
University of Rochester
(james@rochester)
Title: Planning Simultaneous Actions in Temporally Rich Worlds
Date: 10:30a.m., Monday, October 6th
Location: 3rd floor large conference room,
BBN Labs, 10 Moulton Street, Cambridge
This talk describes work done with Richard Pelavin over the last few
years. We have developed a formal logic of action that allows us to
represent knowledge and reason about the interactions between events
that occur simultaneously or overlap in time. This includes interactions
between two (or more) actions that a single agent might perform
simultaneously, as well as interactions between an agent's actions and
events occurring in the external world. The logic is built upon an
interval-based temporal logic extended with modal operators similar to
temporal necessity and a counterfactual operator. Using this formalism,
we can represent a wide range of possible ways in which actions may
interact.
------------------------------
Date: Thu, 2 Oct 86 11:34 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Cognitive Architecture (UPenn)
WHAT IS THE SHAPE OF THE COGNITIVE ARCHITECTURE?
Allen Newell
Computer Science Department
Carnegie Mellon University
12:00 noon, October 17
Alumni Hall, Towne Building
University of Pennsylvania
The architecture plays a critical role in computational systems, defining
the separation between structure and content, and hence the capability of
being programmed. All architectures have much in common. However, important
characteristics depend on which mechanisms occur in the architecture (rather
than in software) and what shape they take. There has been much research
recently on architectures throughout computer and cognitive science. Within
computer science the main drivers have been new hardware technologies (VLSI)
and the felt need for parallelism. Within cognitive science the main drivers
have been the hope of comprehensive psychological models (ACT*), the urge to
ground the architecture in neurophysiological mechanisms (the
connectionists) and the proposal of modularity as a general architectural
principle (from linguistics). The talk will be on human cognitive
architecture, but considerations will be brought to bear from everywhere.
------------------------------
End of AIList Digest
********************
∂06-Oct-86 0348 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #205
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 6 Oct 86 03:48:28 PDT
Date: Sun 5 Oct 1986 22:25-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #205
To: AIList@SRI-STRIPE
AIList Digest Monday, 6 Oct 1986 Volume 4 : Issue 205
Today's Topics:
Humor - AI Limericks by Henry Kautz,
AI Tools - Turbo Prolog & Reference Counts vs Garbage Collection,
Philosophy - Emergent Consciousness & Perception
----------------------------------------------------------------------
Date: Fri, 3 Oct 86 14:58:37 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: AI Limericks by Henry Kautz
gleaned from the pages of CANADIAN ARTIFICIAL INTELLIGENCE
September 1986 No. 9 page 6:
AI Limericks
by Henry Kautz
University of Rochester
*** ***
If you're dull as a napkin, don't sigh;
Make your name as a "deep" sort of guy.
You just have to crib, see
Any old book by Kripke
And publish in AAAI.
*** ***
A hacker who studied ontology
Was famed for his sense of frivolity.
When his program inferred
That Clyde is a bird
He blamed not his code but zoology.
*** ***
If your thesis is utterly vacuous
Use first-order predicate calculus.
With sufficient formality
The sheerest banality
Will be hailed by the critics: "Miraculous!"
If your thesis is quite indefensible
Reach for semantics intensional.
Your committee will stammer
Over Montague grammar
Not admitting it's incomprehensible.
------------------------------
Date: Fri, 26 Sep 86 11:40 PDT
From: JREECE%sc.intel.com@CSNET-RELAY.ARPA
Subject: Turbo Prolog - Yet Another Opinion
Although Turbo Prolog has been characterized by some wags as a "brain-dead
implementation" I think its mixture of strengths and weaknesses would be more
accurately described as those of an idiot savant. Some of the extensions,
such as the built-in string editor predicates, are positively serendipitous,
and you get most of the development time advantages of a fifth generation
language for a conventional application plus good runtime performance for
only $70. On the other hand, one tires quickly of writing NP-incomplete sets
of type declarations which are unnecessary in any other implementation....
If nothing else, for $70 you can prototype something that can be used to justify
spending $700 for a real PC Prolog compiler, or $18,000 for a VAX
implementation.
John Reece
Intel
------------------------------
Date: Fri, 26 Sep 86 18:52:26 CDT
From: neves@ai.wisc.edu (David M. Neves)
Reply-to: neves@ai.wisc.edu (David M. Neves)
Subject: Re: Xerox vs Symbolics -- Reference counts vs Garbage collection
When I was using MIT Lisp Machines (soon to become Symbolics) years
ago nobody used the garbage collector because it slowed down the
machine and was somewhat buggy. Instead people operated for hours/days
until they ran out of space and then rebooted the machine. The only
time I turned on the garbage collector was to compute 10000 factorial.
Do current Symbolics users use the garbage collector?
"However, it is apparent that reference counters will never
reclaim circular list structure."
This is a common complaint about reference counters. However, I don't
believe there are very many circular data structures in real Lisp code.
Has anyone looked into this? Has any Xerox user run out of space
because of circular data structures in their environment?
--
David Neves, Computer Sciences Department, University of Wisconsin-Madison
Usenet: {allegra,heurikon,ihnp4,seismo}!uwvax!neves
Arpanet: neves@rsch.wisc.edu
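The quoted complaint about circular structure can be made concrete with a tiny simulation. Python is used for illustration, and this is a caricature of pure reference counting, not how the Interlisp-D or Symbolics collectors are actually implemented.

```python
# Two cells point at each other; after the last external reference is
# dropped, both counts stay at 1, so a pure reference counter never
# reclaims them. A tracing garbage collector, starting from the (now
# empty) root set, would reclaim both.

class Cell:
    def __init__(self, name):
        self.name, self.refcount, self.ptr = name, 0, None

heap = []

def new_cell(name):
    c = Cell(name)
    heap.append(c)
    return c

def set_ptr(cell, target):
    # Maintain counts as pointers are overwritten.
    if cell.ptr:
        cell.ptr.refcount -= 1
    cell.ptr = target
    if target:
        target.refcount += 1

a, b = new_cell("a"), new_cell("b")
a.refcount += 1                 # one external (stack/root) reference to a
set_ptr(a, b)                   # the circular structure:
set_ptr(b, a)                   #   a -> b -> a
a.refcount -= 1                 # drop the external reference

print([(c.name, c.refcount) for c in heap])  # [('a', 1), ('b', 1)]
```

Both counts remain nonzero even though neither cell is reachable, which is precisely why a reference-counting system needs an occasional tracing pass (or a world rebuild) to recover cyclic garbage.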
------------------------------
Date: 26 Sep 86 15:35:00 GMT
From: princeton!siemens!steve@CAIP.RUTGERS.EDU
Subject: Garb Collect Symb vs Xerox
I received mail that apparently also went onto the net, from Dan Hoey
(hoey@nrl-aic.ARPA). He discussed garbage collection in response to
my unsupported allegation that, "S[ymbolics] talks about their garbage
collection more, but X[erox]'s is better." I am glad to see someone
taking up an informed discussion in this area.
First, I briefly recap his letter, eliding (well-put) flames:
+ In the language of computer
+ science, Xerox reclaims storage using a ``reference counter''
+ technique, rather than a ``garbage collector.''
+ If we are to believe Xerox, the reference counter
+ technique is fundamentally faster, and reclaims acceptable amounts of
+ storage. However, it is apparent that reference counters will never
+ reclaim circular list structure. As a frequent user of circular list
+ structure (doubly-linked lists, anyone?), I find the lack tantamount to
+ a failure to reclaim storage.
+ I have never understood why Xerox continues to neglect to write a
+ garbage collector. It is not necessary to stop using reference counts,
+ but simply to have a garbage collector available for those putatively
+ rare occasions when they run out of memory.
+ Dan Hoey
Xerox's system is designed for highly interactive use on a personal
workstation (sound familiar?). They spread the work of storage reclamation
evenly throughout the computation by keeping reference counts. Note that
they have many extra tricks such as "References from the stack are not
counted, but are handled separately at "sweep" time; thus the vast majority
of data manipulations do not cause updates to [the reference counts]"
(Interlisp-D Reference Manual, October, 1985). Even if this scheme were
to use a greater total amount of CPU time than typical garbage collection,
it would remain more acceptable for use on a personal, highly interactive
workstation. I have no idea how it can be compared to Symbolics for overall
performance, without comparing the entire Interlisp vs. Zetalisp systems.
Nevertheless, I can say that my experience is that Interlisp runs a "G.C."
every few seconds and it lasts, subjectively, an eyeblink. Occasionally
I get it to take longer, for example when I zero my pointers to 1500 arrays
in one fell swoop. I have some figures from one application, too. An
old, shoddy implementation ran 113 seconds CPU and 37.5 seconds GC (25% GC).
A decent implementation of the same program, running a similar problem twice,
got 145 seconds CPU, but 10.8 and 20.3 seconds GC (6.9% and 12% GC). (The
good implementation still doesn't have a good hashing function so it's still
slower.) I cannot claim that these figures are representative. I have
heard horror stories about other Lisps' GCs,
although I don't have any feel for Symbolics's "Ephemeral GC".
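For the record, the quoted percentages work out as GC time over total
(CPU plus GC) time, which a quick check confirms:

```python
# GC overhead = gc / (cpu + gc), matching the percentages quoted above.
def gc_fraction(cpu_seconds, gc_seconds):
    return gc_seconds / (cpu_seconds + gc_seconds)

print(round(100 * gc_fraction(113, 37.5), 1))   # ~24.9, i.e. "25% GC"
print(round(100 * gc_fraction(145, 10.8), 1))   # ~6.9,  i.e. "6.9% GC"
print(round(100 * gc_fraction(145, 20.3), 1))   # ~12.3, i.e. "12% GC"
```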
I have a strong feeling Xerox has other tricks besides the one about the
stack, which they don't want to tell anyone. I know they recently changed
the reference counter: instead of treating 16 or more references as
"infinity" (and thus never reclaimable), an overflow scheme now squirrels
the count away somewhere else when it grows too big.
Finally, the amount of unreclaimed garbage (e.g. circular lists) normally
grows much more slowly than memory fragmentation does, so you have to rebuild
your world before unreclaimed garbage becomes a problem anyway.
Postfinally, Xerox makes a big deal that their scheme takes time proportional
to the number of objects reclaimed, while traditional systems take time
proportional to the number of objects allocated. I think Symbolics's
ephemeral scheme is a clever way to consider only subsets of the universe
of allocated objects, that are most likely to have garbage. I wish I knew
whether it is a band-aid or an advance in the state-of-the-art.
Absolutely ultimately: the "traditional" GC I refer to is known as
"mark-and-sweep".
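For contrast, here is a minimal mark-and-sweep pass (a toy sketch, not any
vendor's implementation). Because reachability is traced from the roots
rather than counted, it reclaims exactly the cycles that reference counting
misses:

```python
# Toy mark-and-sweep: trace everything reachable from the roots,
# then free (here: drop) everything left unmarked.
class Node:
    def __init__(self):
        self.refs = []       # outgoing references
        self.marked = False

def mark(node):
    if node.marked:
        return
    node.marked = True
    for child in node.refs:
        mark(child)

def collect(heap, roots):
    for node in heap:
        node.marked = False
    for root in roots:                     # mark phase
        mark(root)
    return [n for n in heap if n.marked]   # sweep phase: survivors only

a, b, live = Node(), Node(), Node()
a.refs.append(b)
b.refs.append(a)                # a <-> b cycle, unreachable from any root
survivors = collect([a, b, live], roots=[live])
assert survivors == [live]      # the cycle is reclaimed
```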
Steve Clark {topaz or ihnp4}!princeton!siemens!steve
------------------------------
Date: Mon, 29 Sep 86 14:07:12 edt
From: segall@caip.rutgers.edu (Ed Segall)
Subject: Re: Emergent Consciousness
Why must we presume that the seat of consciousness must be in the form
of neural "circuits"? What's to prevent it from being a symbolic,
logical entity, rather than a physical entity? After all, the "center
of control" of most computers is some sort of kernel program, running
on the exact same hardware as the other programs. (Don't try to push
the analogy too far, you can probably find a hole in it.) Perhaps the
hierarchical system referred to is also not structural.
Might the brain operate even more like a conventional computer than we
realize, taking the role of an extremely sophisticated
(self-modifying) interpreter? The "program" that is interpreted is the
pattern of firings occurring at any given time. If this is so, then
moment-to-moment thought is almost completely in terms of the dynamic
information contained in neural signals, rather than the quasi-static
information contained in neural interconnections. The neurons simply
serve to "run" the thoughts. This seems obvious to me, since I am
assuming that neural firings can process information much faster than
structural changes in neurons.
I'd be interested to know at about what rate neuron firings occur in the
brain, and if anyone has an intelligent guess as to how much
information can be stored at once in the "dynamic" form of firings
rather than the "static" form of interconnections.
I apologize in advance if what I suggest goes against well-understood
knowledge (not theory) of how the brain operates. My information is
from the perspective of a lay person, not a cognitive scientist.
------------------------------
Date: Mon, 29 Sep 86 09:34:01 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Consciousness as bureaucracy
Ken Laws' analogy between Bureaucracy and Man--more precisely, Man's
Mind--has been anticipated by Marvin Minsky. I do not have the reference;
I think it was a rather broad article in a general science journal.
As I recall, the theory that Minsky proposed lay somewhere between the
lay concept of self and the Zen concept. It seemed to suggest that
consciousness is an illusion to itself, but a genuine and observable
phenomenon to an outside observer, characterizable with the metaphor
of bureaucracy. Perhaps some AIList reader can identify the article.
Emergent consciousness has always been a hope of A.I. I side with
those who suggest that consciousness depends on contact with the world
... even though I know some professors who seem to be counter-examples!
:-)
------------------------------
Date: 2 Oct 86 17:14:00 EDT
From: "FENG, THEO-DRIC" <theo@ari-hq1.ARPA>
Reply-to: "FENG, THEO-DRIC" <theo@ari-hq1.ARPA>
Subject: Perception
I just ran across the following report and thought it might contribute
something to the discussion on the "perception" of reality. (I'll try to
summarize the report where I can.)
according to
Thomas Saum in the
German Research Service,
Special Press Reports,
Vol. II, No. 7/86
A group of biologists at Bremen University has been furthering the theory
developed by Maturana and Varela (both from Chile) in the late '70s that
the brain neither reflects nor reproduces reality. They suggest that the
brain creates its own reality.
Gerhard Roth, a prof. of behavioral physiology at Bremen (with doctorates in
philosophy and biology), has written several essays on the subject. In one, he
...writes that in the past the "aesthesio-psychological perspective"
of the psychosomatic problem was commonly held by both laypersons
and scientists. This train of thought claims that the sensory organs
reproduce the world at least partially and convey this image to the
brain, where it is then reassembled ("reconstructed") into a uniform
concept. In other words, this theory maintains that the sense organs
are the brain's gateway to the world.
In order to illustrate clearly the incorrectness of this view,
Roth suggests that the perspectives be exchanged: if one looks at
the problem of perception from the brain's angle, instead of the
sense organs', the brain merely receives uniform and basically
homogeneous bioelectric signals from the nerve tracts. It is
capable of determining the intensity of the sensory agitation from
the frequency of these signals, but this is all it can do. The
signals provide no information on the quality of the stimulation,
for instance on whether an object is red or green. Indeed, they do
not even say anything about the modality of the stimulus, i.e.
whether it is an optical, acoustical, or chemical stimulation.
The constructivists [as these new theoreticians are labeled]
believe that the brain is a self-contained system. Its only access
to the world consists of the uniform code of the nerve signals, which
have nothing in common with the original stimuli. Since the brain
has no original image, it cannot possibly "reproduce" reality; it
has to create it itself. "It (the brain) has to reconstruct the
diversity of the outside world from the uniform language of the
neurons", Roth claims. The brain accomplishes this task by
"interpreting itself", i.e. by construing what is going on inside
itself. Thus, the brain "draws conclusions" about the modality of
the original stimulus from the degree to which it is agitated: all
neuronal impulses reaching the occipital cortex, for example, are
visual impressions.
This isolated nature of the brain and its reality, however, are
by no means a blunder on the part of nature; indeed, they are not
even a necessary evil, Roth explains. On the contrary, it is an
adaptive advantage acquired by more highly developed creatures
during the course of their phylogenetic development. If the brain
had direct access to the environment, Roth argues, then one and
the same stimulus would necessarily always result in one and the
same reaction by the organism. Since, however, the human brain has
retained a certain amount of creative scope for its reconstruction
of reality, it is in a position to master complicated situations
and adapt itself to unforeseen circumstances.
Only in this way is it possible to recognize an object in different
light intensities, from a new angle of vision, or at a distance.
Even experiments with "reversal spectacles" demonstrate man's powers
of adaptation in interpreting reality: after a little while, test
subjects, who see the world upside down through the special glasses,
simply turn their environment around again in their "mind". When,
after a few days, they remove the spectacles, the "real" world
suddenly seems to be standing on its head.
This mobility and adaptability on the part of our perceptive
faculties were obviously much more important for the evolution of
more highly developed vertebrates than was a further intensification
of the signal input by the sense organs. The million fibers in
man's optic nerve are only double the number in a frog's; the human
brain, on the other hand, has one hundred thousand times more nerve
cells than a frog brain. But first and foremost, the "reality
workshop", i.e., the cerebral area not tied to a specific sense,
has expanded during the evolution of man's brain, apparently to
the benefit of our species.
Contact: Prof. Dr. Gerhard Roth, Forschungsschwerpunkt Biosystemforschung,
Universität Bremen, Postfach 330 440,
D-2800 Bremen 33, West Germany.
[conveyed by Theo@ARI]
------------------------------
End of AIList Digest
********************
∂06-Oct-86 0551 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #206
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 6 Oct 86 05:51:00 PDT
Date: Sun 5 Oct 1986 22:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #206
To: AIList@SRI-STRIPE
AIList Digest Monday, 6 Oct 1986 Volume 4 : Issue 206
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 27 Sep 86 14:20:21 GMT
From: princeton!mind!harnad@caip.rutgers.edu (Stevan Harnad)
Subject: Searle, Turing, Symbols, Categories
The following are the Summary and Abstract, respectively, of two papers
I've been giving for the past year on the colloquium circuit. The first
is a joint critique of Searle's argument AND of the symbolic approach
to mind-modelling, and the second is an alternative proposal and a
synthesis of the symbolic and nonsymbolic approach to the induction
and representation of categories.
I'm about to publish both papers, but on the off chance that
there is still a conceivable objection that I have not yet rebutted,
I am inviting critical responses. The full preprints are available
from me on request (and I'm still giving the talks, in case anyone's
interested).
***********************************************************
Paper #1:
(Preprint available from author)
MINDS, MACHINES AND SEARLE
Stevan Harnad
Behavioral & Brain Sciences
20 Nassau Street
Princeton, NJ 08542
Summary and Conclusions:
Searle's provocative "Chinese Room Argument" attempted to
show that the goals of "Strong AI" are unrealizable.
Proponents of Strong AI are supposed to believe that (i) the
mind is a computer program, (ii) the brain is irrelevant,
and (iii) the Turing Test is decisive. Searle's point is
that since the programmed symbol-manipulating instructions
of a computer capable of passing the Turing Test for
understanding Chinese could always be performed instead by a
person who could not understand Chinese, the computer can
hardly be said to understand Chinese. Such "simulated"
understanding, Searle argues, is not the same as real
understanding, which can only be accomplished by something
that "duplicates" the "causal powers" of the brain. In the
present paper the following points have been made:
1. Simulation versus Implementation:
Searle fails to distinguish between the simulation of a
mechanism, which is only the formal testing of a theory, and
the implementation of a mechanism, which does duplicate
causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be
expected to understand than a simulated airplane can be
expected to fly. Nevertheless, a successful simulation must
capture formally all the relevant functional properties of a
successful implementation.
2. Theory-Testing versus Turing-Testing:
Searle's argument conflates theory-testing and Turing-
Testing. Computer simulations formally encode and test
models for human perceptuomotor and cognitive performance
capacities; they are the medium in which the empirical and
theoretical work is done. The Turing Test is an informal and
open-ended test of whether or not people can discriminate
the performance of the implemented simulation from that of a
real human being. In a sense, we are Turing-Testing one
another all the time, in our everyday solutions to the
"other minds" problem.
3. The Convergence Argument:
Searle fails to take underdetermination into account. All
scientific theories are underdetermined by their data; i.e.,
the data are compatible with more than one theory. But as
the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This
"convergence" constraint applies to AI's "toy" linguistic
and robotic models as well, as they approach the capacity to
pass the Total (asymptotic) Turing Test. Toy models are not
modules.
4. Brain Modeling versus Mind Modeling:
Searle also fails to note that the brain itself can be
understood only through theoretical modeling, and that the
boundary between brain performance and body performance
becomes arbitrary as one converges on an asymptotic model of
total human performance capacity.
5. The Modularity Assumption:
Searle implicitly adopts a strong, untested "modularity"
assumption to the effect that certain functional parts of
human cognitive performance capacity (such as language) can
be successfully modeled independently of the rest (such
as perceptuomotor or "robotic" capacity). This assumption
may be false for models approaching the power and generality
needed to pass the Total Turing Test.
6. The Teletype versus the Robot Turing Test:
Foundational issues in cognitive science depend critically
on the truth or falsity of such modularity assumptions. For
example, the "teletype" (linguistic) version of the Turing
Test could in principle (though not necessarily in practice)
be implemented by formal symbol-manipulation alone (symbols
in, symbols out), whereas the robot version necessarily
calls for full causal powers of interaction with the outside
world (seeing, doing AND linguistic understanding).
7. The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled
ones. They have added on robotic requirements as an
arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based
on the logical fact that transduction is necessarily
nonsymbolic, drawing on analog and analog-to-digital
functions that can only be simulated, but not implemented,
symbolically.
8. Robotics and Causality:
Searle's argument hence fails logically for the robot
version of the Turing Test, for in simulating it he would
either have to USE its transducers and effectors (in which
case he would not be simulating all of its functions) or he
would have to BE its transducers and effectors, in which
case he would indeed be duplicating their causal powers (of
seeing and doing).
9. Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in
principle accomplish the functions of the transducer and
effector surfaces, then there is no reason why every
function in between has to be symbolic either. Nonsymbolic
function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental
states ("robotic functionalism"): In order to work as
hypothesized, the functionalist's "brain-in-a-vat" may have
to be more than just an isolated symbolic "understanding"
module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.
10. "Strong" versus "Weak" AI:
Finally, it is not at all clear that Searle's "Strong
AI"/"Weak AI" distinction captures all the possibilities, or
is even representative of the views of most cognitive
scientists.
Hence, most of Searle's argument turns out to rest on
unanswered questions about the modularity of language and
the scope of the symbolic approach to modeling cognition. If
the modularity assumption turns out to be false, then a
top-down symbol-manipulative approach to explaining the mind
may be completely misguided because its symbols (and their
interpretations) remain ungrounded -- not for Searle's
reasons (since Searle's argument shares the cognitive
modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the
kind of hybrid, bottom-up processing that may then turn out
to be optimal, or even essential, in between transducers and
effectors). What is undeniable is that a successful theory
of cognition will have to be computable (simulable), if not
exclusively computational (symbol-manipulative). Perhaps
this is what Searle means (or ought to mean) by "Weak AI."
*************************************************************
Paper #2:
(To appear in: "Categorical Perception"
S. Harnad, ed., Cambridge University Press 1987
Preprint available from author)
CATEGORY INDUCTION AND REPRESENTATION
Stevan Harnad
Behavioral & Brain Sciences
20 Nassau Street
Princeton NJ 08542
Categorization is a very basic cognitive activity. It is
involved in any task that calls for differential responding,
from operant discrimination to pattern recognition to naming
and describing objects and states-of-affairs. Explanations
of categorization range from nativist theories denying that
any nontrivial categories are acquired by learning to
inductivist theories claiming that most categories are learned.
"Categorical perception" (CP) is the name given to a
suggestive perceptual phenomenon that may serve as a useful
model for categorization in general: For certain perceptual
categories, within-category differences look much smaller
than between-category differences even when they are of the
same size physically. For example, in color perception,
differences between reds and differences between yellows
look much smaller than equal-sized differences that cross
the red/yellow boundary; the same is true of the phoneme
categories /ba/ and /da/. Indeed, the effect of the category
boundary is not merely quantitative, but qualitative.
There have been two theories to explain CP effects. The
"Whorf Hypothesis" explains color boundary effects by
proposing that language somehow determines our view of
reality. The "motor theory of speech perception" explains
phoneme boundary effects by attributing them to the patterns
of articulation required for pronunciation. Both theories
seem to raise more questions than they answer, for example:
(i) How general and pervasive are CP effects? Do they occur
in other modalities besides speech-sounds and color? (ii)
Are CP effects inborn or can they be generated by learning
(and if so, how)? (iii) How are categories internally
represented? How does this representation generate
successful categorization and the CP boundary effect?
Some of the answers to these questions will have to come
from ongoing research, but the existing data do suggest a
provisional model for category formation and category
representation. According to this model, CP provides our
basic or elementary categories. In acquiring a category we
learn to label or identify positive and negative instances
from a sample of confusable alternatives. Two kinds of
internal representation are built up in this learning by
"acquaintance": (1) an iconic representation that subserves
our similarity judgments and (2) an analog/digital feature-
filter that picks out the invariant information allowing us
to categorize the instances correctly. This second,
categorical representation is associated with the category
name. Category names then serve as the atomic symbols for a
third representational system, the (3) symbolic
representations that underlie language and that make it
possible for us to learn by "description."
This model provides no particular or general solution to the
problem of inductive learning, only a conceptual framework;
but it does have some substantive implications, for example,
(a) the "cognitive identity of (current) indiscriminables":
Categories and their representations can only be provisional
and approximate, relative to the alternatives encountered to
date, rather than "exact." There is also (b) no such thing
as an absolute "feature," only those features that are
invariant within a particular context of confusable
alternatives. Contrary to prevailing "prototype" views,
however, (c) such provisionally invariant features MUST
underlie successful categorization, and must be "sufficient"
(at least in the "satisficing" sense) to subserve reliable
performance with all-or-none, bounded categories, as in CP.
Finally, the model brings out some basic limitations of the
"symbol-manipulative" approach to modeling cognition,
showing how (d) symbol meanings must be functionally
anchored in nonsymbolic, "shape-preserving" representations
-- iconic and categorical ones. Otherwise, all symbol
interpretations are ungrounded and indeterminate. This
amounts to a principled call for a psychophysical (rather
than a neural) "bottom-up" approach to cognition.
------------------------------
Date: Mon 29 Sep 86 09:55:11-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: Searle's logic
I try not to get involved in these arguments, but Bruce Krulwich's assertion
that Searle 'bases all his logic on' the binary nature of computers is
seriously wrong. We could have hardware which worked with direct, physical
embodiments of all of Shakespeare, and Searle's arguments would apply to it
just as well. What bothers him (and many other philosophers) is the idea
that the machine works by manipulating SYMBOLIC descriptions of its
environment (or whatever it happens to be thinking about). It's the internal
representation idea, which we AIers take in with our mother's milk, that he
finds so silly and directs his arguments against.
Look, I also don't think there's any real difference between a human's
knowledge of a horse and a machine's manipulation of the symbol it is using
to represent it. But Searle has some very penetrating arguments against this
idea, and one doesn't make progress by just repeating one's intuitions; one
has to understand his arguments and explain what is wrong with them. Start
with the Chinese Room, and read all his replies to the simple
counterarguments as well, THEN come back and help us.
Pat Hayes
------------------------------
Date: 1 Oct 86 18:25:16 GMT
From: cbatt!cwruecmp!cwrudg!rush@ucbvax.Berkeley.EDU (rush)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)
In article <158@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>6. The Teletype versus the Robot Turing Test:
>
>For example, the "teletype" (linguistic) version of the Turing...
> whereas the robot version necessarily
>calls for full causal powers of interaction with the outside
>world (seeing, doing AND linguistic understanding).
>
Uh...I never heard of the "robot version" of the Turing Test,
could someone please fill me in?? I think that understanding
the reasons for such a test would help me (I make
no claims for anyone else) make some sense out of the rest
of this article. In light of my lack of knowledge, please forgive
my presumption in the following comment.
>7. The Transducer/Effector Argument:
>
>A principled
>"transducer/effector" counterargument, however, can be based
>on the logical fact that transduction is necessarily
>nonsymbolic, drawing on analog and analog-to-digital
>functions that can only be simulated, but not implemented,
>symbolically.
>
[ I know I claimed no commentary, but it seems that this argument
depends heavily on the meaning of the term "symbol". This could
be a problem that only arises when one attempts to implement some
of the stranger possibilities for symbolic entities. ]
Richard Rush - Just another Jesus freak in computer science
decvax!cwruecmp!cwrudg!rush
------------------------------
Date: 2 Oct 86 16:05:28 GMT
From: princeton!mind!harnad@caip.rutgers.edu (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)
In his commentary-not-reply to my <158@mind.UUCP>, Richard Rush
<150@cwrudge.UUCP> asks:
(1)
> I never heard of the "robot version" of the Turing Test,
> could someone please fill me in?
He also asks (in connection with my "transducer/effector" argument)
about the analog/symbolic distinction:
(2)
> I know I claimed no commentary, but it seems that this argument
> depends heavily on the meaning of the term "symbol". This could
> be a problem that only arises when one attempts to implement some
> of the stranger possibilities for symbolic entities.
In reply to (1): The linguistic version of the Turing Test (Turing's
original version) is restricted to linguistic interactions:
language-in/language-out. The robotic version requires the candidate
system to operate on objects in the world. In both cases the (Turing)
criterion is whether the system can PERFORM indistinguishably from a human
being. (The original version was proposed largely so that your
judgment would not be prejudiced by the system's nonhuman appearance.)
On my argument the distinction between the two versions is critical,
because the linguistic version can (in principle) be accomplished by
nothing but symbols-in/symbols-out (and symbols in between) whereas
the robotic version necessarily calls for non-symbolic processes
(transducer, effector, analog and A/D). This may represent a
substantive functional limitation on the symbol-manipulative approach
to the modeling of mind (what Searle calls "Strong AI").
In reply to (2): I don't know what "some of the stranger possibilities
for symbolic entities" are. I take symbol-manipulation to be
syntactic: Symbols are arbitrary tokens manipulated in accordance with
certain formal rules on the basis of their form rather than their meaning.
That's symbolic computation, whether it's done by computer or by
paper-and-pencil. The interpretations of the symbols (and indeed of
the manipulations and their outcomes) are ours, and are not part of
the computation. Informal and figurative meanings of "symbol" have
little to do with this technical concept.
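Harnad's sense of "syntactic" can be made concrete with a toy rewrite
system (an invented illustration, not an example from his papers). The rules
below mention only token shapes, never meanings; that the result "is"
addition of unary numerals is entirely our interpretation:

```python
# A toy formal symbol system: rules fire on token shape alone.
# Tokens: "|" (a stroke) and "#" (an uninterpreted separator).
RULES = [
    ("|#", "#|"),   # move a stroke across the separator
    ("#",  ""),     # when no strokes remain to its left, drop the separator
]

def rewrite(s):
    """Apply the first matching rule repeatedly until none applies."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                changed = True
                break
    return s

# Under OUR interpretation "||#|||" means 2 + 3; the system itself just
# shuffles uninterpreted tokens and happens to halt with five strokes.
print(rewrite("||#|||"))   # -> "|||||"
```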
Symbols as arbitrary syntactic tokens in a formal system can be
contrasted with other kinds of objects. The ones I singled out in my
papers were "icons" or analogs of physical objects, as they occur in
the proximal physical input/output in transduction, as they occur in
the A-side of A/D and D/A transformations, and as they may function in
any part of a hybrid system to the extent that their functional role
is not merely formal and syntactic (i.e., to the extent that their
form is not arbitrary and dependent on convention and interpretation
to link it to the objects they "stand for," but rather, the link is
one of physical resemblance and causality).
The category-representation paper proposes an architecture for such a
hybrid system.
Stevan Harnad
princeton!mind!harnad
------------------------------
End of AIList Digest
********************
∂07-Oct-86 1248 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #207
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 7 Oct 86 12:48:00 PDT
Date: Tue 7 Oct 1986 09:17-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #207
To: AIList@SRI-STRIPE
AIList Digest Tuesday, 7 Oct 1986 Volume 4 : Issue 207
Today's Topics:
Seminars - Cross-Talk in Mental Operations (UCB) &
Deductive Databases (UPenn) &
Concept Acquisition in Noisy Environments (SRI) &
Prolog without Horns (CMU) &
Knowledge Engineering and Ontological Structure (SU),
Conferences - AAAI-87 Tutorials &
1st Conf. on Neural Networks &
Workshop on Qualitative Physics
----------------------------------------------------------------------
Date: Mon, 6 Oct 86 15:38:02 PDT
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program)
Subject: Seminar - Cross-Talk in Mental Operations (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237A
Tuesday, October 14, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
2515 Tolman Hall
``Cross-Talk and Backward Processing in Mental Operations''
Daniel Kahneman
Psychology Department
There are many indications that we have only imperfect
control of the operations of our mind. It is common to compute
far more than is necessary for the task at hand. An operation
of cleaning up and inhibiting inappropriate responses is
often required, and this operation is often only partially
successful. For example, we cannot stop ourselves from reading
words that we attend to; when asked to assess the similarity of
two objects on a specified attribute we apparently compute many
similarity relations in addition to the requisite one. The
prevalence of such cross-talk has significant implications for
a psychologically realistic notion of meaning and for the
interpretation of incoherence in judgments.
A standard view of cognitive function is that the objects
and events of experience are assimilated, more or less
successfully, to existing schemas and expectations. Some
perceptual and cognitive phenomena seem to fit another model,
in which objects and events elicit their own context and define
their own alternatives. Surprise, for example, is better viewed
as a failure to make sense of an event post hoc than as a
violation of expectations. Some rules by which events evoke
counterfactual alternatives to themselves will be described.
------------------------------
Date: Sun, 5 Oct 86 11:15 EDT
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Deductive Databases (UPenn)
3:00pm, Tuesday, October 7, 1986
23 Moore School, University of Pennsylvania
EFFICIENT DEDUCTIVE DATABASES
WILL THEY EVER BE CONSTRUCTED?
Tomasz Imielinski
Rutgers University
The area of deductive databases is a rapidly growing field concerned with
enhancing traditional relational databases with automated deduction
capabilities. Because of the large amounts of data involved, the
complexity issues become critical. We present a number of results related to
the complexity of query processing in deductive databases, both with
complete and incomplete information.
In an attempt to answer the question of whether efficient deductive databases
will ever be constructed we demonstrate an idea of the "deductive database of
the future". In such a system the concept of an answer to a query is tailored
to the various limitations of computational resources.
------------------------------
Date: Mon 6 Oct 86 16:25:34-PDT
From: Joani Ichiki <ICHIKI@SRI-STRIPE.ARPA>
Subject: Seminar - Concept Acquisition in Noisy Environments (SRI)
L. Saitta (Dipartimento di Informatica, Universita di Torino, Italy)
will present his talk entitled, "AUTOMATED CONCEPT ACQUISITION IN
NOISY ENVIRONMENTS," 10/7/86 in EK242 at 11:00am. Abstract follows.
This paper presents a system which performs automated concept
acquisition from examples and has been especially designed to work in
errorful and noisy environments.
The adopted learning methodology is aimed at the target problem of
finding discriminant descriptions of a given set of concepts; both
examples and counterexamples are used.
The learning knowledge is expressed in the form of production rules,
organized into separate clusters, linked together in a graph
structure; the condition part of the rules, corresponding to
descriptions of relevant aspects of the concepts, is expressed by
means of a first order logic based language, enriched with constructs
suitable to handle uncertainty and vagueness and to increase
readability by a human user. A continuous-valued semantics is
associated with this language, and each rule carries a certainty
factor.
Learning is considered as a cyclic process of knowledge extraction,
validation and refinement; the control of the cycle is left to the
teacher.
Knowledge extraction proceeds through a process of specialization,
rather than generalization, and utilizes a technique of problem
reduction to contain the computational complexity. Moreover, the
search strategy is strongly focused by means of task-oriented but
domain-independent heuristics that try to emulate the learning
mechanism of a human being faced with finding discrimination rules
from a set of examples.
Several criteria are proposed for evaluating the acquired knowledge;
these criteria are used to guide the process of knowledge refinement.
The methodology has been tested on a problem in the field of speech
recognition, and the experimental results obtained are reported and
discussed.
------------------------------
Date: 6 October 1986 1411-EDT
From: Peter Andrews@A.CS.CMU.EDU
Subject: Seminar - Prolog without Horns (CMU)
The following talk will be given in the Seminar on Automated
Reasoning Wednesday, Oct. 15, at 4:30p.m. in room PH125C. The talk
is independent of preceding material in the seminar.
Prolog without Horns
D. W. Loveland
An extension to Prolog is defined that handles non-Horn clause sets
(programs) in a manner closer to standard Prolog than previously
proposed. Neither the negation symbol nor a symbol for false is
formally introduced in the system, although the system is
conjectured to be propositionally complete. The intention of the
extension is to provide processing of "nearly Horn" programs with
minimal deviation from the Prolog format. Although knowledge of
Prolog is not essential, some prior exposure to Prolog will be helpful.
------------------------------
Date: Mon 6 Oct 86 16:55:52-PDT
From: Lynne Hollander <HOLLANDER@SUMEX-AIM.ARPA>
Subject: Seminar - Knowledge Engineering and Ontological Structure (SU)
SIGLUNCH
Title: KNOWLEDGE ENGINEERING AS THE INVESTIGATION OF
ONTOLOGICAL STRUCTURE
Speaker: Michael J. Freiling
Computer Research Laboratory
Tektronix Laboratories
Place: Chemistry Gazebo
Time: 12:05-1:15, Friday, October 10
Experience has shown that much of the difficulty of learning to build
knowledge-based systems lies in designing representation structures that
adequately capture the necessary forms of knowledge. Ontological analysis
is a method we have found quite useful at Tektronix for analyzing and
designing knowledge-based systems. The basic approach of ontological
analysis is a step-by-step construction of knowledge structures beginning
with simple objects and relationships in the task domain, and continuing
through representations of state, state transformations, and heuristics
for selecting transformations. Formal tools that can be usefully employed
in ontological analysis include domain equations, semantic grammars, and
full-scale specification languages. The principles and tools of
ontological analysis are illustrated with actual examples from
knowledge-based systems we have built or analyzed with this method.
------------------------------
Date: Mon 29 Sep 86 10:39:41-PDT
From: William J. Clancey <CLANCEY@SUMEX-AIM.ARPA>
Subject: AAAI-87 Tutorials
AAAI-87 Tutorials -- Request for Proposals
Tutorials will be presented at AAAI-87/Seattle on Monday, Tuesday, and
Thursday, July 13, 14, and 16. Anyone interested in presenting a tutorial
on a new or standard topic should contact the Tutorial Chair, Bill Clancey.
Topic suggestions from tutorial attendees are also welcome.
Potential speakers should submit a brief resume covering relevant background
(primarily teaching experience) and any available examples of work (ideally,
a published tutorial-level article on the subject). In addition, those
people suggesting a new or revised topic should offer a 1-page summary of
the idea, outlining the proposed subject and depth of coverage, identifying
the necessary background, and indicating why it is felt that the topic would
be well attended.
With regard to new courses, please keep in mind that tutorials are intended
to provide dissemination of reasonably well-agreed-upon information, that
is, there should be a substantial body of accepted material. We especially
encourage submission of proposals for new advanced topics, which in 1986
included "Qualitative Simulation," "AI Machines," and "Uncertainty
Management."
Decisions about topics and speakers will be made by November 1. Speakers
should be prepared to submit completed course material by December 15.
Bill Clancey
Stanford Knowledge Systems Laboratory
701 Welch Road, Building C
Palo Alto, CA 94304
Clancey@SUMEX
------------------------------
Date: Tue, 30 Sep 86 11:43:56 pdt
From: mikeb@nprdc.arpa (Mike Blackburn)
Subject: 1st Conf. on Neural Networks
CONFERENCE ANNOUNCEMENT: FIRST ANNUAL
INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
San Diego, California
21-24 June 1987
The San Diego IEEE Section welcomes neural network
enthusiasts in industry, academia, and government world-wide
to participate in the inaugural annual ICNN conference in
San Diego.
Papers are solicited on the following topics:
* Network Architectures * Learning Algorithms * Self-
Organization * Adaptive Resonance * Dynamical Network
Stability * Neurobiological Connections * Cognitive
Science Connections * Electrical Neurocomputers * Opti-
cal Neurocomputers * Knowledge Processing * Vision *
Speech Recognition & Synthesis * Robotics * Novel
Applications
Contributed Papers: Extended abstracts should be submitted by
1 February 1987 for conference presentation. Abstracts
must be single spaced, three to four pages on 8.5 x 11 inch
paper with 1.5 inch margins. Abstracts will be carefully
refereed. Accepted abstracts will be distributed at the
conference. Final papers are due 1 June 1987.
FINAL RELEASE OF ABSTRACTS AND PAPERS WITH RESPECT TO
PROPRIETARY RIGHTS AND CLASSIFICATION MUST BE OBTAINED
BEFORE SUBMITTAL.
Address all Correspondence to: Maureen Caudill - ICNN
10615G Tierrasanta Blvd. Suite 346, San Diego, CA 92124.
Registration Fee: $350 if received by 1 December 1986, $450
thereafter.
Conference Venue: Sheraton Harbor Island Hotel (approx. $95
- single), space limited, phone (619) 291-6400. Other lodg-
ing within 10 minutes.
Tutorials and Exhibits: Several Tutorials are Planned. Ven-
dor Exhibit Space Available - make reservations early.
Conference Chairman: Stephen Grossberg
International Chairman: Teuvo Kohonen
Organizing Committee: Kunihiko Fukushima, Clark Guest,
Robert Hecht-Nielsen, Morris Hirsch, Bart Kosko (Chairman
619-457-5550), Bernard Widrow.
September 30, 1986
------------------------------
Date: 5 Oct 1986 13:16 EDT (Sun)
From: "Daniel S. Weld" <WELD%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Workshop on Qualitative Physics
Call for Participation
Workshop on Qualitative Physics
May 27-29, 1987
Urbana, Illinois
Sponsored by:
the American Association for Artificial Intelligence
and
Qualitative Reasoning Group
University of Illinois at Urbana-Champaign
Organizing Committee:
Ken Forbus (University of Illinois)
Johan de Kleer (Xerox PARC)
Jeff Shrager (Xerox PARC)
Dan Weld (MIT AI Lab)
Objectives:
Qualitative Physics, the subarea of artificial intelligence concerned with
formalizing reasoning about the physical world, has become an important and
rapidly expanding topic of research. The goal of this workshop is to
provide an opportunity for researchers in the area to communicate results
and exchange ideas. Relevant topics of discussion include:
-- Foundational research in qualitative physics
-- Implementation techniques
-- Applications of qualitative physics
-- Connections with other areas of AI
(e.g., machine learning, robotics)
Attendance: Attendance at the workshop will be limited in order to maximize
interaction. Consequently, attendance will be by invitation only. If you
are interested in attending, please submit an extended abstract (no more
than six pages) describing the work you wish to present. The extended
abstracts will be reviewed by the organizing committee. No proceedings will
be published; however, a selected subset of attendees will be invited to
contribute papers to a special issue of the International Journal of
Artificial Intelligence in Engineering.
Requirements: The deadline for submitting extended abstracts is February
10th. On-line submissions are not allowed; hard copy only please. Since
no proceedings will be produced, abstracts describing papers submitted to
AAAI-87 are acceptable. Invitations will be sent out on March 1st. Please
send 6 copies of your extended abstracts to:
Kenneth D. Forbus
Qualitative Reasoning Group
University of Illinois
1304 W. Springfield Avenue
Urbana, Illinois, 61801
------------------------------
End of AIList Digest
********************
∂09-Oct-86 0301 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #208
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Oct 86 03:01:22 PDT
Date: Thu 9 Oct 1986 00:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #208
To: AIList@SRI-STRIPE
AIList Digest Thursday, 9 Oct 1986 Volume 4 : Issue 208
Today's Topics:
Bibliography - News and Recent Articles
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: News and Recent Articles
%A Paul A. Eisenstein
%T Detroit Finds Robots Aren't Living Up to Expectations
%J Investor's Daily
%D April 21, 1986
%P 12
%K AI07 Chrysler General Motors AA25
%X Chrysler said that automation was one of the major reasons productivity
doubled since 1980. GM's Lake Orion plant, a "factory of the future" with 157
automated robots, instead of providing the best quality and productivity of
any GM plant, is providing the lowest.
Two other plants have been giving GM the same problems.
%A Mary Petrosky
%T Expert Software Aids Large System Design
%J InfoWorld
%D FEB 17, 1986
%V 8
%N 7
%P 1+
%K AA08 AI01 H01 AT02 AT03 Arthur Young Knowledge Ware
%X Knowledge-Ware is selling the Information Engineering Workbench
which provides tools to support developing business programs. It has
features for supporting entity diagrams, data flow diagrams, etc. I
cannot find any indication from this article where AI is actually
used.
%A John Gantz
%T No Market Developing for Artificial Intelligence
%J InfoWorld
%D FEB 17, 1986
%V 8
%N 7
%P 27
%K AT04 AT14
%X D. M. Data predicts that the market for AI software will be $605
million this year and $2.65 billion in 1990. Arthur D. Little says it
might be twice this. He argues that when you look at the companies, most
of them are selling primarily to research market and not to the commercial
data processing market. Intellicorp had $3.3 million in revenues for the 1984-
1985 fiscal year and made a profit. However, a full third of its systems
go to academics and 20 percent go to Sperry for use in its own AI labs.
%A Jay Eisenlohr
%T Bug Debate
%J InfoWorld
%D FEB 17, 1986
%V 8
%N 7
%P 58
%K AT13 AT12 Airus AI Typist AT03
%X Response to harsh review of AI Typist by Infoworld from an employee
of the company selling it.
%A Eddy Goldberg
%T AI offerings Aim to Accelerate Adoption of Expert Systems
%J Computerworld
%D MAY 26, 1986
%V 20
%N 21
%P 24
%K Teknowledge Carnegie Group Intel Hypercube Gold Hill Common Lisp AT02
H03 T03 T01
%X Teknowledge has rewritten S.1 in the C language. Intel has introduced
Concurrent Common Lisp for its hypercube-based machine.
%T New Products/Microcomputers
%J Computerworld
%D MAY 26, 1986
%V 20
%N 21
%P 94
%K AT04 AI06 H01 Digital Vision Computereyes
%X Digital Vision introduced Computereyes video acquisition system for IBM PC.
Cost is $249.95 without camera and $529.95 with one.
%T New Products/Software and Services
%J Computerworld
%D MAY 26, 1986
%V 20
%N 21
%P 90
%K T03 AT02
%X LS/Werner has introduced a package containing four expert system tools for
$1995. A guide to AI is also included.
%A Douglas Barney
%T AT&T Conversant Systems Unveils Voice Recognition Model
%J ComputerWorld
%D APR 21, 1986
%V 20
%N 16
%P 13
%K AI05 AT02
%X AT&T Conversant Systems has two products to do speech recognition, the
Model 80 which handles 80 simultaneous callers for $50,000 to $100,000 while
the Model 32 costs between $25,000 and $50,000 and handles 32 simultaneous
callers. It handles "yes," "no" and the numbers zero through nine.
%A Charles Babcock
%A James Martin
%T MSA Users Give High Marks, Few Dollars to Information Expert
%J ComputerWorld
%D APR 21, 1986
%V 20
%N 16
%P 15
%K AA06 AT03
%X MSA has a product called Information Expert which integrates a variety
of business applications through a shared dictionary and also provides
reporting. However, the 'expert system components' failed to live up
to the "standard definition of expert systems."
%A Alan Alper
%T IBM Trumpets Experimental Speech Recognition System
%J ComputerWorld
%D APR 21, 1986
%V 20
%N 16
%P 25+
%K AI05 H01 Dragon Systems Kurzweil Products
%X IBM's speech recognition system can recognize utterances in real time
from a 5000 word pre-programmed vocabulary and can transcribe sentences
with 95 per cent accuracy. The system may become a product. It can handle
office correspondence in its present form. The system requires that the
user speak slowly and with pauses. The system runs on a PC/AT with specialized
speech-recognizing circuits. Kurzweil Applied Intelligence has a 1000-word
recognition system selling for $65,000 that has been delivered
to several hundred customers. They have working prototypes of systems with
5000-word vocabularies which require only a 1/10-second pause. Dragon
Systems has a system that can recognize up to 1000 words.
%A Stephen F. Fickas
%T Automating the Transformational Development of Software
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1268-1277
%K AA08 Glitter package routing
%X Describes a system to automate the selection of transformations to
be applied in creating a program from a specification. Goes through an
example to route packages through a network consisting of binary trees.
%A Douglas R. Smith
%A Gordon B. Kotik
%A Stephen J. Westwold
%T Research on Knowledge-Based Software Environments at Kestrel Institute
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1278-1295
%K AA08 CHI
%X Describes the CHI project. REFINE, developed by Reasoning Systems Inc.,
is based on the principles and ideas demonstrated in the CHI prototype.
CHI has bootstrapped itself. This system is a transformation-based
system. The specification language, V, takes 1/5 to 1/10 as many lines
as the program being specified would take if written in LISP.
%A Richard C. Waters
%T The Programmer's Apprentice: A Session with KBEmacs
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1296-1320
%K AA08 Ada Lisp
%X This system, which uses plans to work hand-in-hand with a programmer
in constructing a piece of software, is now being used to work with
Ada programs. The example used is that of a simple report. Currently,
KBEmacs knows only a few dozen types of plans out of a few hundred to a
few thousand for real work. Some operations take five minutes, but it is
expected that a speedup by a factor of 30 could be done by straightforward
operations. It is currently 40,000 lines of LISP code.
%A David R. Barstow
%T Domain-Specific Automatic Programming
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1321-1336
%K AA08 AA03 well-log Schlumberger-Doll
%X Describes a system that writes programs to do
well-log interpretation. The system contains knowledge about well logs
as well as about programming.
%A Robert Neches
%A William R. Swartout
%A Johanna D. Moore
%T Enhanced Maintenance and Explanation of Expert Systems Through Explicit
Models of Their Development
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1337-1350
%K AA08 AI01
%X Describes a system for applying various transformations to improve
readability of a LISP program. Also discusses techniques for providing
explanation of the operation of the LISP machine by looking at data
structures created as the expert system is built.
%A Beth Adelson
%A Elliot Soloway
%T The Role of Domain Experience in Software Design
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1351-1360
%K AA08 AI08
%X discusses protocol analysis of designers designing software systems.
Tries to show the effect of previous experience in the domain on these
operations
%A Elaine Kant
%T Understanding and Automating Algorithm Design
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1361-1374
%K AA08 AI08
%X protocol analysis on algorithm designers faced with the convex hull
problem. Discussion of AI programs to design algorithms.
%A David M. Steier
%A Elaine Kant
%T The Roles of Execution and Analysis in Design
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1375-1386
%K AA08
%A J. Doyle
%T Expert Systems and the Myth of Symbolic Reasoning
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1386-1390
%K AI01 O02
%X compares traditional application development software engineering approaches
with those taken by the AI community
%A P. A. Subrahmanyam
%T The "Software Engineering" of Expert Systems: Is Prolog Appropriate?
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1391-1400
%K T02 O02 AI01
%X discusses developing expert systems in PROLOG
%A Daniel G. Bobrow
%T If Prolog is the Answer, What is the Question? or What it Takes to
Support AI Programming Paradigms
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 1401-1408
%K T02 AI01
%T Japanese Urge Colleges to Teach Programmers
%J InfoWorld
%D April 14, 1986
%V 8
%N 15
%P 18
%K GA01
%X "A panel of experts at the Japanese Ministry of Education has urged that
enrollment in computer software-related departments at Japanese universities
and colleges be doubled by 1992. The panel hopes to ensure that more systems
engineers and software specialists are trained to offset the shortage of
Japanese programmers. An estimated 600,000 additional programmers will be
needed by 1990, the panel projected."
%T Germans Begin AI Work with $53 Million Grant
%J InfoWorld
%D April 14, 1986
%V 8
%N 15
%P 18
%K Siemens West Germany GA03 AT19
%X The West German government will be giving $53.8 million in grants for AI
research.
%T Resources
%J InfoWorld
%D April 14, 1986
%V 8
%N 15
%P 19
%X New newsletter: "AI capsule", costing $195 a year for 12 issues.
Winters Group, Suite 920 Building, 14 Franklin Street, Rochester, New York 14604.
%J Electronic News
%V 32
%N 1603
%D MAY 26, 1986
%P 25
%K GA01 H02 T02 Mitsubishi
%X Mitsubishi Electric announces an AI workstation doing 40,000 Prolog LIPS
costing $118,941.
%T Image-Processing Module Works like a VMEBUS CPU
%J Electronics
%D JUN 16, 1986
%P 74
%V 59
%N 24
%K AI06 AT02 Datacube VMEbus Analog Devices
%X Product Announcement: VMEbus CPU card containing a digital signal-processing
chip supporting 8 MIPS
%T Robot Info Automatically
%J IEEE Spectrum
%D JAN 1986
%V 23
%N 1
%P 96
%K AT09 AT02 AI07
%X A robotics database of articles on robots is available on diskette.
Cost is $90.00 per year. Robotics Database, PO Box 3004-17, Corvallis, Ore. 97339.
%A John A. Adams
%T Aerospace and Military
%J IEEE Spectrum
%D JAN 1986
%V 23
%N 1
%P 76-81
%K AA19 AI06 AI07 AA18 AI01
%X DARPA's Autonomous Land Vehicle succeeded in guiding itself at 5
kilometers per hour using a vision system along a paved road.
%A Richard L. Henneman
%A William B. Rouse
%T On Measuring the Complexity of Monitoring and Controlling Large-Scale
Systems
%J IEEE Transactions on Systems, Man and Cybernetics
%V SMC-16
%N 2
%D March/April 1986
%P 193-207
%K AI08 AA20
%X discusses the effect of number of levels of hierarchy, redundancy and
number of nodes on a display page on the ability of human operators to find
errors in a simulated system
%A G. R. Dattatreya
%A L. N. Kanal
%T Adaptive Pattern Recognition with Random Costs and Its Applications to
Decision Trees
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 208-218
%K AI06 AA01 AI04 AI01 clustering spina bifida bladder radiology
%X applies clustering algorithm to results of reading radiographs of
the bladder. The system was able to determine clusters that corresponded
to those of patients with spina bifida.
%A Klaus-Peter Adlassnig
%T Fuzzy Set Theory in Medical Diagnosis
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 260-265
%K AA01 AI01 O04
%X They developed systems for diagnosing rheumatologic diseases and pancreatic
disorders. They achieved 94.5 and 100 percent accuracy, respectively.
%A William E. Pracht
%T GISMO: A Visual Problem Structuring and Knowledge-Organization Tool
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 265-270
%K AI13 AI08 Witkin Geft AA06
%X discusses the use of a system for displaying effect diagrams on
decision making in a simulated business environment. The tool improved
net income production. The tool provided more assistance to those
who were more analytical than to those who used heuristic reasoning as
measured by the Witkin GEFT.
%A Henri Farreny
%A Henri Prade
%T Default and Inexact Reasoning with Possibility Degrees
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 270-276
%K O04 AI01 AA06
%X discusses storing for each proposition, a pair consisting of the
probability that it is true
and probability that it is false where these two probabilities do not
necessarily add up to 1. Inference rules have been developed for such
a system including analogs to modus ponens, modus tollens and how to
combine two such ordered pairs applying to the same fact. These have
been applied to an expert system in financial analysis.
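The pair representation described above can be sketched in a few lines. This is a minimal illustration only: the propagation and combination rules shown (min-based modus ponens, componentwise maximum) are simple conventions chosen for the sketch, not necessarily the rules developed by Farreny and Prade.

```python
# Each proposition carries a pair (p_true, p_false); the two values need
# not sum to 1, leaving room for ignorance. The rules below are simple
# min/max illustrations, NOT the paper's actual inference rules.

def modus_ponens(fact, rule):
    """Given pairs for A and for A -> B, return a pair for B.

    Support for B is limited by both the support for A and the
    support for the rule; no support for not-B is derived here.
    """
    a_true, _a_false = fact
    r_true, _r_false = rule
    return (min(a_true, r_true), 0.0)

def combine(p, q):
    """Combine two pairs bearing on the same fact (componentwise max)."""
    return (max(p[0], q[0]), max(p[1], q[1]))

a = (0.8, 0.1)            # A: fairly certain true, weak evidence against
a_implies_b = (0.9, 0.0)  # the rule A -> B, strongly supported
print(modus_ponens(a, a_implies_b))      # (0.8, 0.0)
print(combine((0.6, 0.2), (0.4, 0.3)))   # (0.6, 0.3)
```

Note how the combined pair (0.6, 0.3) keeps evidence for and against the same fact without forcing them to sum to 1.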
%A Chelsea C. White, III
%A Edward A. Sykes
%T A User Preference Guided Approach to Conflict Resolution in
Rule-Based Expert Systems
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 276-278
%K AI01 multiattribute utility theory
%X discusses an application of multiattribute utility theory to
resolve conflicts between rules in an expert system.
%A David Bright
%T Chip Triggers Software Race
%J ComputerWorld
%V 20
%N 30
%D JUL 28, 1986
%P 1+
%K intel 80386 T01 T03 H01 Gold Hill Computers Arity Lucid T02 Hummingbird Franz
%X Gold Hill Computers, Franz, Arity, Lucid, Quintus and Teknowledge have agreed
to port their AI software to the 80386
%A David Bright
%T Voice-activated Writer's Block
%J ComputerWorld
%V 20
%N 30
%D JUL 28, 1986
%P 23+
%K AI05 Kurzweil Victor Zue
%X MIT's Victor Zue says that current voice recognition technology is not
ready to be extended to "complex tasks." They have been able to train
researchers to transcribe unknown sentences from spectrograms with 85%
success. A Votan survey showed that 87% of office workers require only
45 words to run their typical applications. Votan's add-in boards
can recognize 150 words at a time.
%A David Bright
%T Nestor Software Translates Handwriting to ASCII code
%J ComputerWorld
%V 20
%N 30
%D JUL 28, 1986
%P 23+
%K AI06 Brown University
%X Nestor has commercial software that converts handwriting entered via
a digitizing tablet into ASCII text. First user: a French insurance firm.
The system has been trained to recognize Japanese kanji characters and they will
develop a video system to read handwritten checks.
%A Namir Clement Shammas
%T Turbo Prolog
%J Byte
%D SEP 1986
%V 11
%N 9
%P 293-295
%K T02 H01 AT17
%X another review of Turbo-Prolog
%A Bruce Webster
%T Two Fine Products
%J Byte
%D SEP 1986
%V 11
%N 9
%P 335-347
%K T02 H01 AT17 Turbo-Prolog
%X yet another review of Turbo-Prolog
%A Karen Sorensen
%T Expert Systems Emerging as Real Tools
%J Infoworld
%V 8
%N 16
%P 33
%D APR 21, 1986
%K AI01 AT08
%A Rosemary Hamilton
%T MVS Gets Own Expert System
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 1
%K T03 IBM
%X IBM introduced expert system tools for the MVS operating system similar
to those already introduced for VM. The run-time system is $25,000 per month,
while the development environment is $35,000 per month.
%A Amy D. Wohl
%T On Writing Keynotes: Try Artificial Intelligence
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 17
%X tongue in cheek article about the "keynote" speech which appears at
many conferences. (Not really about AI)
%A Elisabeth Horwitt
%T Hybrid Net Management Pending
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 19
%K AA08 AI01 AT02 Symbolics Avant-Garde networks AA15 H02
%X Avant-Garde Computer is developing an interface to networks to assist in
their management. Soon there will be an expert system on a Symbolics
machine interfacing to it to assist the user of the system.
%T Software Notes
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 29+
%K ultrix DEC VAX AT02 T01
%X DEC has announced a supported version of VAX Lisp for Ultrix.
%A Jeffrey Tarter
%T Master Programmers: Insights on Style from Four of the Best
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 41+
%K Jeff Gibbons O02 Palladian AA06
%X contains information on Jeff Gibbons, a programmer at Palladian which
does financial expert systems
%T Software and Services
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 76
%K T02 Quintus PC/RT AT02
%X Quintus has ported its Prolog to the IBM PC/RT. It costs $8000.00
%T New Products/Microcomputers
%J ComputerWorld
%D APR 7, 1986
%V 20
%N 14
%P 81-82
%K AT02 AI06
%X ADS has announced a real-time digitizer for use with micros costing between
$15,000 and $25,000
%A David Bright
%T Datacopy Presents Text, Image Scanner for IBM PC Family
%J ComputerWorld
%D APR 28, 1986
%V 20
%N 17
%P 36
%K H02 AT02 AI06
%X For $2950 you can get an integrated text and image scanner which can
convert typewritten text to ASCII code. It can be trained to recognize
unlimited numbers of fonts. It can also be used to input 200 x 200 or 300 x 300
dot per inch resolution images.
%T Lisp to Separate Sales, Marketing
%J Electronic News
%P 27
%D APR 14, 1986
%V 32
%N 1597
%K H02 LMI AT11
%X Lisp Machines is separating sales and marketing. Ken Johnson, the former
vice-president of sales and marketing, has left LMI for VG Systems
%A Steven Burke
%T Englishlike 1-2-3 Interface Shown
%J InfoWorld
%D APR 28, 1986
%P 5
%V 8
%N 17
%K Lotus AI02 H01 AA15
%X Lotus is selling HAL, which allows users to access 1-2-3 using English
commands
%T TI Sells Japan Lisp Computer
%J Electronics
%D JUN 2, 1986
%P 60
%V 59
%N 22
%K GA02 GA01 H02 AT16
%X C. Itoh has agreed to market TI's Lisp Machine
%A Larry Waller
%T Tseng Sees Peril in Hyping of AI
%J Electronics
%D APR 21, 1986
%P 73
%V 59
%N 16
%K Hughes AT06 AI06 AI07
%X Interview with David Y. Tseng, head of the Exploratory
Studies Department at Malibu Research Laboratories.
%T Image Processor Beats 'Real Time'
%J Electronics
%P 54+
%D APR 14, 1986
%V 59
%N 15
%K AI06 AT02 H01 Imaging Technology
%X Imaging Technology's Series 151 will process an image
in 27 milliseconds and offers the user the ability to
select an area to be processed. It interfaces to a PC/AT.
It costs $11,495 with an optional convolution board for
$3,995.
%A A. P. Sage
%A C. C. White, III
%T Ariadne: A Knowledge Based Interactive System for Planning and Decision
Support
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 48-54
%K AI13
%A R. M. Hunt
%A W. B. Rouse
%T A Fuzzy Rule-Based Model of Human Problem Solving
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 112-119
%K AI08 AI01 AA21
%X attempt to develop a model of how people diagnose engine performance
%A I. B. Turksen
%A D. D. W. Yao
%T Representations of Connectives in Fuzzy Reasoning: The View Through
Normal Forms
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 146-151
%K O04
%A W. X. Xie
%A S. D. Bedrosian
%T An Information Measure for Fuzzy Sets
%J IEEE Transactions on Systems, Man, Cybernetics
%V SMC-14
%D JAN/FEB 1984
%N 1
%P 151-157
%K O04
%A S. Miyamoto
%A K. Nakayama
%T Fuzzy Information Retrieval Based on a Fuzzy Pseudothesaurus
%J IEEE Transactions on Systems, Man and Cybernetics
%V SMC-16
%N 2
%D MAR/APR 1986
%P 278-282
%K AA14 O04
%X A fuzzy bibliographic information retrieval system based on a fuzzy
thesaurus or on a fuzzy pseudothesaurus is described. A fuzzy thesaurus
consists of two fuzzy relations defined on a set of keywords for the
bibliography. The fuzzy relations are generated from a fuzzy set model,
which describes the association of each keyword with its concepts. If the
set of concepts in the fuzzy set model is replaced by the set of documents,
the fuzzy relations are called a pseudothesaurus, which is generated
automatically using occurrence frequencies of the keywords in the set of
documents. The fuzzy retrieval uses two additional fuzzy relations,
a fuzzy index and a fuzzy inverted file; the latter is the
inverse relation of the former, but they correspond to different
algorithms for indexing and retrieval, respectively. An algorithm for
ordering retrieved documents according to the values of the fuzzy
thesaurus is proposed. This method of ordering is optimal in the
sense that one can obtain documents of maximum relevance in a fixed time
interval.
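The scheme in the abstract above, expanding a query keyword through a fuzzy thesaurus relation and matching it against a fuzzy index of documents, can be illustrated with a toy sketch. The max-min composition and all the data below are my own illustration, not the paper's exact algorithm or examples.

```python
# Toy fuzzy retrieval: a fuzzy thesaurus relation T over keywords and a
# fuzzy index I from documents to keywords, combined by max-min
# composition. Illustrative only; not the paper's actual method.

# T[k1][k2]: degree to which keyword k2 is related to keyword k1
T = {
    "robot":  {"robot": 1.0, "automation": 0.8, "vision": 0.3},
    "vision": {"vision": 1.0, "robot": 0.3},
}

# I[doc][k]: degree to which document doc is indexed by keyword k
I = {
    "d1": {"robot": 0.9, "automation": 0.2},
    "d2": {"vision": 0.7, "robot": 0.4},
}

def retrieve(query_keyword):
    """Score each document by max-min composition of thesaurus and index,
    then return documents ordered by decreasing score."""
    related = T.get(query_keyword, {})
    scores = {}
    for doc, index in I.items():
        scores[doc] = max(
            (min(deg, index.get(k, 0.0)) for k, deg in related.items()),
            default=0.0,
        )
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(retrieve("robot"))  # d1 scores 0.9, d2 scores 0.4
```

Ordering the results by score mirrors the abstract's idea of retrieving the documents of maximum relevance first.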
------------------------------
End of AIList Digest
********************
∂09-Oct-86 0449 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #209
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Oct 86 04:49:14 PDT
Date: Thu 9 Oct 1986 00:36-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #209
To: AIList@SRI-STRIPE
AIList Digest Thursday, 9 Oct 1986 Volume 4 : Issue 209
Today's Topics:
Bibliographies - Correction and Future SMU Bibliography Labels &
Recent Kansas Technical Reports & UCLA Technical Reports
----------------------------------------------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Correction and Future SMU Bibliography Labels
[Lawrence Leff at SMU, who provides all those lengthy bibliographies and
article summaries, has sent the following correction for the Subject line
I added to one of the bibliographies. -- KIL]
ai.bib35 was mistitled as references on computer vision/robotics.
This reference list contained articles on such subjects as neural
networks, urban planning, logic programming, and theorem proving as
well as vision/robotics.
In order to prevent this problem in the future, I will be entitling the
materials as ai.bibnnxx
where nn is a consecutive number and
xx is C for citations without descriptions
TR for technical reports
AB for citations with descriptions
(annotated bibliographies)
Thus ai.bib40C means the 40th AI list in bibliography format
and the C indicates that we have a bunch of bib format references
without significant commentary.
The nn is unique over all types of bibliographies. Thus, if there
were an ai.bib40C, then there will NOT be an ai.bib40TR or ai.bib40AB.
These designations are actually the file names for the list on my hard disk.
The shell script that wraps up the item for mailing will automatically put
the file name in the subject field. If one of your readers uses this to
designate a file in mail to me, I can thus trivially match their query
against a specific file.
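The ai.bibnnxx convention described above is mechanical enough to match
automatically. A hypothetical sketch of how a subject line could be parsed
into its number and type (the regular expression and function name are my
own, not part of the actual shell script):

```python
import re

# Parse the ai.bibnnxx naming scheme: nn is a consecutive number,
# xx is C (citations), TR (technical reports), or AB (annotated).
NAME_RE = re.compile(r"^ai\.bib(\d+)(C|TR|AB)$")

def parse_bib_name(name):
    """Return (number, type) for a valid name, or None otherwise."""
    m = NAME_RE.match(name)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)

print(parse_bib_name("ai.bib40C"))   # (40, 'C')
print(parse_bib_name("ai.bib35"))    # None -- old untyped style
```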
Note that I no longer will be separating out references by subject
matter. The keyword system is much more effective for allowing people
interested in specific subfields of AI to see the articles they find relevant.
Sadly, the bib system program "listrefs" is having problems with citations
that contain long abstracts or commentary information. Thus TR and AB
type references will probably cause this program to spec check. I spent
a whole day trying to isolate the problem but have been unsuccessful.
One other self-described bib expert has the same problem. All references
are indexable by "invert".
TR and AB type references will not use bib definition files and thus
are usable with the refer package from AT&T. If I were not to use bib
definition files with C type reference lists, the number of bytes transmitted
for their mailing would triple.
------------------------------
Date: Fri, 5 Sep 86 15:05:40 CDT
From: Glenn Veach <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Recent Kansas Technical Reports
Following is a list of technical reports which have recently
been issued by the department of Computer Science of The
University of Kansas in conjunction with research done in
the department's Artificial Intelligence Laboratory.
Requests for any and all Technical Reports from the Department of
Computer Science and its various laboratories at The University
of Kansas should be sent to the following address:
Linda Decelles, Office Manager
110 Strong Hall
Department of Computer Science
The University of Kansas
Lawrence, KS 66045
U.S.A.
%A Glenn O. Veach
%T The Belief of Knowledge: Preliminary Report
%I Department of Computer Science, The University of Kansas
%R TR-86-15
%X As various researchers have attempted to present logics which
capture epistemic concepts they have encountered several difficulties.
After surveying the critiques of past efforts we propose a logic which
avoids these same faults. We also closely explore fundamental issues
involved in representing knowledge in ideal and rational agents and
show how the similarities and differences are preserved in the logic
we present. Several examples are given as supporting evidence for our
conclusions. To be published in the proceedings of the 2nd Kansas
Conference: Knowledge-Based Software Development. 12 pp.
%A Glenn O. Veach
%T An Annotated Bibliography of Systems and Theory for Distributed
Artificial Intelligence
%I Department of Computer Science, The University of Kansas
%R TR-86-16
%X This paper summarizes, with extensive comment, the results of an
initial investigation of the work in distributed AI. Some forty-plus
articles representing the major schools of thought and development are
cited and commented upon.
%A Frank M. Brown
%T Semantical Systems for Intensional Logics Based on the Modal
Logic S5+Leib
%I Department of Computer Science, The University of Kansas
%R TR-86-17
%X This paper contains two new results. First it describes how
semantical systems for intensional logics can be represented in
the particular modal logic which captures the notion of logical
truth. In particular, Kripke semantics is developed from this
modal logic. The second result is the development in the modal
logic of a new semantical system for intensional logics called
B-semantics. B-semantics is compared to Kripke semantics and it
is suggested that it is a better system in a number of ways.
------------------------------
Date: Tue, 7 Oct 86 13:32:32 PDT
From: Judea Pearl <judea@LOCUS.UCLA.EDU>
Subject: new Technical Reports
The following technical reports are now available from the
Cognitive Systems Laboratory
Room 4712, Boelter Hall
University of California
Los Angeles, CA 90024
or: judea@locus.ucla.edu
←←←←←←←
Pearl, J., ``Bayes and Markov Networks: a Comparison of Two
Graphical Representations of Probabilistic Knowledge,'' Cognitive
Systems Laboratory Technical Report (R-46), September 1986.
ABSTRACT
This paper deals with the task of configuring effective graphical
representation for intervariable dependencies which are embedded
in a probabilistic model. It first uncovers the axiomatic basis
for the probabilistic relation ``x is independent of y, given
z,'' and offers it as a formal definition for the qualitative
notion of informational dependency. Given an initial set of such
independence relationships, the axioms established permit us to
infer new independencies by non-numeric, logical manipulations.
Using this axiomatic basis, the paper determines those properties
of probabilistic models that can be captured by graphical
representations and compares the characteristics of two such
representations, Markov Networks and Bayes Networks. A Markov
network is an undirected graph where the links represent
symmetrical probabilistic dependencies, while a Bayes network is
a directed acyclic graph where the arrows represent causal
influences or object-property relationships. For each of these
two network types, we establish: 1) a formal semantics of the
dependencies portrayed by the networks, 2) an axiomatic
characterization of the class of dependencies capturable by the
network, 3) a method of constructing the network from either hard
data or expert judgments and 4) a summary of properties relevant
to its use as a knowledge representation scheme in inference
systems.
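As a toy illustration of the Bayes-network half of this comparison (a
three-variable example invented here, not taken from the report), the joint
distribution of a directed acyclic graph factors into a product of local
conditionals, one per node given its parents:

```python
# Toy Bayes network A -> B, A -> C (invented example).
# The joint factors as P(a) * P(b|a) * P(c|a).

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}
P_C_given_A = {True: {True: 0.5, False: 0.5},
               False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    """Joint probability read off the network's local parameters."""
    return P_A[a] * P_B_given_A[a][b] * P_C_given_A[a][c]

# Sanity check: the eight configurations sum to 1.
total = sum(joint(a, b, c)
            for a in (True, False)
            for b in (True, False)
            for c in (True, False))
print(total)
```

Here the arrows encode that B and C are independent given A, the kind of
qualitative dependency statement the paper axiomatizes.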
←←←←←←←
Zukerman, I. & Pearl, J., ``Comprehension-Driven Generation of
Meta-Technical Utterances in Math Tutoring,'' UCLA Computer
Science Department Technical Report CSD-860097 (R-61).
ABSTRACT
A technical discussion often contains conversational expressions
like ``however,'' ``as I have stated before,'' ``next,'' etc.
These expressions, denoted Meta-Technical Utterances (MTUs), carry
important information which the listener uses to speed up the
comprehension process. In this research we model the meaning of
MTUs in terms of their anticipated effect on the listener's
comprehension, and use these predictions to select MTUs and weave
them into a computer generated discourse. This paradigm was
implemented in a system called FIGMENT, which generates
commentaries on the solution of algebraic equations.
←←←←←←←
Pearl, J., ``Jeffrey's Rule and the Problem of Autonomous
Inference Agents,'' UCLA Cognitive Systems Laboratory Technical
Report (R-62), UCLA CSD #860099, June 1986.
ABSTRACT
Jeffrey's rule of belief revision was devised by philosophers to
replace Bayes conditioning in cases where the evidence cannot be
articulated propositionally. This paper shows that unqualified
application of this rule often leads to paradoxical conclusions,
and that to determine whether or not the rule is valid in any
specific case, one must first have topological knowledge about
one's belief structure. However, if such topological knowledge
is, indeed, available, belief updating can be done by traditional
Bayes conditioning; thus arises the question of whether it is
ever necessary to use Jeffrey's rule in formalizing belief
revision.
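Jeffrey's rule itself is simple to state: if the evidence shifts the
probabilities of a partition {E_i} from P(E_i) to Q(E_i), the revised belief
in any proposition A is Q(A) = sum_i P(A|E_i) Q(E_i), with the conditionals
P(A|E_i) held fixed. A minimal sketch with invented numbers:

```python
# Jeffrey's rule of belief revision: Q(A) = sum_i P(A|E_i) * Q(E_i).
# The partition, conditionals, and revised weights below are invented.

def jeffrey_update(p_A_given_E, q_E):
    """Revised probability of A after the partition's weights shift to q_E."""
    return sum(p_A_given_E[e] * q_E[e] for e in q_E)

p_A_given_E = {"E1": 0.8, "E2": 0.1}  # P(A | E_i), from the old belief state
q_E = {"E1": 0.6, "E2": 0.4}          # revised probabilities of the partition
print(jeffrey_update(p_A_given_E, q_E))  # 0.52
```

The paradoxes the paper discusses arise when the conditionals P(A|E_i) do
not in fact stay fixed under the evidence, which is a topological property
of the belief structure.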
←←←←←←←
Pearl, J., ``Distributed Revision of Belief Commitment in Multi-
Hypotheses Interpretation,'' UCLA Computer Science Department
Technical Report CSD-860045 (R-64), June 1986; presented at the
2nd AAAI Workshop on Uncertainty in Artificial Intelligence,
Philadelphia, PA., August 1986.
ABSTRACT
This paper extends the applications of belief-networks models to
include the revision of belief commitments, i.e., the categorical
instantiation of a subset of hypotheses which constitute the most
satisfactory explanation of the evidence at hand. We show that,
in singly-connected networks, the most satisfactory explanation
can be found in linear time by a message-passing algorithm
similar to the one used in belief updating. In multiply-
connected networks, the problem may be exponentially hard but, if
the network is sparse, topological considerations can be used to
render the interpretation task tractable. In general, finding
the most probable combination of hypotheses is no more complex
than computing the degree of belief for any individual
hypothesis.
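For the singly-connected case, the linear-time flavor of such message
passing can be illustrated on a simple chain by Viterbi-style max-product
dynamic programming (a generic sketch of the idea, not the report's
algorithm; the toy potentials are invented):

```python
# Max-product message passing on a chain x1 - x2 - ... - xn:
# find the most probable joint assignment in time linear in n.

def mpe_chain(unary, pairwise):
    """unary: list of {value: prob}; pairwise: list of {(v, w): prob}."""
    n = len(unary)
    msg = [dict() for _ in range(n)]   # best probability reaching each value
    back = [dict() for _ in range(n)]  # backpointers for decoding
    msg[0] = dict(unary[0])
    for i in range(1, n):
        for w in unary[i]:
            best_v = max(unary[i - 1],
                         key=lambda v: msg[i - 1][v] * pairwise[i - 1][(v, w)])
            msg[i][w] = (msg[i - 1][best_v]
                         * pairwise[i - 1][(best_v, w)] * unary[i][w])
            back[i][w] = best_v
    last = max(msg[n - 1], key=msg[n - 1].get)
    assignment = [last]
    for i in range(n - 1, 0, -1):
        assignment.append(back[i][assignment[-1]])
    return list(reversed(assignment))

print(mpe_chain([{0: 0.6, 1: 0.4}, {0: 0.5, 1: 0.5}],
                [{(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}]))
```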
←←←←←←←
Geffner, H. & Pearl, J., ``A Distributed Approach to Diagnosis,''
UCLA Cognitive Systems Laboratory Technical Report (R-66),
October 1986.
ABSTRACT
The paper describes a distributed scheme for finding the most
likely diagnosis of systems with multiple faults. The scheme
uses the independencies embedded in a system to decompose the
task of finding a best overall interpretation into smaller sub-
tasks of finding the best interpretations for subparts of the
net, then combining them together. This decomposition yields a
globally-optimum diagnosis by local and concurrent computations
using a message-passing algorithm. The proposed scheme offers a
drastic reduction in complexity compared with other
methods: attaining linear time in singly-connected networks and,
at worst, exp ( | cycle-cutset | ) time in multiply-connected
networks.
←←←←←←←
Pearl, J., ``Evidential Reasoning Using Stochastic Simulation of
Causal Models,'' UCLA Cognitive Systems Laboratory Technical
Report (R-68-I), October 1986.
ABSTRACT
Stochastic simulation is a method of computing probabilities by
recording the fraction of time that events occur in a random
series of scenarios generated from some causal model. This paper
presents an efficient, concurrent method of conducting the
simulation which guarantees that all generated scenarios will be
consistent with the observed data. It is shown that the
simulation can be performed by purely local computations,
involving products of parameters given
with the initial specification of the model. Thus, the method
proposed renders stochastic simulation a powerful technique of
coherent inferencing, especially suited for tasks involving
complex, non-decomposable models where ``ballpark'' estimates of
probabilities will suffice.
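The underlying idea of stochastic simulation can be shown with plain forward
sampling. (The report's contribution is a local scheme in which every
generated scenario is already consistent with the evidence; the toy below
instead discards inconsistent scenarios, and its model and numbers are
invented.)

```python
import random

# Estimate a probability as the fraction of sampled scenarios in which
# an event occurs, conditioning on observed evidence by rejection.

random.seed(0)

def estimate(n=100000):
    hits = trials = 0
    for _ in range(n):
        rain = random.random() < 0.2
        wet = random.random() < (0.9 if rain else 0.1)
        if wet:                      # keep only scenarios matching the evidence
            trials += 1
            hits += rain
    return hits / trials             # approximates P(rain | wet) ~ 0.69

print(round(estimate(), 2))
```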
←←←←←←←
Pearl, J., ``Legitimizing Causal Reasoning in Default Logics''
(note), UCLA Cognitive Systems Laboratory Technical Report (R-
69), September 1986.
ABSTRACT
The purpose of this note is to draw attention to certain aspects
of causal reasoning which are pervasive in ordinary discourse
yet, based on the author's scan of the literature, have not
received due treatment by logical formalisms of common-sense
reasoning. In a nutshell, it appears that almost every default
rule falls into one of two categories: expectation-evoking or
explanation-evoking. The former describes association among
events in the outside world (e.g., Fire is typically accompanied
by smoke.); the latter describes how we reason about the world
(e.g., Smoke normally suggests fire.). This distinction is
clearly and reliably recognized by all people and serves as an
indispensable tool for controlling the invocation of new default
rules. This note questions the ability of formal systems to
reflect common-sense inferences without acknowledging such
distinction and outlines a way in which the flow of causation can
be summoned within the formal framework of default logic.
←←←←←←←
Dechter, R. & Pearl, J., ``The Cycle-Cutset Method for Improving
Search Performance in AI Applications,'' UCLA Cognitive Systems
Laboratory Technical Report (R-67); submitted to the 3rd IEEE
Conference on Artificial Intelligence Applications.
ABSTRACT
This paper introduces a new way of improving search performance by
exploiting an efficient method available for solving tree-structured
problems. The scheme is based on the following observation: If, in
the course of a backtrack search, we remove from the constraint-graph
the nodes corresponding to instantiated variables and find that the
remaining subgraph is a tree, then the rest of the search can be
completed in linear time. Thus, rather than continue the search
blindly, we invoke a tree-searching algorithm tailored to the topology
of the remaining subproblem. The paper presents this method in detail
and evaluates its merit both theoretically and experimentally.
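The key test in this scheme is whether the constraint graph, minus the
instantiated variables, is cycle-free, so that a tree algorithm can finish
the search in linear time. A hypothetical union-find sketch of that test
(illustrative only, not the paper's code):

```python
# Is the constraint graph a forest once the instantiated variables
# (a candidate cycle-cutset) are removed?

def remaining_is_forest(nodes, edges, instantiated):
    rest = set(nodes) - set(instantiated)
    parent = {v: v for v in rest}

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        if u in rest and v in rest:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False          # this edge closes a cycle
            parent[ru] = rv
    return True

# A 4-cycle a-b-c-d-a: removing any one node leaves a tree.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(remaining_is_forest("abcd", edges, []))     # False
print(remaining_is_forest("abcd", edges, ["a"]))  # True
```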
------------------------------
End of AIList Digest
********************
∂09-Oct-86 0739 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #210
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Oct 86 07:38:51 PDT
Date: Thu 9 Oct 1986 00:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #210
To: AIList@SRI-STRIPE
AIList Digest Thursday, 9 Oct 1986 Volume 4 : Issue 210
Today's Topics:
Conferences - Expert Systems in Government &
IEEE Systems, Man and Cybernetics
----------------------------------------------------------------------
Date: Wed, 01 Oct 86 13:03:52 -0500
From: Duke Briscoe <duke@mitre.ARPA>
Subject: Final Program for Expert Systems in Government Conference
The Second Annual Expert Systems in Government Conference, sponsored by
the Mitre Corporation and the IEEE Computer Society in association with
the AIAA National Capital Section will be held October 20-24, 1986 at
the Tyson's Westpark Hotel in McLean, VA. There is still time to register,
but late registration charges will be added after October 6.
October 20-21 Tutorials
Monday, October 20
Full Day Tutorial: Advanced Topics in Expert Systems
by Kamran Parsaye, IntelligenceWare, Inc.
Morning Tutorial: Knowledge Base Design for Rule Based Expert Systems
by Casimir Kulikowski, Rutgers University
Afternoon Tutorial: Knowledge Base Acquisition and Refinement
by Casimir Kulikowski, Rutgers University
Tuesday, October 21
Morning Tutorial: Distributed Artificial Intelligence
by Barry Silverman, George Washington University
Morning Tutorial: Introduction to Common Lisp
by Roy Harkow, Gold Hill
Afternoon Tutorial: Lisp for Advanced Users
by Roy Harkow, Gold Hill
Afternoon Tutorial: The Management of Expert System Development
by Nancy Martin, Softpert Systems
October 22-24 Technical Program
Wednesday, October 22
9 - 10:30
Conference Chairman's Welcome
Keynote Address: Douglas Lenat, MCC
Program Agenda
11am - 12pm
Track A: Military Applications I
K. Michels, J. Burger; Missile and Space Mission Determination
Major R. Bahnij, Major S. Cross;
A Fighter Pilot's Intelligent Aide for Tactical Mission Planning
Track B: Systems Engineering
R. Entner, D. Tosh; Expert Systems Architecture for Battle Management
H. Hertz; An Attribute Referenced Production System
B. Silverman; Facility Advisor: A Distributed Expert System Testbed for
Spacecraft Ground Facilities
12pm - 1pm Lunch, Distinguished Guest Address,
Harry Pople, University of Pittsburgh
1pm - 2:30pm
Track A: Knowledge Acquisition
G. Loberg, G. Powell
Acquiring Expertise in Operational Planning: A Beginning
J. Boose, J. Bradshaw; NeoETS: Capturing Expert System Knowledge
K. Kitto, J. Boose; Heuristics for Expertise Transfer
M. Chignell; The Use of Ranking and Scaling in Knowledge Acquisition
Track B: Expert Systems in the Nuclear Industry
D. Sebo et al.; An Expert System for USNRC Emergency Response
D. Corsberg; An Object-Oriented Alarm Filtering System
J. Jenkins, W. Nelson; Expert Systems and Accident Management
3pm - 5pm
Track A: Expert Systems Applications I
W. Vera, R. Bolczac; AI Techniques Applied to Claims Processing
R. Tong, et al.; An Object-Oriented System for Information Retrieval
D. Niyogi, S. Srihari; A Knowledge-based System for Document Understanding
R. France, E. Fox; Knowledge Representation in Coder
Track B: Diagnosis and Fault Analysis
M. Taie, S. Srihari; Device Modeling for Fault Diagnosis
Z. Xiang, S. Srihari; Diagnosis Using Multi-level Reasoning
B. Dixon; A Lisp-Based Fault Tree Development Environment
Panel Track:
1pm - 5pm Management of Uncertainty in Expert Systems
Chair: Ronald Yager, Iona College
Participants: Lotfi Zadeh, UC Berkeley
Piero Bonnisone, G.E.
Laveen Kanal, University of Maryland
Peter Cheeseman, NASA-Ames Research Center
Prakash Shenoy, University of Kansas
Thursday, October 23
9am - 10:30am
Track A: Knowledge Acquisition and Applications
E. Tello; DIPOLE - An Integrated AI Architecture
H. Chung; Experimental Evaluation of Knowledge Acquisition Methods
H. Gabler; IGOR - An Expert System for Crash Trauma Assessment
K. Chhabra, K. Karna; Expert Systems in Electronic Filings
Track B: Aerospace Applications of Expert Systems
D. Zoch; A Real-time Production System for Telemetry Analysis
J. Schuetzle; A Mission Operations Planning Assistant
D. Brauer, P. Roach; Ada Knowledge Based Systems
F. Rook, T. Rubin; An Expert System for Conducting a
Satellite Stationkeeping Maneuver
Panel Track: Star Wars and AI
Chair: John Quilty, Mitre Corp.
Participants: Brian P. McCune, Advanced Decision Systems
Lance A. Miller, IBM
Edward C. Taylor, TRW
11am - 12pm
Plenary Address:
B. Chandrasekaran; The Future of Knowledge Acquisition
12pm - 1pm Lunch
1pm - 2:30pm
Track A: Inexact and Statistical Measures
K. Lecot; Logic Programs with Uncertainties
N. Lee; Fuzzy Inference Engines in Prolog/P-Shell
J. Blumberg; Statistical Entropy as a Measure of Diagnostic Uncertainty
Track B: High Level Tools for Expert Systems
S. Shum, J.Davis; Use of CSRL for Diagnostic Expert Systems
E. Dudzinski, J. Brink; CSRL: From Laboratory to Industry
D. Herman, J. Josephson, R. Hartung; Use of the DSPL
for the Design of a Mission Planning Assistant
J. Josephson, B. Punch, M. Tanner; PEIRCE: Design Considerations
for a Tool for Abductive Assembly for Best Explanation
Panel Track: Application of AI in Telecommunications
Chair: Shri Goyal, GTE Labs
Participants: Susan Conary, Clarkson University
Richard Gilbert, IBM Watson Research Center
Raymond Hanson, Telenet Communications
Edward Walker, BBN
Richard Wolfe, ATT Bell Labs
3pm - 5pm
Track A: Expert System Implementations
S. Post; Simultaneous Evaluation of Rules to Find Most Likely Solutions
L. Fu; An Implementation of an Expert System that Learns
R. Frail, R. Freedman; OPGEN Revisited
R. Ahad, A. Basu; Explanation in an Expert System
Track B: Expert System Applications II
R. Holt; An Expert System for Finite Element Modeling
A. Courtemanche; A Rule-based System for Sonar Data Analysis
F. Merrem; A Weather Forecasting Expert System
Panel Track: Command and Control Expert Systems
Chair: Andrew Sage, George Mason University
Participants: Peter Bonasso, Mitre
Stephen Andriole, International Information Systems
Paul Lehner, PAR
Leonard Adelman, Government Systems Corporation
Walter Beam, George Mason University
Jude Franklin, PRC
Friday, October 24
9am - 12pm: Expert Systems in the Classified Community
The community building expert systems for
classified applications is unsure of the value and feasibility of some
form of communication within the community. This will be a session
consisting of discussions and working sessions, as appropriate, to
explore these issues in some depth for the first time, and to make
recommendations for future directions for the classified community.
9am - 10:30am
Track A: Military Applications
Bonasso, Benoit, et al.;
An Experiment in Cooperating Expert Systems for Command and Control
J. Baylog; An Intelligent System for Underwater Tracking
J. Neal et al.; An Expert Advisor on Tactical Support Jammer Configuration
Track B: Expert Systems in the Software Lifecycle
D. Rolston; An Expert System for Reducing Software Maintenance Costs
M. Rousseau, M. Kutzik; A Software Acquisition Consultant
R. Hobbs, P. Gorman; Extraction of Data System Requirements
Panel Track: Next Generation Expert System Shells
Chair: Art Murray, George Washington University
Participants: Joseph Fox, Software A&E
Barry Silverman, George Washington University
Chuck Williams, Inference
John Lewis, Martin Marietta Research Labs
11am - 12pm
Track A: Spacecraft Applications
D. Rosenthal; Transformation of Scientific Objectives
into Spacecraft Activities
M. Hamilton et al.; A Spacecraft Control Anomaly Resolution Expert System
Track B: Parallel Architectures
L. Sokol, D. Briscoe; Object-Oriented Simulation on a
Shared Memory Parallel Architecture
J. Gilmer; Parallelism Issues in the CORBAN C2I Representation
Panel Track: Government Funding of Expert Systems
Chair: Commander Allen Sears, DARPA
Participants: Randall Shumaker, and others
Conference Chairman: Kamal Karna
Unclassified Program Chairman: Kamran Parsaye
Classified Program Chairman: Richard Martin
Panels Chairman: Barry Silverman
Tutorials Chairman: Steven Oxman
Registration information can be requested from
Ms. Gerrie Katz
IEEE Computer Society
1730 Massachusetts Ave. N.W.
Washington, D.C. 20036-1903
(202) 371-0101
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Conference - IEEE Systems, Man and Cybernetics
1986 IEEE International Conference on Systems, Man and Cybernetics, AI papers
October 14-17, 1986 Pierremont Plaza Hotel, Atlanta, GA 30308
Wednesday October 15 8AM - 9:40 AM
On Neural-Model Based Cognitive Theory and Engineering: Introduction
N. DeClaris
Matrix and Convolution Models of Brain Organization in Cognition
K. H. Pribram
Explorations in Brain Style Computations
D. E. Rumelhart
Representing and Transforming Recursive Objects in a Neural Network, or "Trees
Do Grow on Boltzmann Machines"
D. S. Touretzky
Competition-Based Connectionist Models of Associative Memory
J. A. Reggia, S. Millheim, A. Freeman
A Parallel Network that Learns to Read Aloud
T. J. Sejnowski
A Theory of Dialogue Structures to Help Manage Human-Computer Interaction
D. L. Sanford, J. W. Roach
A User Interface for a Knowledge-Based Planning and Scheduling System
A. M. Mulvehill
Orthonormal Decompositions in Adaptive Systems
L. H. Sibul
An "Evolving Frame" Approach to Learning with Application to Adaptive
Navigation
R. J. P. de Figueredo, K. H. Wang
Approaches to Machine Learning with Genetic Algorithms
J. Grefenstette, C. B. Pettey
Use of Voice Recognition for Control of a Robotic Welding Workcell
J. K. Watson, D. M. Todd, C. S. Jones
A Knowledge Based System for the CST Diagnosis
C. Hernandez, A. Alonso, Z. Wu
A Qualitative Model of Human Interaction with Complex Dynamic Systems
R. A. Hess
Evaluating Natural Language Interfaces to Expert Systems
R. M. Weischedel
Expert System Metrics
S. Kaisler
Global Issues in Evaluation of Expert Systems
N. E. Lane
A Scenario-Based Test Tool for Examining Expert Systems
E. T. Scambos
10AM - 11:40AM
A Comparison of Some Inductive Learning Methodologies
D. W. Patterson
Induction of Finite Automata by Genetic Algorithms
H. Y. Zhou, J. J. Grefenstette
NUNS: A Machine Intelligence Concept for Learning Object Class Domains
B. V. Dasarathy
Toward a Paradigm for Automated Understanding of 3-D Medical Images
E. M. Stokely, T. L. Faber
Development of an Expert System for Interpreting Medical Images
N. F. Ezquerra, E. V. Garcia, E. G. DePuey, W. L. Robbins
Edge Enhancement in Medical Images by 3D Processing
J. E. Boyd, R. A. Stein
Scheme for Three Dimensional Reconstruction of Surfaces from CT and MRI Images
of the Human Body
R. Tello, R. W. Mann, D. Rowell
Using the Walsh-Hadamard Phase Spectrum to Generate Cardiac Activation Movies-
A Feasibility Study
H. Li
Things We Learned by Making Expert Systems to Give Installation Advice for
UNIX 4.2BSD and to Help Connect a Terminal to a Computer
A. T. Bahill, E. Senn, P. Harris
A Heuristic Search/Information Theory Approach to Near Optimal Diagnostic
Test Sequencing
K. R. Pattipati, J. C. Deckert, M. G. Alexandris
An Expert System for the Estimation of Parameter Values of Water Quality
Model
W. J. Chen
Application of an Expert System to Error Detection and Correction in a
Speech Recognition System
K. H. Loken-Kim, M. Joost, E. Fisher
1PM - 1:50PM
Topic: Holonomic Brain Theory and The Concept of Information
Karl H. Pribram
2-3:40PM
An Interactive Machine Intelligence Development System for Generalized 2-D
Shapes Recognition
B. V. Dasarathy
Modelling of Skilled Behaviour and Learning
T. Bosser
Design of a User Interface for Automated Knowledge Acquisition
A. S. Wolff, B. L. Hutchins, E. L. Cochran, J. R. Allard, P. J. Ludlow
OFMspert: An Operator Function Model Expert System
C. M. Mitchell, K. S. Rubin, T. Govindaraj
An Adaptive Medical Information System
N. DeClaris
Intermediate Level Heuristics for Road-finding Algorithms
S. Vasudevan, R. L. Cannon, J. C. Bezdek, W. C. Cameron
Computer-Disambiguation of Multi-Character Key Text Entry: An Adaptive Design
Approach
S. H. Levine, S. Minneman, C. Getschow, C. Goodenough-Trepaigner
An Interactive and Data-Adaptive Spectrum Analysis System
C. H. Chen, A. H. Costa
On How Two-Action Ergodic Learning Automata can Utilize Apriori Information
B. J. Oommen
VLSI Implementation of an Iterative Image Restoration Algorithm
A. K. Katsaggelos, P. R. Kumar, M. Samanthan
Development of Automated Health Testing and Services System via Fuzzy Reasoning
E. Tazaki, Y. Hayashi, K. Yoshida, A. Koiwa
Knowledge-Based Interaction Tools
R. Neches
Bibliographic Information Retrieval Systems: Using AI Techniques to Improve
Cognitive Compatibility and Performance
P. J. Smith, D. A. Krawczak, S. J. Shute, M. H. Chignell, M. Sater
4PM-5:40PM
An Evidential Approach to Robot Sensory Fusion
J. H. Graham
Retinal Ganglion Cell Processing of Spatial Information in Cats
J. Troy, J. G. Robson, C. Enroth-Cugell
Texture Discriminants from Spatial Frequency Channels
G. A. Wright, M. E. Jernigan
Contextual Filters for Image Processing
G. F. McLean, M. E. Jernigan
Using Cognitive Psychology Techniques for Knowledge Acquisition
A. H. Silva, D. C. Regan
Transfer of Knowledge from Domain Expert to Expert System: Experience
Gained from JAMEX
J. G. Neal, D. J. Funke
Methodological Tools for Knowledge Acquisition
K. L. Kessel
Downloading the Expert: Efficient Knowledge Acquisition for Expert Systems
J. H. Lind
Integration of Phenomenological and Fundamental Knowledge in Diagnostic
Expert Systems
L. M. Fu
Integrating Knowledge Acquisitions Methods
P. Dey, K. D. Reilly
Multi Processing of Logic Programs
G. J. Li, B. W. Wah
A Model for Parallel Processing of Production Systems
D. I. Moldovan
Several Implementations of Prolog, the Microarchitecture Perspective
Y. N. Patt
A Parallel Symbol-Matching Co-processor for Rule Processing Systems
D. F. Newport, G. T. Alley, W. L. Bryan, R. O. Eason, D. W. Bouldin
The Connection Machine Architecture
W. D. Hillis, B. S. Kahle
Thursday, October 16th 8AM-9:40AM
Transformation Invariance Using High Order Correlations in Neural Net
Architectures
T. P. Maxwell, C. L. Giles, Y. C. Lee, H. H. Chen
A Neural Network Digit Recognizer
D. J. Burr
Computational Properties of a Neural Net with a Triangular Lattice Structure
and a Traveling Activity Peak
R. Eckmiller
Fuzzy Multiobjective Mathematical Programming's Application to Cost Benefit
Analysis
L. Xu
Evaluation of the Cause Diagnosis Function of a Prototype Fuzzy-Logic-Based
Knowledge System for Financial Ratio Analysis
F. J. Ganoe, T. H. Whalen, C. D. Tabor
Knowledge Integration in Financial Expert Systems
P. D. Crigler, P. Dey
Pyramid and Quadtree Architectures in Point Pattern Segmentation and Boundary
Extraction
B. G. Mobasseri
Causality in Pattern Recognition
S. Vishnubhatla
Network Biovisitrons for High-Level Pattern Recognition
D. M. Clark, F. Vaziri
Giving Advice as Extemporaneous Elaboration
M. A. Bienkowski
Dynamics of Man-Machine Interaction in a Conversational Advisory System
A. V. Gershman, T. Wolf
10AM -11:40AM
A Method for Medial Line Transformation
E. Salari
An Alternative Implementation Strategy for a Variety of Image Processing
Algorithms
R. Saper, M. E. Jernigan
A Semantic Approach to Image Segmentation
S. Basu
A Skeletonizing Algorithm with Improved Isotropy
D. J. Healy
The Application of Artificial Intelligence to Manufacturing Control
P. J. O'Grady, K. H. Lee, M. Brightman
An Expert System for Design of Flexible Manufacturing Systems
D. E. Brown, G. Anandalingam
A Derivational Approach to Plan Refinement for Advice Giving
R. Turner
The Role of Plan Recognition in Design of an Intelligent User Interface
C. A. Broverman, K. E. Khuff, V. Lesser
Discussant
J. L. Kolodner
Voice Input in Real-Time Decision Making
M. G. Forren, C. M. Mitchell
2:00PM-3:40PM
The Use of Artificial Intelligence in CAI for Science Education
G. S. Owen
Design of an Intelligent Tutoring System (ITS) for Aircraft Recognition
D. R. Polwell, A. E. Andrews
A Rule-Based Bayesian Architecture for Monitoring Learning Process in ICAI
Systems
T. R. Sivasankaran, T. Bui
A Knowledge Based System for Transit Planning
A. Mallick, A. Boularas, F. DiCesare
On the Acquisition and Processing of Uncertain Information in Rule-Based
Decision Support Systems
S. Gaglio, R. Minciardi, P. P. Puliafito
Lambertian Spheres Parameter Estimation from a Single 2-D Image
B. Cernuschi-Frias, D. B. Cooper
A Solution to the Stereo Correspondence Problem using Disparity Smoothness
Constraints
N. H. Kim, A. C. Bovik
Registration of Serial Sectional Images for 3-D Reconstruction
M. Sun, C. C. Li
Rotation-Invariant Contour DP Matching Method for 3D Object Recognition
H. Yamada, M. Hospital, T. Kasvand
CAD Based 3-D Models for Computer Vision
B. Bhanu, C. C. Ho, S. Lee
A Rule-Based System for Forming Sequence Design for Multistage Cold Forging
K. Sevenler, T. Altan, P. S. Raghupathi, R. A. Miller
Automated Forging Design
A. Tang
Geometry Representation to Aid Automated Design of Blocker Forging
K. R. Vemuri
Intelligent Computing ("The Sixth Generation"): A Japanese Initiative
R. E. Chapman
The Influence of the United States and Japan on Knowledge Systems of the Future
B. A. Galler
Knowledge is Structured in Consciousness
T. N. Scott, D. D. Scott
Knowledge Science-Towards the Prosthetic Brain
M. L. Shaw
Socio-Economic Foundations of Knowledge Science
B. R. Gaines
4:00 PM - 5:40PM
Fuzzy and Vector Measurement of Workload
N. Moray, P. Eisen, G. Greco, E. Krushelnycky, L. Money, B. Muir, I. Noy,
F. Shein, B. Turksen, L. Waldon
Toward an Empirically-based Process Model for a Machine Programming Tutor
D. Littman, E. Soloway
An Intelligent Tutor for Thinking about Programming
J. Bonar
An Expert System for Partitioning and Allocating Algorithms
M. M. Jamali, G. A. Julien, S. L. Ahmad
A Knowledge Increasing Model of Image Understanding
G. Tascini, P. Puliti
An Artificial Intelligence Approach for Robot-Vision in Assembly Applications
Environment
K. Ouriachi, M. Bourton
Visible Surface Reconstruction under a Minimax Criterion
C. Chu, A. C. Bovik
A Measurement of Image Concordance Using Replacement Rules
R. Lauzzana
High-Level Vision Using a Rule-Based Language
M. Conlin
An Expert Consultant for Manufacturing Process Selection
A. Kar
A Knowledge Representation Scheme for Processes in an Automated Manufacturing
Environment
S. R. Ray
Making Scheduling Decisions in an F. M. S. Using the State-Operator Framework
in A. I.
S. de, A. Lee
Intelligent Exception Processing for Manufacturing Workstation Control
F. DiCesare, A. Desrochers, G. Goldbergen
Knowledge of Knowledge and the Computer
J. A. Wojciechowski
Paradigm Change in the Sixth Generation Approach
W. H. C. Simmonds
Educational Implications of Knowledge Science
P. Zorkoczy
From Brain Theory to the Sixth Generation Computer
M. A. Arbib
Friday, October 17 8:00 AM - 9:40 AM
Development of an Intelligent Tutoring System
K. Kawamura, J. R. Bourne, C. Kinzer, L. Cozean, N. Myasaka, M. Inui
CALEB: An Intelligent Second Language Tutor
P. Cunningham, T. Iberall, B. Woolf
A Methodology for Development of a Computer-Aided Instruction Program in
Complex, Dynamic Systems
J. L. Fath, C. M. Mitchell, T. Govindaraj
Matching Strategies in Error Diagnosis: A Statistics Tutoring Aid
M. M. Sebrechts, L. J. Schooler, L. LaClaire
Using Prolog for Signal Flow Graph Reduction
C. P. Jobling, P. Grant
A Self-Organizing Soft Clustering Algorithm
M. A. Ismail
A Modified Fisher Criterion for Feature Extraction
A. Atiya
A Model of Human Kanji Character Recognition
K. Yokosawa, M. Umeda, E. Yodogawa
Efficient Recognition of Omni-Font Characters using Models of Human
Pattern Perception
D. A. Kerrick, A. C. Bovik
Printed Character Recognition Using an Artificial Visual System
J. M. Coggins, J. T. Poole
Multiobjective Intelligent Computer Aided Design
E. A. Sykes, C. C. White
Knowledge Engineering for Interactive Tactical Planning: A Tested Approach
with General Purpose Potential
S. J. Andriole
ESP- A Knowledge-Aided Design Tool
J. F. King, E. Hushebeck
A Study of Expert Decision Making in Design Processes
R. M. Cohen, J. H. May, H. E. Pople
An Intelligent Design Aid for Large Scale Systems with Quantity Discount
Pricing
A. R. Spillane, D. E. Brown
10:00 AM - 11:40 AM
NeoETS: Interactive Expertise Transfer for Knowledge-Based Systems
J. H. Boose, J. M. Bradshaw
PCS: A Knowledge-Based Interactive System for Group Problem Solving
M. L. Shaw
Cognitive Models of Human-Computer Interaction in Distributed Systems
B. R. Gaines
The Use of Expert Systems to Reduce Software Specification Errors
S. B. Ahmed, K. Reside
Structure Analysis for Gray Level Pictures on a Mesh Connected Computer
J. El Mesbahi, J. S. Cherkaoui
Pattern Classification on the Cartesian Join System: A General Tool for
Feature Selection
M. Ichino
Texture Discrimination using a Model of the Visual Cortex
M. Clark, A. C. Bovik
Surface Orientation from Texture
J. M. Coggins, A. K. Jain
Classification of Surface Defects on Wood Boards
A. J. Koivo, C. Kim
ADEPT: An Expert System for Finite Element Modeling
R. H. Holt, U. Narayana
KADD: An Environment for Interactive Knowledge Aided Display Design
P. R. Frey, B. J. Widerholt
3:00 PM - 3:40 PM
Assigning Weights and Ranking Information Importance in an Object Identification
Task
D. M. Allen
Third Generation Expert Systems
J. H. Murphy, S. C. Chay, M. M. Downs
Reasoning with Comparative Uncertainty
B. K. Moore
On a Blackboard Architecture for an Object-Oriented Production System
D. Doty, R. Wachter
Pattern Analysis of N-dimensional Digital Images
E. Khalimsky
------------------------------
End of AIList Digest
********************
∂09-Oct-86 2304 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #211
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Oct 86 23:03:38 PDT
Date: Thu 9 Oct 1986 20:36-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #211
To: AIList@SRI-STRIPE
AIList Digest Friday, 10 Oct 1986 Volume 4 : Issue 211
Today's Topics:
Queries - Line-Drawing Recognition & Cognitive Neuroscience,
Schools - Cognitive Science at SUNY,
AI Tools - XILOG & Public-Domain Prolog,
Review - Canadian Artificial Intelligence,
Logic Programming - Prolog Multiprocessors Book,
Learning - Multilayer Connectionist Learning Dissertation
----------------------------------------------------------------------
Date: 9 Oct 86 07:52:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: request for references on drawings
I'd appreciate getting references to any work on automatic
comparison or classification of drawings, especially technical
drawings and blueprints. For instance, a system which, when
presented with a blueprint, can recognize it as a left-handed
widget, etc. Please send replies directly to me - thanks.
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: Mon, 6 Oct 86 13:17:40 edt
From: klahr@nyu-csd2.arpa (Phillip Klahr)
Subject: Cognitive Neuroscience
For my Neuroscience qualifying exam, I am looking for articles,
books, or reviews that discuss the interface/contribution of AI research on
vision and memory to "Cognitive Neuroscience". By Cognitive Neuroscience, I
mean the study of theories and methods by which the different parts of the
brain go about processing information, such as vision and memory. To give you
an idea of "ancient works" I am starting with, I am already looking at:
Wiener's "Cybernetics", von Neumann's "The Computer and the Brain",
Rosenblatt's "Principles of Neurodynamics", Arbib's "Metaphorical Brain", and
Hebb's "The Organization of Behavior".
Some of the neurophysiology work I am looking at already includes work by
Mortimer Mishkin and Larry Squire on memory in the monkey.
Any pertinent references you can think of will be very much appreciated, and,
if there is any interest, I will post a summary of any responses I get.
Thank you very much.
Phillip Klahr Albert Einstein College of Medicine
klahr@NYU-CSD2.ARPA UUCP: {allegra, seismo, ihnp4} !cmcl2!csd2!klahr
------------------------------
Date: Mon, 29 Sep 86 10:46:55 EDT
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: Cognitive Science Schools
In article <8609221503.AA15901@mitre.ARPA> schwamb@MITRE.ARPA writes:
>Well, now that some folks have commented on the best AI schools in
>the country, could we also hear about the best Cognitive Science
>programs? Cog Sci has been providing a lot of fuel for thought to
>the AI community and I'd like to know where one might specialize
>in this.
>
>Thanks, Karl (schwamb@mitre)
The SUNY Buffalo Graduate Group in Cognitive Science was formed to
facilitate cognitive-science research at SUNY Buffalo. Its activities
have focused on language-related issues and knowledge representation.
These two areas are well-represented at SUNY Buffalo by the research
interests of faculty and graduate students in the Group.
The Group draws its membership primarily from the Departments of
Computer Science, Linguistics, Philosophy, Psychology, and Communicative
Disorders, with many faculty from other departments (e.g., Geography,
Education) involved on a more informal basis. A current research project
on deixis in narrative is being undertaken by a research subgroup.
While the Group does not offer any degrees by itself, a Cognitive
Science "focus" in a Ph.D. program in one of the participating
disciplines is available.
There is also a Graduate Group in Vision.
For further details, see AI Magazine, Summer 1986, or contact:
William J. Rapaport
Assistant Professor of Computer Science
Co-Director, Graduate Group in Cognitive Science
Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260
(716) 636-3193, 3180
uucp: ..!{allegra,decvax,watmath,rocksanne}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet
------------------------------
Date: 1 Oct 86 20:02:38 GMT
From: mcvax!unido!ecrcvax!bruno@seismo.css.gov (Bruno Poterie)
Subject: XILOG
Well, I know of at least one Prolog system on PC/AT which is:
- fully C&M compatible,
with the exception of the top-level mode
(consults by default (terms ended with a dot),
executes on request (terms ended with a question mark))
- all defined C&M predicates,
i/o, program manipulation, term scanning & construction,
integer and float arithmetic, ...
plus the following features:
- full window, semi-graphics & color management
- modularity for debugging, program handling, etc.
( but *no* separate dictionaries)
through a hierarchy of "zones"
- on-line precise help
- on-line clause editor
- complete typing mechanism, allowing full object definition,
constraint checking, etc...
- functional mechanism, allowing each call to return any prolog term
as a return value through an accumulator (backtracked/trailed)
(the arithmetic is implemented using this mechanism,
and you may extend it as you want)
- non-backtrackable global cells and arrays
- backtracking arrays, with functional notation and access
- access to MSDOS
- sound system
and some other less (sic) important goodies, like a debugger based
on the Box model, etc...
oh, I forgot:
under development, and rather advanced by now, are:
- an incremental compiler to native code with incremental linking
(with full integration with the interpreter, of course)
- an interface to C programs
- a toolkit for development of applications, with a utilities library
- and maybe a message sending mechanism (but I'm not sure about it)
The name of this system is:
XILOG
and it is made and distributed by (the Research Center of) BULL,
the biggest French computer company.
if interested, contact:
CEDIAG
BULL
68, route de Versailles
F-78430 Louveciennes
FRANCE
or:
Dominique Sciamma
(same address)
don't fear, they do speak english there! :-)
P.S.: I should make clear that I have no commercial interest at all in this
product, but I really think that XILOG is the best Prolog for micros I have
ever met.
================================================================================
Bruno Poterie # ... une vie, c'est bien peu, compare' a un chat ...
ECRC GmbH # tel: (49)89/92699-161
Arabellastrasse 17 # Tx: 5 216 910
D-8000 MUNICH 90 # mcvax!unido!ecrcvax!bruno
West Germany # bruno%ecrcvax.UUCP@Germany.CSNET
================================================================================
------------------------------
Date: 8 Oct 86 20:26:15 GMT
From: ucdavis!ucrmath!hope!fiore@ucbvax.Berkeley.EDU (David Fiore)
Subject: Re: pd prolog
> Xref: ucbvax net.micro:451 net.micro.pc:821 net.ai:91
>
> Does anyone have the public domain prolog package discussed in this month's
> BYTE magazine?
>
> John E. Jacobsen
> University of Wisconsin -- Madison Academic Computing Center
I have a copy of pdprolog here with me. It is the educational version.
I don't know if that is the one described in BYTE as I haven't read that
magazine lately.
||
|| David Fiore, University of California at Riverside.
=============
|| Slow mail : 1326 Wheaton Way
|| Riverside, Ca. 92507
|| E-Mail
|| UseNet : ...!ucdavis!ucrmath!hope!fiore
|| BITNET : consult@ucrvms
Have another day!
"...and at warp eight, we're going nowhere mighty fast"
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Canadian Artificial Intelligence/ September 1986
Summary No. 9
Report on current budget and increase in dues
The Dalhousie Business School got a Sperry Explorer Lisp Machine
and a copy of KEE. They are developing a system to manage foreign
debts and plan an estimator for R&D projects, intelligent computer
aided instruction and auditing.
Xerox Canada has set up an AI support work
Logicware has been acquired by the Nexa Group
British Columbia Advanced Systems Institute will be set up to do
research on AI, robotics, microelectronics.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Two assessments on the Japanese Fifth Generation project:
ICOT is developing AI systems for fishing fleets, train control,
microchip design, and natural language translation. There are 600 researchers
working on fifth generation projects and 600 on robotics.
1986-1988 funding is 102 billion yen and 1982-92 funding is 288 billion.
The English to Japanese system will require post-editing and applies
standard techniques.
The Japanese have abandoned 'Delta'; their parallel inference engine
is 'gathering dust'. They allegedly threw 'hardware engineers' into
a Prolog environment for which they 'had no background or interest'.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Report on Natural Language Understanding Research at University of
Toronto
Reviews of Berthold Klaus Paul Horn's "Robot Vision"
Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986
Conference
------------------------------
Date: Tue, 7 Oct 86 16:08:19 EST
From: munnari!nswitgould.oz!michaelw@seismo.css.gov
Subject: Book - Prolog Multiprocessors
A book is soon to appear, by Michael J. Wise, entitled "Prolog
Multiprocessors". It is being published by Prentice-Hall (Australia).
In a nutshell, the book examines the execution of Prolog on a
multiprocessor.
Starting from a survey of some current multiprocessor
architectures, and a review of what is arguably the most influential
counter-proposal - the "data-flow" model, a model is proposed for
executing Prolog on a multiprocessor. Along with the model goes a
language based on Prolog. The model and the language are called
EPILOG. EPILOG employs both AND and OR parallelism. Results are then
reported for the simulated execution of some Prolog programs rewritten
in the EPILOG language. The book concludes with an extensive survey
of other multiprocessor implementations of Prolog.
The book will be available in Australia from mid November, and in
US/UK/Europe roughly eight weeks later. A list of the Chapter
headings follows. A more detailed list can be obtained from your
local P-H representative, or by e-mailing to me directly.
TABLE OF CONTENTS
Foreword by J. Alan Robinson
Preface
1. Parallel Computation and the Data-Flow Alternative
2. Informal Introduction to Prolog
3. Data-Flow Problems and a Prolog Solution
4. EPILOG Language and Model
5. Architectures for EPILOG
6. Experimenting with EPILOG Architectures - Results and Some
Conclusions
7. Related Work
Appendix 1 Data-Flow Research - the First Generation
Appendix 2 EBNF Specification for EPILOG
Appendix 3 EPILOG Test Programs
Appendix 4 Table of Results
------------------------------
Date: Thu, 9 Oct 86 10:21:18 EDT
From: "Charles W. Anderson" <cwa0%gte-labs.csnet@CSNET-RELAY.ARPA>
Subject: Dissertation - Multilayer Connectionist Learning
The following is the abstract from my Ph.D. dissertation
completed in August, 1986, at the University of Massachusetts, Amherst.
Members of my committee are Andrew Barto, Michael Arbib, Paul Utgoff,
and William Kilmer. I welcome all comments and questions.
Chuck Anderson
GTE Laboratories Inc.
40 Sylvan Road
Waltham, MA 02254
617-466-4157
cwa0@gte-labs
Learning and Problem Solving
with Multilayer Connectionist Systems
The difficulties of learning in multilayered networks of
computational units have limited the use of connectionist systems in
complex domains. This dissertation elucidates the issues of learning in
a network's hidden units, and reviews methods for addressing these
issues that have been developed through the years. Issues of learning
in hidden units are shown to be analogous to learning issues for
multilayer systems employing symbolic representations.
Comparisons of a number of algorithms for learning in hidden
units are made by applying them in a consistent manner to several tasks.
Recently developed algorithms, including Rumelhart, et al.'s, error
back-propagation algorithm and Barto, et al.'s, reinforcement-learning
algorithms, learn the solutions to the tasks much more successfully than
methods of the past. A novel algorithm is examined that combines
aspects of reinforcement learning and a data-directed search for useful
weights, and is shown to outperform reinforcement-learning algorithms.
A connectionist framework for the learning of strategies is
described which combines the error back-propagation algorithm for
learning in hidden units with Sutton's AHC algorithm to learn evaluation
functions and with a reinforcement-learning algorithm to learn search
heuristics. The generality of this hybrid system is demonstrated
through successful applications to a numerical, pole-balancing task and
to the Tower of Hanoi puzzle. Features developed by the hidden units in
solving these tasks are analyzed. Comparisons with other approaches to
each task are made.
------------------------------
End of AIList Digest
********************
∂10-Oct-86 1438 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #212
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 10 Oct 86 14:38:25 PDT
Date: Fri 10 Oct 1986 09:19-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #212
To: AIList@SRI-STRIPE
AIList Digest Friday, 10 Oct 1986 Volume 4 : Issue 212
Today's Topics:
Query - Integer Equations,
Expert Systems - Mathematical Models,
Philosophy - Man's Uniqueness & Scientific Method &
Understanding Horses & Irrelevance of Searle's Logic
----------------------------------------------------------------------
Date: Tue 7 Oct 86 10:36:01-PDT
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Integer Equations
RIT Researchers Find Way to Reduce Transmission Errors,
Communications of the ACM, Vol. 29, No. 7, July 1986, p. 702:
Donald Kreher and Stanislaw Radziszowski at Rochester Institute of Technology
have discovered a new geometry, the third 6-design, non-Euclidean geometry,
that allows solution of difficult problems in designing error-correcting
transmission codes. One problem with 99 integer equations and 132 unknowns
was solved in 12 hours; previous search methods would have required several
million centuries.
Integer (Diophantine) equations are notoriously difficult to solve. Is this
a breakthrough for other problem domains where search is used (e.g., bin
packing, traveling salesman, map coloring, and the "approximately-solved"
algorithms)? Is it a form of linear programming?
-- Ken Laws
------------------------------
Date: 6 Oct 1986 13:08:20 EDT
From: David Smith <DAVSMITH@A.ISI.EDU>
Subject: Expert systems and deep knowledge
Grethe Tangen asked about using mathematical models of gas turbines
as deep knowledge sources for diagnostics. GE in Schenectady, NY,
is working in this area. Bruce Pomeroy is perhaps the best contact,
and he can be reached by mail to SWEET@a.isi.edu, or by phone
at (518)387-6781. Hope this helps.
DMS
------------------------------
Date: 7 Oct 86 15:52:00 GMT
From: mcvax!unido!ztivax!bandekar@seismo.css.gov
Subject: Mathematical Models
I see some difficulties in using mathematical models of technical systems
as a source of deep knowledge. Mathematical models are usually derived
from the structural information about the devices, and one particular mo-
del can represent more that one physical device. But I guess the approach
would not be impossible as long as you can derive your device structure
from your mathematical model. For example transfer function of several devices
may be mathematically expressed in the same way. For multiple input/output
plants the choice of state variables varies for state space representation.
Which variables are affected if a particular physical component is defective
and the causal ordering of the variables could be a valuable piece of know-
ledge for the purpose of diagnosis. Here, if you can map your model into
structural equations you may compute the causal ordering of the state
variables.[Iwasaki,Simon '86]. Hierarchical representation of the
technical systems is always useful. The concept of views[Struss, 86
to be presented at Sydney Univ. during Feb. 1987] is also important.
If you can tell me more about your problem, I may be able to help out.
my address: ... unido!ztivax!bandekar
Vijay Bandekar
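[The causal-ordering idea cited above can be illustrated with a toy sketch.
This is an editorial simplification in the spirit of the Iwasaki & Simon
reference, not their actual algorithm: each structural equation is reduced
to the set of variables it mentions, and we repeatedly peel off a minimal
self-contained subsystem (k equations in exactly k still-undetermined
variables). All names are illustrative.]

```python
# Toy causal ordering over structural equations (simplified sketch).
# An equation is represented only by the set of variables it mentions.
from itertools import combinations

def causal_ordering(equations):
    """Return successive 'levels' of variables, each determined by a
    minimal complete subset of the remaining equations."""
    eqs = [set(e) for e in equations]
    levels, solved = [], set()
    while eqs:
        for k in range(1, len(eqs) + 1):
            # find k equations containing exactly k unsolved variables
            hit = next((c for c in combinations(range(len(eqs)), k)
                        if len(set().union(*(eqs[i] for i in c)) - solved) == k),
                       None)
            if hit is not None:
                level = set().union(*(eqs[i] for i in hit)) - solved
                levels.append(level)
                solved |= level
                eqs = [e for i, e in enumerate(eqs) if i not in hit]
                break
        else:
            break  # remaining subsystem is not self-contained
    return levels

# A defect in the component fixing 'x' propagates first to 'y', then 'z':
print(causal_ordering([{'x'}, {'x', 'y'}, {'y', 'z'}]))
# -> [{'x'}, {'y'}, {'z'}]
```

[For diagnosis, the point is the ordering itself: a fault in the equation
determining an early-level variable can affect all later levels, but not
vice versa.]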
------------------------------
Date: Mon, 6 Oct 86 10:41:45 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: man's godlike
I'm amazed that nobody has responded to Peter Pirron's last argument:
> The belief, that man's cognitive or intelligent abilities will
> never be reached by a machine, is founded in the conscious or
> unconscious assumption of man's godlike or godmade uniqueness,
> which is supported by the religious tradition of our culture. It
> needs a lot of self-reflection, courage and consciousness about
> one's own existential fears to overcome the need of being unique.
> I would claim, that the conviction mentioned above however
> philosophical or sophisticated it may be justified, is only the
> "RATIONALIZATION" (in the psychoanalytic meaning of the word) of
> understandable but irrational and normally unconscious existential
> fears and need of human being.
Even net.ai, which is still a chaos of wild theories, has gone beyond
regarding the a.i. question as a matter of science versus religion.
Some arguments against Pirron's conjecture:
-- If the objection to a.i. is rooted in cultural dogma, it's illogical
to look at the psychology of the individual. Every individual is, now
and always, unique--though some of us may feel that we are too much
like others. This is quite another question than whether our species
is unique.
-- Other animals, and even plants, have intelligence--not to mention
viruses! Many of us regard even a dog's intelligence as beyond the
capabilities of a.i., at least in the way that scientists presently
think about a.i.
-- Even an electric-eye door can be regarded as a successful implementation
of artificial intelligence. We skeptics' greatest doubts tend to focus
on theories of emergent intelligence--theories as attractive to some
modern researchers as the Philosopher's Stone was to medieval researchers,
and (some say) with just as little basis in the nature of things.
-- To divide intelligent beings into men and machines is not necessarily
precise or exhaustive. For example, ghosts may be intelligent
without belonging to either category.
-- A secular equivalent of "godlike uniqueness" is that man is special:
that we mean more to ourselves than does anything else, living or lifeless.
Only a scientist would argue with this. 8 |-I
------------------------------
Date: 9 Oct 86 05:04:59 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan Harnad)
Subject: Re: Turing test - the robot version
>>> instead of a computer trying to fool you in ASCII,
>>> it's a robot trying to fool you in the flesh...
>>> Remember, scientists aren't just trying to make things better for you.
>>> They're also trying to fool you!
The purpose of scientific inquiry is not just to better the human
condition. It is also to understand nature, including human nature.
Nothing can do this more directly than trying to model the mind. But
how can you tell whether your model is veridical? One way is to test
whether its performance is identical with human performance. That's no
guarantee that it's veridical, but there's no guarantee with our
models of physical nature either. These too are underdetermined by
data, as I argue in the papers in question. And besides, the robot
version of the turing test is already the one we use every day, in our
informal solutions to the other-minds problem.
Finally, there's a world of difference, as likewise argued in the
papers, between being able to "fool" someone in symbols and being able
to do it in the flesh-and-blood world of objects and causality. And
before we wax too sceptical about such successes, let's first try to
achieve them.
Stevan Harnad
princeton!mind!harnad
------------------------------
Date: 10 Oct 1986 06:39 EDT (Fri)
From: Wayne McGuire <Wayne%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Understanding Horses
Date: Mon 29 Sep 86 09:55:11-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: Searle's logic
Look, I also don't think there's any real difference between a human's
knowledge of a horse and [a] machine's manipulation of the symbol it is
using to represent it.
At one end of the human knowledge spectrum we have that knowledge of a
horse which is aware that two horses + two horses = four horses; at the
other end is that sort of rich and unfathomably complex knowledge which is
expressed in a play like Peter Shaffer's _Equus_, and which fuses, under
the force of sympathetic imagination, conceptual, emotional, biological,
and sensorimotor modes of cognition. I suppose that our most advanced
expert systems at the elementary end of the cognitive spectrum can capture
knowledge about the structural and functional features of a horse, but it
is not clear that any knowledge representation scheme will EVER simulate
what is most interesting about human cognition and which relies on
unconscious and intuitive resources. In one dimension of cognition the
world is a machine, an engineering diagram, which is readily accessible by
bit twiddling models; in another, that of, say, Shakespeare, it is a living
organism, whose parts are infinitely interconnected and partially decrypted
only by the power of the imagination. And so I would argue, with regard to
human and machine cognition of horses or anything else, that there is a
major difference in any dimension of knowledge that counts, and that
repairing automobiles or space stations, and writing or understanding poems
(or understanding the world in the broadest sense), have nearly nothing in
common.
Wayne McGuire
(wayne@oz.ai.mit.edu)
------------------------------
Date: Fri, 10 Oct 86 11:57:31 edt
From: Mike Tanner <tanner@ohio-state.ARPA>
Reply-to: tanner@osu-eddie.UUCP (Mike Tanner)
Subject: Re: Searle's logic
Pat Hayes made some cogent remarks about Searle's problems with AI
being much deeper than the discussion here would indicate. But I
wonder whether the argument is worth the effort.
I have a lot of work to do and only so much time. I can work just
fine on problems of intelligence without worrying about Searle's (or
Dreyfus's) complaints. Just as the working physicist can work all day
without once being bothered by the question of whether quarks *really*
exist, so the working AIer can make progress on his problems without
being bothered by Searle.
-- mike
tanner@ohio-state.arpa
------------------------------
End of AIList Digest
********************
∂14-Oct-86 0015 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #213
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 14 Oct 86 00:00:40 PDT
Date: Mon 13 Oct 1986 21:49-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #213
To: AIList@SRI-STRIPE
AIList Digest Tuesday, 14 Oct 1986 Volume 4 : Issue 213
Today's Topics:
Query - Public Domain Software for Expert Systems,
Expert Systems - Getting Started,
AI Tools - Garbage Collection
----------------------------------------------------------------------
Date: 11 Oct 86 22:06:27 GMT
From: ulysses!mhuxr!mhuxt!houxm!hou2d!meh@ucbvax.Berkeley.EDU (P.MEHROTRA)
Subject: Public Domain Software for Expert Systems
I am looking for public domain software
for building expert systems. I work in a Unix environment and
have Franz LISP on my system. I already have OPS5. I am especially
interested in tools which use frames and/or semantic networks
for knowledge representation.
Any software or any information where I can get this software
will be greatly appreciated.
Prem K Mehrotra
hou2d!meh speedy!prem
201-615-4535
------------------------------
Date: 12 Oct 86 19:18:22 GMT
From: well!jjacobs@LLL-LCC.ARPA (Jeffrey Jacobs)
Subject: Getting started in Expert Systems
> lem@galbp.UUCP
> Lisa Meyer has requested information on expert systems, PD and PC related
> tools.
Lisa,
I suggest that you start with Waterman's "A Guide to Expert Systems", as
well as looking in your University book store and local commercial
book stores and computer stores. This will give you a working bibliography
to pursue.
Most tools listed in the Waterman book are public domain and can often
be obtained from the respective institution for a nominal price (usually of a
tape).
Periodicals include IEEE Expert (quarterly), AI Expert (monthly), SIGART
(ACM Sig on AI), and AI Magazine (AAAI, quarterly). Also, see the July 86
issue of Computer (IEEE Computer Society).
There are a number of PC tools and languages available. Best place to look
is Byte magazine and various PC magazines. There are a number of LISPs,
PROLOGs and a good Smalltalk available. TI has SCHEME and 2 levels
of Personal Consultant. Insight-2+ has also received good reviews.
For Public Domain PC software, I suggest the CompuServe Information
Service (CIS). There is a Forum sponsored by AI Expert magazine which
has a great deal of PD tools. It's also a great place for getting information
oriented towards PC's.
Also, the following BBS'es:
Boston, Mass. (Common Lisp Group) (617) 492-2399
Woodbury, Conn. (203) 263-5783
Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs
USENET: well!jjacobs
------------------------------
Date: 6 Oct 86 02:38:00 GMT
From: osiris!chandra@uxc.cso.uiuc.edu
Subject: Re: Expert System Wanted
There is no General Purpose expert system in the world.
If you find one, you will probably get the Turing Award.
I will be very happy to receive more information about
General Purpose Expert Systems. A breakthrough I am looking
forward to.
Please excuse my ignorance about this new technology.
Thanks.
Navin Chandra
MIT
------------------------------
Date: 9 Oct 86 12:21:00 GMT
From: osiris!chandra@uxc.cso.uiuc.edu
Subject: Re: Expert System Wanted
Hi,
FOUND!
There is an expert system shell for CMS. It is called PRISM.
PRISM is also called ESE (Expert System Environment).
It has production rule based programming and an interesting control
structure based on Focus Control Blocks (with inheritance)
ESE is available from IBM itself. It is written in lisp and was most
probably developed at IBM Watson Research Labs.
Navin Chandra
MIT
------------------------------
Date: Mon, 6 Oct 86 10:02:17 cdt
From: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece)
Subject: Xerox vs Symbolics -- Reference coun
> From: Dan Hoey <hoey@nrl-aic.ARPA>
> Let me first deplore the abuse of language by which it is claimed that
> Xerox has a garbage collector at all. In the language of computer
> science, Xerox reclaims storage using a ``reference counter''
> technique, rather than a ``garbage collector.'' This terminology
> appears in Knuth's 1973 *Art of Computer Programming* and originated in
> papers published in 1960. I remain undecided as to whether Xerox's
> misuse of the term stems from an attempt at conciseness, ignorance of
> standard terminology, or a conscious act of deceit.
----------
Hoey's pedantic insistence on a precision which does not exist in
the "standard terminology" is apparently also an incorrect
characterization of the Xerox approach, which [from the
descriptions I have read] combines some aspects of the "pure"
reference counting approach described by Knuth and some
aspects of "pure" garbage collection.
The Deutsch paper [CACM, 9/76] explicitly separates the two kinds of
storage reclamation techniques and then proposes a combined method
with features of both.
In fact, however, the distinction on which Hoey places
so much importance seems to have mostly vanished from the
literature in the years since Knuth's description (why he places
it in 1973 I don't know, my copy dates to 1968). Many more
recent sources consider reference counting simply
one form of garbage identification. The survey by Cohen
(Computing Surveys, 9/81), for instance, discusses reference counting
and marking as just two alternative ways of identifying garbage.
Gabriel (Performance and Evaluation of Lisp Systems) says of the
Xerox scheme, "Garbage collection is patterned after that
described by [Deutsch, 1976]. A reference count is maintained..."
Moon ("Garbage Collection in a Large Lisp System") discusses
reference counting alternatives under the name garbage collection.
Reference counting seems to have been accepted as a method of
performing one sub-task of garbage collection; Hoey's nit-picking
is neither productive nor, since the Xerox approach is not pure
reference counting, accurate.
--
scott preece
gould/csd - urbana
uucp: ihnp4!uiucdcs!ccvaxa!preece
arpa: preece@gswd-vms
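[One technical point the two camps in this thread do not dispute — that a
pure reference counter, whatever one calls it, can never reclaim circular
structure — can be shown with a toy sketch. The code below is illustrative
only; it is not modeled on any real Lisp system's collector.]

```python
# Minimal reference-counting reclaimer (illustrative names throughout).
class Cell:
    def __init__(self):
        self.refcount = 0
        self.children = []

def add_ref(cell):
    cell.refcount += 1

def drop_ref(cell, heap):
    cell.refcount -= 1
    if cell.refcount == 0:
        heap.discard(cell)              # storage reclaimed
        for child in cell.children:
            drop_ref(child, heap)

heap = set()

# Acyclic case: root -> c -> d.  Dropping the root's reference
# reclaims both cells, as reference counting should.
c, d = Cell(), Cell()
heap.update([c, d])
c.children.append(d); add_ref(d)
add_ref(c)                              # root's reference
drop_ref(c, heap)
assert c not in heap and d not in heap

# Cyclic case: root -> a <-> b.  After the root lets go, each cell
# still holds a reference to the other, so neither count reaches zero
# and the cycle is never reclaimed.  A marking garbage collector,
# tracing from the roots, would reclaim both cells here.
a, b = Cell(), Cell()
heap.update([a, b])
a.children.append(b); add_ref(b)
b.children.append(a); add_ref(a)
add_ref(a)                              # root's reference
drop_ref(a, heap)
assert a in heap and b in heap          # the leak
```

[Hybrid schemes like the Deutsch design cited above pair such a counter
with an occasional trace precisely to pick up cycles the counts miss.]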
------------------------------
Date: Mon, 6 Oct 86 10:05 EDT
From: Scott Garren <garren@STONY-BROOK.SCRC.Symbolics.COM>
Subject: Garbage Collection
Relative to discussions of garbage collectors I would like to point out
that there are issues of scale involved. Many techniques that work
admirably on an address space limited to 8 Mbytes (Xerox hardware)
do not scale at all well to systems that support up to 1 Gbytes
(Symbolics).
Non-disclaimer: I am an employee of Symbolics and am of course
emotionally and financially involved in this issue.
------------------------------
Date: Mon, 6 Oct 86 15:39:20 EDT
From: ambar@EDDIE.MIT.EDU (Jean Marie Diaz)
Reply-to: ambar@mit-eddie.UUCP (Jean Marie Diaz)
Subject: Re: Xerox vs Symbolics -- Reference counts vs Garbage collection
In article <8609262352.AA10266@ai.wisc.edu> neves@ai.wisc.edu (David
M. Neves) writes:
>Do current Symbolics users use the garbage collector?
At MIT, yes. I recall that at Rutgers this summer I was forever
doing a (gc-on), because some user there kept turning it off....
--
AMBAR
"Timid entrant into the Rich Rosen School of Computer Learning...."
------------------------------
Date: Tue, 7 Oct 86 12:39:28 edt
From: "Timothy J. Horton" <tjhorton%ai.toronto.edu@CSNET-RELAY.ARPA>
Subject: Re: Xerox vs Symbolics -- Reference counts vs Garbage collection
> When I was using MIT Lisp Machines (soon to become Symbolics) years
> ago nobody used the garbage collector because it slowed down the
> machine and was somewhat buggy. Instead people operated for hours/days
> until they ran out of space and then rebooted the machine. The only
> time I turned on the garbage collector was to compute 10000 factorial.
> Do current Symbolics users use the garbage collector?
>
> "However, it is apparent that reference counters will never
> reclaim circular list structure."
>
> This is a common complaint about reference counters. However, I don't
> believe there are very many circular data structures in real Lisp code.
> Has anyone looked into this? Has any Xerox user run out of space
> because of circular data structures in their environment?
>
> --
> David Neves, Computer Sciences Department, University of Wisconsin-Madison
> Usenet: {allegra,heurikon,ihnp4,seismo}!uwvax!neves
> Arpanet: neves@rsch.wisc.edu
In the Xerox environment at least, the extensive use of windows is one of
the most common sources of problems. It is often the case that a window
must be 'related' somehow to another window, i.e. you create a subsidiary
window for some main window (as a scroll window is to a display window),
and the two windows must 'know' about each other. The obvious thing is to
put pointers on each window's property list to the other window, "et
voila" a circular list. Everything on the property lists of the two
windows also gets kept around, and since Xerox windows are such good
places to store things, the circular structure is often very large.
(Check out the stuff on a 'Sketch' window's property list.)
A careful programmer can avoid such problems. In the case of windows, one
just has to be careful about how windows find out about one another (some
kind of global variable scheme or a directed search of all windows).
Yet accidents happen and windows can kill the environment fairly quickly.
Yes, I have lost an environment to just this problem (that's why I know),
and it's very hard to tell what happened after the fact.
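[The window cycle described above is easy to reproduce. As a purely illustrative sketch -- in Python, whose runtime happens to combine reference counting with a tracing cycle collector, much like the hybrid schemes discussed earlier -- the `Window` class and property names below are hypothetical:]

```python
import gc

# Two "windows" whose property lists point at each other -- the
# circular structure described above.  Once the last outside pointer
# is dropped, reference counts alone never reach zero; only a
# tracing collector can reclaim the cycle.
class Window:
    def __init__(self, name):
        self.name = name
        self.props = {}       # stand-in for a property list

gc.collect()                  # start from a clean slate
main = Window("display")
scroll = Window("scroll")
main.props["scroll-window"] = scroll
scroll.props["main-window"] = main   # et voila: a circular structure

del main, scroll              # no outside references remain...
unreachable = gc.collect()    # ...but only the tracing pass finds the cycle
print(unreachable >= 2)       # both Window objects were unreclaimed garbage
```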
------------------------------
Date: Fri 10 Oct 86 10:57:53-PDT
From: Keith Price <PRICE%GANELON@usc-oberon.ARPA>
Subject: Garbage collection
The experience in our lab is that there is no garbage collection
other than the Ephemeral GC; the "traditional" GC is never needed or
executed and programs are faster with the Ephemeral GC on than with it
off. I can't say whether it is "better" than Xerox or the new LMI GC,
but it is clear that "traditional" GC is a thing of the past for most
Lisp workstations already, and comparisons to such old methods do not
contribute to the knowledge pool.
K. Price.
price%ganelon@usc-ecl
------------------------------
End of AIList Digest
********************
∂14-Oct-86 1234 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #214
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 14 Oct 86 12:34:17 PDT
Date: Tue 14 Oct 1986 09:44-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #214
To: AIList@SRI-STRIPE
AIList Digest Tuesday, 14 Oct 1986 Volume 4 : Issue 214
Today's Topics:
Query - Connectionist References,
Applications - Animal Rule-Systems Simulations,
Expert Systems - Coupling Numeric and Symbolic Computing,
Survey - Interactive 2-D Math Editing Interfaces
----------------------------------------------------------------------
Date: Tue, 14 Oct 86 14:48:20 nzs
From: ubc-vision!calgary!vuw90x!paul@seismo.CSS.GOV (Paul Fitchett)
Reply-to: ubc-vision!vuwcomp!paul@seismo.CSS.GOV (Paul Fitchett)
Subject: Connectionist References Request
Recent items in mod.ai have piqued my interest about "connectionist"
ideas in AI. I wonder if anyone could provide a number of references
that are a good introduction to the ideas in this area.
The little I've read makes them seem like perceptrons -- I hope not.
If replying by email please use one of the paths below.
Thanks,
Paul
Paul Fitchett
uucp : ...!{ubc-vision, alberta}!calgary!vuwcomp!paul
ACSnet : paul@vuwcomp.nz
subethernet : ...local-group!milky-way!sol!terra!nz!vuwcomp!paul :-)
------------------------------
Date: 13 Oct 86 18:14:19 GMT
From: ucdavis!deneb!g451252772ea@ucbvax.Berkeley.EDU (g451252772ea)
Subject: animal rule-systems simulations
By way of introduction to the following Mail message, 'bc' posted last
spring a query for anyone with references on 'simulating animal behavior
using rule-driven systems'. I discovered his message in an old listing, and find
the topic of interest also. In case 'bc' (William Coderre) is no longer
at mit-amt.MIT.EDU, I'm posting to the net also. Thanks for your tolerance...
Hi, bc (?bc?):
I'm curious about any replies you got to your query last April for
rule-driven simulations of animal behaviors. I have somewhat
similar interests, reflecting my grad work in ethology here at Davis, and my
undergrad work at U.C. Santa Cruz in Information and Computer Science. We have
here a person doing stuff you'd enjoy: Marc Mangel, with his 'dynamic
stochastic optimization' analysis of everything from insect oviposition
choices to foraging theory to fisheries harvesting. His insight seems to be
the addition of a 'state variable' - usually characterized as energy
reserves, gut contents or similar - to revamp the static optimization models
of Houston, McNamara, Krebs, et al. (the 'Oxford' crowd). Mangel is chair
of the math dept. here, and co-authors with Colin Clark of the U. British
Columbia. Clark is visiting here this quarter and giving an applied math
seminar, with lots of application studies. Both guys emphasize computer
programs, and the programs have a game-like air to them. If you'd like more
info, I can send some typed notes by Mangel describing the analysis, and one
of his most counter-intuitive applications.
Mostly I'm working with evolutionary studies: the predator/prey
interactions of snakes and ground squirrels (my thesis is on stupidity: the
dumbness of Arctic ground squirrels, which don't even appear to <recognize>
snakes of any kind, much less handle them correctly). I do have to give a
week's worth of lectures to my animal-behavior group next month, explaining
'artificial intelligence' ab initio to them. Despite Mangel and Clark, the
prejudice against math/systems here is substantial. Any ideas you have for
good material/examples, in the vein of Winograd's new book or Rosen's
discussion of ANTICIPATORY SYSTEMS (much watered down!) or ANYTHING else,
would be most welcome!
Thanks --Ron.
------------------------------
Date: Fri 10 Oct 86 15:00:40-EDT
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Expert Systems & Math Models
Check out the book:
Coupling Symbolic and Numerical Computing in Expert Systems
Edited by J.S. Kowalik and is based on the workshop by this name held
at Bellevue, Washington 27-29 August, 1985.
Elsevier, 1986
------------------------------
Date: 0 0 00:00:00 PDT
From: "LLLASD::GARBARINI" <garbarini%lllasd.DECNET@lll-crg.arpa>
Reply-to: "LLLASD::GARBARINI" <garbarini@lllasd.decnet>
Subject: RE: Availability of interactive 2-d math editing
interfaces...
The following is a summary of responses to my query on 2-d math editing
interfaces.
I'd like to thank everyone who responded. I hope to eventually respond
to each of you individually.
----------
Joe P. Garbarini Jr.
Lawrence Livermore National Lab
P. O. Box 808 , L-308
7000 East Avenue
Livermore Ca. , 94550
arpanet address: GARBARINI%LLLASD.DECNET@LLL-ICDC.ARPA
----------
The original query:
-----
I am working with a number of other people on a project called Automatic
Programming for Physics. The goal is to build an AI based automatic
programming system to aid scientists in the building of numerical
simulations of physical systems.
In the user interface to the system we would like to have interactive
editing of mathematical expressions in two-dimensional form.
It seems a number of people have recently made much progress in this
area. (See C. Smith and N. Soiffer, "MathScribe: A User Interface for
Computer Algebra Systems," Conference Proceedings of Symsac 86, (July,
1986) and B. Leong, "Iris: Design of a User Interface Program for
Symbolic Algebra," Proc. 1986 ACM-SIGSAM Symposium on Symbolic and
Algebraic Manipulation, July 1986.)
Not wishing to reinvent the wheel, I'd appreciate receiving information
regarding the availability of any such interface.
=======================================================================
From: Joe Garbarini (Yes, this is from me!)
MathSoft makes a product called MathCAD which has an interactive 2-d math
interface. Currently runs on IBM PCs.
Mathsoft, Inc.
One Kendal Square, Bldg. 100
Cambridge, MA 02139
800-628-4223
-----
From: James E. O'Dell <jim@ACG.arpa>
Normal MACSYMA has 2-d editing done by Carl Hoffman and Rich Zippel.
I think references to it can be found in one or the other of the
Proceedings of the MACSYMA Users Group.
Jim
-----
From: fateman@dali.Berkeley.EDU (Richard Fateman)
There is some stuff on Sun-2 equipment working with macsyma
here at UC Berkeley. The MathScribe stuff is currently nicer
looking in my opinion, but people are still working on stuff
here.
<and from elsewhere this:>
-----
A version of Macsyma for the VAX computer, including sources and binaries
for Macsyma and the underlying Lisp (Franz Lisp opus 38.91), is in the
National Energy Software Center library. (Argonne, IL.)
(312) 972-7172. This should run without change on 4.3BSD UNIX or ULTRIX.
This version, which was developed at the University of California, has
also been run, with modifications, on various other (non-VAX) systems
which support the Franz Lisp dialect. For information on Franz Lisp
for VAX/VMS or other computers, you might wish to contact your hardware
vendor or Franz Inc. in Alameda CA, (415) 769-5656. (for many mainframe
and workstation computers)
Vaxima uses about 4.5 megabytes of address space to start up, and
as configured, can grow to 6.5 megabytes or so. By changing a compile-time
parameter in the Lisp system, the system may be configured to grow much
larger. (We have run a 53 megabyte system on a VAX 8600).
At UC Berkeley we have been using this code on Sun-2 and Sun-3 systems,
and microVAX-II's.
Vaxima is quite fast when given enough physical memory, and appears at this time
to be very cost-effective compared to implementations on special-purpose
Lisp machines or "DOE-MACSYMA" for VAX/VMS.
There are a number of packages that have been developed to work with this
program (e.g. user interfaces, better algorithms for factoring, graphics,
an interface to the Numerical Algorithms Group (NAG) library).
Neither UC Berkeley nor NESC provides support for vaxima.
Richard J. Fateman, U.C. Berkeley
-----
From: arnon.pa@xerox.com
For some years Xerox has offered software for interactive
two-dimensional editing of mathematical expressions as part of its Star,
and now Viewpoint, systems. Regrettably Viewpoint runs only on Xerox
workstations.
As part of the research programming environment at Xerox PARC, we have a
more powerful math editing and display package which is roughly the
equivalent of MathScribe. A noteworthy property of our package is that
a math expression can be moved interchangeably between the editor, a
technical document, and a system for symbolic mathematical computation.
I'd be happy to discuss or demo.
Dennis Arnon
Computer Science Laboratory
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto CA 94304
(415) 494-4425
-----
From: seismo!Xerox.COM!Kelley.pa
While not oriented specifically to physical systems, STELLA for the
Macintosh from High-Performance Systems, Inc., 13 Dartmouth College
Highway, Lyme, New Hampshire 03768. has a very nice user interface for
help in building mathematical models. It is oriented toward Forrester's
Systems Dynamics simulation paradigm. The user interface is worth
examining.
-- kirk
-----
From: mcgeer%sirius.Berkeley.EDU@BERKELEY.EDU (Rick McGeer)
soiffer@dali.berkeley.edu
carolyn@dali.berkeley.edu
are the addresses of Neil Soiffer and Carolyn Smith, respectively. Benton
Leong is at
allegra!watmath!blleong
<Rick, thanks for the info for the rest of the project ---JPG>
-- Rick
-----
From: Doug A. Young <dayoung%hplabsc@hplabs.HP.COM>
Tony Hearn forwarded a message from you a while back to me, asking
about an interface for algebra systems. I did my masters thesis on
a graphical multi-window system for Macsyma. If you are interested,
I could forward some information on it to you. Contact me at
dayoung@hplabs
Doug Young
-----
From: Bill Schelter <ATP.SCHELTER@R20.UTEXAS.EDU>
I have an Emacs-like editor which allows the display
of mathematics as well as textual material.
You can mouse into the superscript position etc.
The system is called INFOR, and is available for Lisp
machines (Symbolics and TI).
I also ported the version of macsyma from doe, to run
on those machines. It would be fairly easy to
connect the two, since they are running in the same memory
space.
The display is very good, at least as good as TeX.
You can actually create a dvi file directly from the editor.
This can then be printed to obtain very high quality output.
Bill Schelter
-------
From: mcvax!fransh@seismo.CSS.GOV (Frans Heeman)
Some while ago, you put a query on the news about the availability
of interactive 2-d math editing interfaces. We are working on
a formula-editor (NOT a formula-manipulator). The idea is as
follows:
For example, the user gives the command for a fraction
to be entered. On the screen a small horizontal bar
is displayed, and the cursor is positioned above
this bar. The user types in the numerator.
While typing, the fraction-bar remains as long as the
numerator. Next the user gives the END-command, to indicate
the end of the numerator. Now the cursor is centered under
the fraction-bar, and the user types in the denominator. While
typing, the numerator and denominator remain centered with
respect to the fraction-bar, and the fraction-bar remains as
long as the longer of the numerator and denominator.
By means of keyboard and mouse (menu's) the user can enter a
mathematical formula. While typing, the formula is at every
moment displayed on the screen in its current 2-dimensional
form. The system can handle mathematical constructs as fraction,
root, integral, matrices, etc. The system also handles greek and
italic characters. The constructs the system can handle are specified
in an external grammar, so it is relatively easy to add or change
constructs. The way the formula is displayed on the screen is
also specified in this grammar. It is possible to get a
hard-copy of the formula: this is done by generating 'eqn'-code
for the formula, and then get a typeset result by using
'eqn' and 'troff' (part of the UNIX operating system). Finally, a
formula can be saved, retrieved and edited.
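[The centering rule described above -- the bar as long as the longer of numerator and denominator, with both centered against it -- can be sketched in a few lines. This Python fragment is illustrative only and has no connection to the CWI implementation:]

```python
def render_fraction(numerator, denominator):
    # The fraction bar is as long as the longer of the two parts,
    # and both parts are centered with respect to the bar.
    width = max(len(numerator), len(denominator))
    bar = "-" * width
    return "\n".join(line.center(width).rstrip()
                     for line in (numerator, bar, denominator))

print(render_fraction("a + b", "c"))
# Displays:
# a + b
# -----
#   c
```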
Our system is still under development and not yet available
for 'real' use. The final goal is to make a document-editor for
text, tables, mathematical formulae and (simple) pictures.
In France, Vincent Quint et al. are doing work along these
lines too with a system previously called 'Edimath', currently
called 'Grif'.
References:
We have only published an internal report (in Dutch), and are
currently preparing an English article to be published.
Edimath:
V. Quint (March 1983),
"An Interactive System for Mathematical Text
Processing",
Technology and Science of Informatics, vol. 2, nr. 3,
pp. 169-179.
Grif:
V. Quint, I. Vatton (April 1986),
"Grif: An Interactive System for Structured Document
Manipulation",
Proceedings of the Conference on Text Processing and
Document Manipulation,
Nottingham, England, 1986.
Frans C. Heeman
Centre for Mathematics and Computer Science (CWI)
P. O. Box 4079
1009 AB Amsterdam
The Netherlands
fransh@mcvax.UUCP
------------------------------
End of AIList Digest
********************
∂14-Oct-86 1615 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #215
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 14 Oct 86 16:14:30 PDT
Date: Tue 14 Oct 1986 09:51-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #215
To: AIList@SRI-STRIPE
AIList Digest Tuesday, 14 Oct 1986 Volume 4 : Issue 215
Today's Topics:
Binding - David Etherington,
Review - Spang Robinson Report, October 1986,
Survey - Intelligent Tutoring Systems
----------------------------------------------------------------------
Date: Tue 7 Oct 1986 16:24:42
From: ether.allegra%btl.csnet@CSNET-RELAY.ARPA
Subject: Binding
David Etherington, from the University of British Columbia
to AT&T Bell Laboratories, AI Principles Research Department.
Addresses:
ether%allegra@btl.csnet
and
David W. Etherington,
AT&T Bell Laboratories,
600 Mountain Avenue,
Murray Hill, NJ, 07974-2070
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Spang Robinson Report, October 1986
Summary of Volume 2 No 10
Discussion of S.1, ART, and KEE, covering user interface, performance, and features.
A common problem was done in all three applications. The person who did
the evaluation started the exercise with the impression that one was
best off starting in scratch. This was changed after the evaluation was
completed.
Also includes a two page table giving features, operating system and
cost for various expert system building tools including micro-based tools
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Japan Watch
MITI has budgeted the following for AI type products
$400,000 for diagnosis support systems
$2.4 million for robotics
$1.1 million for language translation systems
$234,000 for factory automation R&D
The Patent Office has budgeted $162,330 for machine translation
The National Police Agency will be putting $150,000 on automated voice
recognition
$84,400 for recognition systems
$84,400 for graphology systems
The National Agency of Science and Technology budgeted $1.9 million for a
machine translation system
The Ministry of Agriculture and Forestry is doing research on expert systems
in agricultural production control
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Short Notes
Cognitive Systems quarterly revenues were $585,000 with a loss of $31,000
Frey Associates which sells THEMIS, a natural language product, made
$102,820 on revenues of $1,408,529 in the last quarter
Intellicorp expects a substantial loss this quarter.
Dr. Robert Moore, previously of Lisp Machine Co, is now president of
GENSYM, which anticipates doing work in real time applications of AI.
Eloquent Systems Corporation has produced an expert system for
hotels, motels, etc. to optimize occupancy and profits. It runs on
an Explorer with a special card for multiprocessing.
Sperry has developed expert systems for configuring shipboard software
systems, monitoring ship and aircraft location reports (a Tactical
Information System), testing PC boards, correlating contact reports for
the Navy, and diagnosing system software.
Sanders Associates is undertaking a Defense Department Study to
set standards for developing AI systems. DOD expects to release a set
of AI software development standards within three to five years.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Reviews of
Applications of Artificial Intelligence III, proceedings of SPIE's
conference in Orlando
Expert System 85, Fifth Technical Conference of the British Computer Society's
Specialist Group on Expert Systems
Artificial Intelligence and Statistics, which includes most of the Articles
of the Workshop on Artificial Intelligence and Statistics, April 1985
Artificial Intelligence with Statistical Pattern Recognition
Lisp Lore: A Guide to Programming the LISP Machine
------------------------------
Date: Sun, 28 Sep 86 00:49:35 BST
From: YAZDANI%UK.AC.EXETER.PC@AC.UK
Subject: A survey of prototype and working ITSs.
[Forwarded from the AI-Ed digest by Laws@SRI-STRIPE.]
Here I present a survey of Intelligent Tutoring Systems which,
although not exhaustive, is intended to be a source of reference for
further development. I would like to know of other systems which I should
add, preferably as entries in my proposed format. However, if you can't
spend the time to do this, just send me any references you may have and I
shall try to extract the information myself. Also, if you would like to
suggest changes to the format to make it more useful, please do so. I
would like to send the final version somewhere (like AI Magazine) for
publication, and shall acknowledge any help I get.
You can't use REPLY to get to me so you need to SEND me Email to
YAZDANI%UK.AC.EXETER.PC@UCL-CS.arpa
or post to
Dept. of Computer Sceince
University of Exeter
Prince of Wales Road
EXETER EX4 4PT
ENGLAND
Thanks
←←←←←←
ACE
Subject: Nuclear Magnetic Spectroscopy
Aim: Monitor Deductive Reasoning
Features: Problem solving monitor, accepts natural language input
System: MODULAR ONE
Reference:
Sleeman, D.H., and Hendley, R. J. (1982)
ACE: a system which analyses complex explanations
in Sleeman and Brown (eds.)
BUGGY & DEBUGGY
Subject: Arithmetic
Aim: Diagnose bugs from behaviour
Features: Procedural representation of misconceptions (bugs),
hypothesis generation, problem generation
System: LISP
Reference:
Brown, R.R. (1982)
Diagnosing bugs in simple procedural skills
in Sleeman & Brown (eds.)
BLOCKS
Subject: Blocks game
Aim: Diagnosis
System: LISP
Reference:
Brown, J.S. and Brown, R. R. (1978)
"A paradigmatic example of an artificially intelligent
instructional system"
Int. J. of Man-Machine Studies, Vol. 10, pp. 232-339.
FGA
Subject: French Grammar
Aim: Analyse free form French sentences
Features: Separation of dictionary, grammar, parser and error
reporting, general shell idea, human controlled
teaching strategy
System: PROLOG
Reference:
Barchan, J. Woodmansee, B.J. and Yazdani, M. (1985)
"A Prolog-based tool for French Grammar Analysis"
Instructional Science, Vol. 14
GUIDON
Subject: Medical diagnosis
Aim: Using MYCIN for tutoring
Features: Overlay student model, case method, separation of domain
knowledge from teaching expertise
System: LISP
Reference:
Clancey, W.J. (1979)
Tutoring rules for guiding a case method dialogue
in Int. J. of Man-Machine Studies, Vol. 11 pp 25-49.
GEOMETRY Tutor
Subject: Geometry
Aim: Monitoring geometry proof problems
Features: Use of production rules to represent 'ideal student
model' and 'bug catalogue'
System: Franz LISP
Reference:
Anderson, J.R., Boyle, C.F. and Yost, G.
The Geometry Tutor
Proceedings of IJCAI-85
INTEGRATION
Subject: Calculus
Aim: To deal with student initiated examples of symbolic
integration
Features: Self-improvement
System: LISP
Reference:
Kimbal, R. (1982)
A self-improving tutor for symbolic integration
in Sleeman and Brown (eds.)
LISP Tutor
Subject: LISP programming
Aim: Teaching of introductory LISP programming
Features: Using deviation from ideal student model
System: Franz LISP on VAX
Reference:
Anderson, J.R. and Reiser, B. (1985)
The LISP Tutor
in Byte Vol. 10 No. 4
LMS (Pixie)
Subject: Algebra equation solving
Aim: Building student models
Features: Given problems and students answers it hypothesizes
models for them; uses rules and mal-rules.
System: LISP
Reference:
Sleeman, D.A. (1983)
Inferring student models for intelligent computer-aided
instruction
in Michalski, R., Carbonell, J. and Mitchell, T. (eds.)
Machine Learning
Springer-Verlag/Tioga Press
MENO
Subject: Pascal programming
Aim: Tutoring novice programmers in the use of planning
Features: Hierarchical representation of correct and incorrect plans
System: LISP
Reference:
Woolf, B. and McDonald, D.D.(1984)
Building a computer tutor: design issues
IEEE Computer, Sept. issue, pp. 61-73
MACSYMA ADVISOR
Subject: Use of MACSYMA
Aim: Articulate users misconceptions about MACSYMA
Features: Representation of plans
System: LISP
Reference:
Genesereth, M.R. (1977)
An automated consultant for MACSYMA
Proceedings of IJCAI-77
NEOMYCIN
Subject: Medical diagnosis
Aim: Using expert systems for tutoring
Features: Separation of domain knowledge from teaching expertise,
automatic explanation of experts' reasoning
System: LISP
Reference:
Hasling, D.W., Clancey, W.J. and Rennels, G. (1984)
Strategic explanations for a diagnostic consultation system
PROUST
Subject: Pascal programming
Aim: Automatic debugger and tutor
Features: Use of problem descriptions
System: GCL LISP on IBM PC (micro-PROUST), LISP on VAXs
Reference:
Johnson, W. L. and Soloway, E. (1985)
PROUST
in Byte Vol. 10, No. 4.
QUADRATIC tutor
Subject: Calculus
Aim: Teaching quadratic equations
Features: Teaching strategy represented as a set of production rules
System: LISP
Reference:
O'Shea, T. (1982)
A self-improving quadratic tutor
in Sleeman and Brown (eds.)
Scholar
Subject: Geography
Aim: Provide mixed-initiative dialogue
Features: Semantic network representation of knowledge
System: LISP
Reference:
Carbonell, J.R. and Collins, A. (1973)
"Natural Semantics in Artificial Intelligence"
Proceedings of IJCAI-73
SOPHIE
Subject: Electronic trouble shooting
Aim: Teaching how an expert trouble shooter copes with rare
faults
Features: Semantic grammar for natural language dialogue,
qualitative knowledge plus simulation, multiple
knowledge sources
System: LISP
Reference:
Brown, J.S., Burton, R. R. and de Kleer, J. (1982)
"Pedagogical, natural language and knowledge engineering
techniques in SOPHIE I, II and III
in Sleeman, D. and Brown, J.S. (eds.)B
SPADE
Subject: LOGO programming
Aim: To facilitate the acquisition of programming skills
Features: Intelligent editor which prompts the student with
menu of design alternatives
Reference:
Miller, M.L. (1982)
A Structured Planning and Debugging Environment
in Sleeman and Brown (eds.)
STEAMER
Subject: Steam plant operation
Aim: Convey qualitative model of a steam plant operation
Features: Good graphics and mathematical model of the plant
System: LISP
Reference:
Holland, J.D., Hutchins, E. L. and Weitzmann, L. (1984)
"STEAMER: An interative inspectable simutation based
training system"
in The AI Magazine, Vol. 5. No. 2
TUTOR
Subject: Highway Code
Aim: Prototype framework for a wide variety of subjects
Features: Semantic grammar implemented in definite clause grammar,
representing value clusters, "what if" facility
System: Prolog on VAX and IBM PC AT
Reference:
Davies, N., Dickens, S. and Ford, L. (1985)
"TUTOR": A prototype ICAI system"
in M. Bramer (ed.) 'Research and Development in Expert
Systems'
Cambridge University Press
WEST
Subject: How the West was Won
Aim: Drill and Practice in arithmetic
Features: Comparison of students' moves with experts' moves,
student model and diagnostic strategies, tutoring expert
System: PLATO
Reference:
Burton, R.R. and Brown, J.S. (1982)
"An investigation of computer coaching for informal learning
activities"
in Sleeman and Brown (eds.)
WHY
Subject: Meteorology
Aim: Tutoring students about processes involved in rainfall
Features: Multiple representations in direct tuition
System: LISP
Reference:
Stevens, A. and Goldin, S. F. (1982)
Misconceptions in student understanding
in Sleeman and Brown (eds.)
WUSOR
Subject: Maze exploration game (Wumpus)
Aim: Teaching logic and probability
Features: Graph structure whose nodes represent rules
System: LISP
Reference:
Goldstein, I. (1982)
"The genetic graph: A representation for evlution of
procedural knowledge"
in Sleeman and Brown (eds.)
------------------------------
End of AIList Digest
********************
∂16-Oct-86 0008 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #216
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 16 Oct 86 00:07:57 PDT
Date: Wed 15 Oct 1986 21:58-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #216
To: AIList@SRI-STRIPE
AIList Digest Thursday, 16 Oct 1986 Volume 4 : Issue 216
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 9 Oct 86 15:23:35 GMT
From: cbatt!ukma!drew@ucbvax.Berkeley.EDU (Andrew Lawson)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)
In article <160@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>On my argument the distinction between the two versions is critical,
>because the linguistic version can (in principle) be accomplished by
>nothing but symbols-in/symbols-out (and symbols in between) whereas
>the robotic version necessarily calls for non-symbolic processes
>(transducer, effector, analog and A/D).
This is not clear. When I look at my surroundings, you are no
more than a symbol (just as is anything outside of my being).
Remember that "symbol" is not rigidly defined most of the time.
When I recognize the symbol of a car heading toward me, I respond
by moving out of the way. This is not essentially different from
a linguistic system recognizing a symbol and responding with another
symbol.
--
Drew Lawson cbosgd!ukma!drew
"Parts is parts." drew@uky.csnet
drew@UKMA.BITNET
------------------------------
Date: 6 Oct 86 18:15:42 GMT
From: mnetor!utzoo!utcsri!utai!me@seismo.css.gov (Daniel Simon)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)
In article <160@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>In reply to (1): The linguistic version of the turing test (turing's
>original version) is restricted to linguistic interactions:
>Language-in/Language-out. The robotic version requires the candidate
>system to operate on objects in the world. In both cases the (turing)
>criterion is whether the system can PERFORM indistinguishably from a human
>being. (The original version was proposed largely so that your
>judgment would not be prejudiced by the system's nonhuman appearance.)
>
I have no idea if this is a relevant issue or a relevant place to bring it up,
but this whole business of the Turing test makes me profoundly suspicious. For
example, we all know about Weizenbaum's ELIZA, which, he claimed, convinced
many clever, relatively computer-literate (for their day) people that it was
intelligent. This fact leads me to some questions which, in my view, ought to
be seriously addressed before the phrase "Turing test" is bandied about (and
probably already have been addressed, but I didn't notice, and will thank
everybody in advance for telling me where to find a treatment of them and
asking me to kindly buzz off):
1) To what extent is our discernment of intelligent behaviour context-
dependent? ELIZA was able to appear intelligent because of the
clever choice of context (in a Rogerian therapy session, the kind
of dull, repetitive comments made by ELIZA seem perfectly
appropriate, and hence, intelligent). Mr. Harnad has brought up
the problem of physical appearance as a prejudicing factor in the
assessment of "human" qualities like intelligence. Might not the
robot version lead to the opposite problem of testers being
insufficiently skeptical of a machine with human appearance (or
even of a machine so unlike a human being in appearance that mildly
human-like behaviour takes on an exaggerated significance in the
tester's mind)? Is it ever possible to trust the results of any
instance of the test as being a true indicator of the properties of
the tested entity itself, rather than those of the environment in
which it was tested?
2) Assuming that some "neutral" context can be found which would not
"distort" the results of the test (and I'm not at all convinced
that such a context exists, or even that the idea of such a context
has any meaning), what would be so magic about the level of
perceptiveness of the shrewdest, most perspicacious tester
available, that would make his inability to distinguish man from
machine in some instance the official criterion by which to judge
intelligence? In short, what does passing (or failing) the Turing
test really mean?
3) If the Turing test is in fact an unacceptable standard, and
building a machine that can pass it an inappropriate goal (and, as
questions 1 and 2 have probably already suggested, this is what I
strongly suspect), are there more appropriate means by which we
could evaluate the human-like or intelligent properties of an AI
system? In effect, is it possible to formulate the qualities that
constitute intelligence in a manner which is more intuitively
satisfying than the standard AI stuff about reasoning, but still
more rigorous than the Turing test?
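For what it's worth, the context effect raised in question 1 is easy to demonstrate: a few pattern-action rules in the style of ELIZA already sustain a Rogerian exchange. (This is a hypothetical sketch, not Weizenbaum's actual script; the rules and replies are invented for illustration.)

```python
import re

# A handful of Rogerian pattern-action rules in the style of ELIZA.
# (Hypothetical sketch; not Weizenbaum's actual script.)
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me about your family."),
]
DEFAULT = "Please go on."  # the dull, all-purpose therapist's prompt

def respond(utterance):
    """Return the first matching canned response, or the default."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

In a therapy session even the default reply reads as attentive; outside that context the same rules are transparently mechanical, which is just the point about context-dependence.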
As I said, I don't know if my questions are legitimate, or if they have already
been satisfactorily resolved, or if they belong elsewhere; I merely bring them
up here because this is the first place I have seen the Turing test brought up
in a long time. I am eager to see what others have to say on the subject.
>Stevan Harnad
>princeton!mind!harnad
Daniel R. Simon
"Look at them yo-yo's, that's the way to do it
Ya go to grad school, get your PhD"
------------------------------
Date: 10 Oct 86 13:47:46 GMT
From: rutgers!princeton!mind!harnad@think.com (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In response to what I wrote in article <160@mind.UUCP>, namely:
>On my argument the distinction between the two versions
>[of the turing test] is critical,
>because the linguistic version can (in principle) be accomplished by
>nothing but symbols-in/symbols-out (and symbols in between) whereas
>the robotic version necessarily calls for non-symbolic processes
>(transducer, effector, analog and A/D).
Drew Lawson replies:
> This is not clear. When I look at my surroundings, you are no
> more than a symbol (just as is anything outside of my being).
> Remember that "symbol" is not rigidly defined most of the time.
> When I recognize the symbol of a car heading toward me, I respond
> by moving out of the way. This is not essentially different from
> a linguistic system recognizing a symbol and responding with another
> symbol.
It's important, when talking about what is and is not a symbol, to
speak literally and not symbolically. What I mean by a symbol is an
arbitrary formal token, physically instantiated in some way (e.g., as
a mark on a piece of paper or the state of a 0/1 circuit in a
machine) and manipulated according to certain formal rules. The
critical thing is that the rules are syntactic, that is, the symbol is
manipulated on the basis of its shape only -- which is arbitrary,
apart from the role it plays in the formal conventions of the syntax
in question. The symbol is not manipulated in virtue of its "meaning."
Its meaning is simply an interpretation we attach to the formal
goings-on. Nor is it manipulated in virtue of a relation of
resemblance to whatever "objects" it may stand for in the outside
world, or in virtue of any causal connection with them. Those
relations, too, are mediated only by our interpretations.
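This notion of manipulation by shape alone can be made concrete with a toy formal system. The tokens and rules below are invented purely for illustration; any "meaning" (reading "A" as a word, say) is an interpretation attached from outside and plays no causal role in the rewriting.

```python
# A toy formal symbol system: tokens are rewritten purely on the basis
# of their shape. Tokens and rules are invented for illustration.
RULES = [
    (("A", "B"), ("B", "A")),  # swap: fires wherever these shapes adjoin
    (("B", "B"), ("C",)),      # merge: two B-shapes become one C-shape
]

def rewrite_once(tokens):
    """Apply the first rule whose left-hand side occurs, scanning left to right."""
    for i in range(len(tokens)):
        for lhs, rhs in RULES:
            if tuple(tokens[i:i + len(lhs)]) == lhs:
                return tokens[:i] + list(rhs) + tokens[i + len(lhs):]
    return tokens  # no rule applies; the string is in normal form
```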
This is why the distinction between symbolic and nonsymbolic processes
in cognition (and robotics) is so important. It will not do to simply
wax figurative on what counts as a symbol. If I'm allowed to use the
word metaphorically, of course everything's a "symbol." But if I stick
to a specific, physically realizable sense of the word, then it
becomes a profound theoretical problem just exactly how I (or any
device) can recognize you, or a car, or anything else, and how I (or it)
can interact with such external objects robotically. And the burden of
my paper is to show that this capacity depends crucially on nonsymbolic
processes.
Finally, apart from the temptation to lapse into metaphor about
"symbols," there is also the everpresent lure of phenomenology in
contemplating such matters. For, apart from my robotic capacity to
interact with objects in the world -- to recognize them, manipulate
them, name them, describe them -- there is also my consciousness: My
subjective sense, accompanying all these capacities, of what it's
like (qualitatively) to recognize, manipulate, etc. That, as I argue
in another paper (and only hint at in the two under discussion), is a
problem that we'd do best to steer clear of in AI, robotics and
cognitive modeling, at least for the time being. We already have our hands
full coming up with a model that can successfully pass the (robotic
and/or linguistic) turing test -- i.e., perform exactly AS IF it had
subjective experiences, the way we do, while it successfully accomplishes
all those clever things. Until we manage that, let's not worry too much
about whether the outcome will indeed be merely "as if." Overinterpreting
our tools phenomenologically is just as unproductive as overinterpreting them
metaphorically.
Stevan Harnad
princeton!mind!harnad
------------------------------
End of AIList Digest
********************
∂16-Oct-86 0248 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #217
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 16 Oct 86 02:45:01 PDT
Date: Wed 15 Oct 1986 22:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #217
To: AIList@SRI-STRIPE
AIList Digest Thursday, 16 Oct 1986 Volume 4 : Issue 217
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 10 Oct 86 15:50:33 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In response to my article <160@mind.UUCP>, Daniel R. Simon asks:
> 1) To what extent is our discernment of intelligent behaviour
> context-dependent?...Might not the robot version [of the
> turing test] lead to the...problem of testers being
> insufficiently skeptical of a machine with human appearance?
> ...Is it ever possible to trust the results of any
> instance of the test...?
My reply to these questions is quite explicit in the papers in
question: The turing test has two components, (i) a formal, empirical one,
and (ii) an informal, intuitive one. The formal empirical component (i)
is the requirement that the system being tested be able to generate human
performance (be it robotic or linguistic). That's the nontrivial
burden that will occupy theorists for at least decades to come, as we
converge on (what I've called) the "total" turing test -- a model that
exhibits all of our robotic and linguistic capacities. The informal,
intuitive component (ii) is that the system in question must perform in a
way that is indistinguishable from the performance of a person, as
judged by a person.
It is not always clear which of the two components a sceptic is
worrying about. It's usually (ii), because who can quarrel with the
principle that a veridical model should have all of our performance
capacities? Now the only reply I have for the sceptic about (ii) is
that he should remember that he has nothing MORE than that to go on in
the case of any other mind than his own. In other words, there is no
rational reason for being more sceptical about robots' minds (if we
can't tell their performance apart from that of people) than about
(other) people's minds. The turing test is ALREADY the informal way we
contend with the "other-minds" problem [i.e., how can you be sure
anyone else but you has a mind, rather than merely acting AS IF it had
a mind?], so why should we demand more in the case of robots? It's
surely not because of any intuitive or a priori knowledge we have
about the FUNCTIONAL basis of our own minds, otherwise we could have put
those intuitive ideas to work in designing successful candidates for the
turing test long ago.
So, since we have absolutely no intuitive idea about the functional
(symbolic, nonsymbolic, physical, causal) basis of the mind, our only
nonarbitrary basis for discriminating robots from people remains their
performance.
As to "context," as I argue in the paper, the only one that is
ultimately defensible is the "total" turing test, since there is no
evidence at all that either capacities or contexts are modular. The
degrees of freedom of a successful total-turing model are then reduced
to the usual underdetermination of scientific theory by data. (It's always
possible to carp at a physicist that his theoretic model of the
universe "is turing-indistinguishable from the real one, but how can
you be sure it's `really true' of the world?")
> 2) Assuming that some "neutral" context can be found...
> what does passing (or failing) the Turing test really mean?
It means you've successfully modelled the objective observables under
investigation. No empirical science can offer more. And the only
"neutral" context is the total turing test (which, like all inductive
contexts, always has an open end, namely, the ever-present possibility
that things could turn out differently tomorrow -- philosophers call
this "inductive risk," and all empirical inquiry is vulnerable to it).
> 3) ...are there more appropriate means by which we
> could evaluate the human-like or intelligent properties of an AI
> system? ...is it possible to formulate the qualities that
> constitute intelligence in a manner which is more intuitively
> satisfying than the standard AI stuff about reasoning, but still
> more rigorous than the Turing test?
I don't think there's anything more rigorous than the total turing
test since, when formulated in the suitably generalized way I
describe, it can be seen to be identical to the empirical criterion for
all of the objective sciences. Residual doubts about it come from
four sources, as far as I can make out, and only one of these is
legitimate. The legitimate one (a) is doubts about autonomous
symbolic processes (that's what my papers are about). The three
illegitimate ones (in my view) are (b) misplaced doubts about
underdetermination and inductive risk, (c) misplaced hold-outs for
the nervous system, and (d) misplaced hold-outs for consciousness.
For (a), read my papers. I've sketched an answer to (b) above.
The quick answer to (c) [brain bias] -- apart from the usual
structure/function and multiple-realizability arguments in engineering,
computer science and biology -- is that as one approaches the
asymptotic Total Turing Test, any objective aspect of brain
"performance" that anyone believes is relevant -- reaction time,
effects of damage, effects of chemicals -- is legitimate performance
data too, including microperformance (like pupillary dilation,
heart-rate and perhaps even synaptic transmission). I believe that
sorting out how much of that is really relevant will only amount to the
fine-tuning -- the final leg of our trek to theoretic Utopia,
with most of the substantive theoretical work already behind us.
Finally, my reply to (d) [mind bias] is that holding out for
consciousness is a red herring. Either our functional attempts to
model performance will indeed "capture" consciousness at some point, or
they won't. If we do capture it, the only ones that will ever know for
sure that we've succeeded are our robots. If we don't capture it,
then we're stuck with a second level of underdetermination -- call it
"subjective" underdetermination -- to add to our familiar objective
underdetermination (b): Objective underdetermination is the usual
underdetermination of objective theories by objective data; i.e., there
may be more than one way to skin a cat; we may not happen to have
converged on nature's way in any of our theories, and we'll never be
able to know for sure. The subjective twist on this is that, apart
from this unresolvable uncertainty about whether or not the objective models
that fit all of our objective (i.e., intersubjective) observations capture
the unobservable basis of everything that is objectively observable,
there may be a further unresolvable uncertainty about whether or not
they capture the unobservable basis of everything (or anything) that is
subjectively observable.
AI, robotics and cognitive modeling would do better to learn to live
with this uncertainty and put it in context, rather than holding out
for the un-do-able, while there's plenty of the do-able to be done.
Stevan Harnad
princeton!mind!harnad
------------------------------
Date: 12 Oct 86 19:26:35 GMT
From: well!jjacobs@lll-lcc.arpa (Jeffrey Jacobs)
Subject: Searle, AI, NLP, understanding, ducks
I. What is "understanding", or "ducking" the issue...
If it looks like a duck, swims like a duck, and
quacks like a duck, then it is *called* a duck. If you cut it open and
find that the organs are something other than a duck's, *then*
maybe it shouldn't be called a duck. What it should be called becomes
open to discussion (maybe dinner).
The same principle applies to "understanding".
If the "box" performs all of what we accept to be the defining requirements
of "understanding", such as reading and responding to the same level as
that of a "native Chinese", then it certainly has a fair claim to be
called "understanding".
Most so-called "understanding" is the result of training and
education. We are taught "procedures" to follow to
arrive at a desired result/conclusion. The primary difference between
human education and Searle's "formal procedures" is a matter
of how *well* the procedures are specified. Education is primarily a
matter of teaching "procedures", whether it be mathematics, chemistry
or creative writing. The *better* understood the field, the more "formal"
the procedures. Mathematics is very well understood, and
consists almost entirely of "formal procedures". (Mathematics
was also once considered the highest form of philosophy and intellectual
attainment).
This leads to the obvious conclusion that humans do not
*understand* natural language very well. Natural language processing
via purely formal procedures has been a dismal failure.
The lack of understanding of natural languages is also empirically
demonstrable. Confusion about the meaning
of a person's words, intentions, etc. can be seen in every
interaction with your boss/students/teachers/spouse/parents/kids,
and so on.
"You only think you understand what I said..."
Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs
USENET: well!jjacobs
"It used to be considered a hoax if there *was* a man in the box..."
------------------------------
Date: 13 Oct 86 22:07:54 GMT
From: ladkin@kestrel.arpa
Subject: Re: Searle, AI, NLP, understanding, ducks
In article <1919@well.UUCP>, jjacobs@well.UUCP (Jeffrey Jacobs) writes:
> Mathematics is very well understood, and
> consists almost entirely of "formal procedures".
I infer from your comment that you're not a mathematician.
As a practicing mathematician (amongst other things), I'd
like to ask precisely what you mean by *well understood*?
And I would like to strongly disagree with your comment that
doing mathematics consists almost entirely of formal procedures.
Are you aware that one of the biggest problems in formalising
mathematics is trying to figure out what it is that
mathematicians do to prove new theorems?
Peter Ladkin
ladkin@kestrel.arpa
------------------------------
Date: 13 Oct 86 17:13:35 GMT
From: jade!entropy!cda@ucbvax.Berkeley.EDU
Subject: Re: Searle, Turing, Symbols, Categories
In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
<as one approaches the
<asymptotic Total Turing Test, any objective aspect of brain
<"performance" that anyone believes is relevant -- reaction time,
<effects of damage, effects of chemicals -- is legitimate performance
<data too, including microperformance (like pupillary dilation,
<heart-rate and perhaps even synaptic transmission).
Does this mean that in order to successfully pass the Total Turing Test,
a robot will have to be able to get high on drugs? Does this imply that the
ability of the brain to respond to drugs is an integral component of
intelligence? What will Ron, Nancy, and the DOD think of this idea?
Turing said that the way to give a robot free will was to incorporate
sufficient randomness into its actions, which I'm sure the DOD won't like
either.
It seems that intelligence is not exactly the quality our government is
trying to achieve in its AI hard and software.
------------------------------
Date: Sat, 11 Oct 86 12:03:27 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.WISC.EDU>
Subject: your paper about category induction and representation
First of all, I'd like a preprint of the full paper.
Judging by the abstract, I have two main criticisms.
The first one is that I don't see your point at all about "categorical
perception". You say that "differences between reds and differences
between yellows look much smaller than equal-sized differences that
cross the red/yellow boundary". But if they look much smaller, this
means they're NOT "equal-sized"; the differences in wave-length may be
the same, but the differences in COLOR are much smaller.
Your whole theory is based on the assumption that perceptual qualities
are something physical in the outside world (e.g., that colors ARE
wave-lengths). But this is wrong. Perceptual qualities represent the
form in which we perceive external objects, and they're determined both
by external physical conditions and by the physical structure of our
sensory apparatus; thus, colors are determined both by wave-lengths and
by the physical structure of our visual system. So there's no a priori
reason to expect that equal-sized differences in wave-length will lead
to equal-sized differences in color, or to assume that deviations from
this rule must be caused by internal representations of categories. And
this seems to completely cut the grounds from under your theory.
My second criticism is that, even if "categorical perception" really
provided a base for a theory of categorization, it would be very
limited; it would apply only to categories of perceptual qualities. I
can't see how you'd apply your approach to a category such as "table",
let alone "justice".
Actually, there already exists a theory of categorization that is along
similar lines to your approach, but integrated with a detailed theory
of perception and not subject to the two criticisms above; that is the
Objectivist theory of concepts. It was presented by Ayn Rand in her
book "Introduction to Objectivist Epistemology", and by David Kelley in
his paper "A Theory of Abstraction" in Cognition and Brain Theory vol.
7 pp. 329-57 (1984); this theory was integrated with a theory of
perception, and applied to categories of perceptual qualities, and in
particular to perception of colors and of phonemes, in the second part
of David Kelley's book "The Evidence of the Senses".
Eyal Mozes
BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ...!ihnp4!talcott!WISDOM!eyal
Physical address: Department of Applied Math.
Weizmann Institute of Science
Rehovot 76100
Israel
------------------------------
End of AIList Digest
********************
∂16-Oct-86 0507 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #218
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 16 Oct 86 05:06:53 PDT
Date: Wed 15 Oct 1986 22:09-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #218
To: AIList@SRI-STRIPE
AIList Digest Thursday, 16 Oct 1986 Volume 4 : Issue 218
Today's Topics:
Queries - Lisp Machine Discussion List & PROLOG Dialects for VAX/VMS,
AI Tools - Bug in Turbo Prolog & Garbage Collection,
Seminar - Learning Apprentice Systems (UMD),
Conferences - Machine Vision &
Society for Philosophy and Psychology
----------------------------------------------------------------------
Date: Wed 15 Oct 86 14:04:23-EDT
From: Arun <Welch%OSU-20@ohio-state.ARPA>
Subject: Lisp machine discussion list
As AIList gets deluged again with lisp machine stuff, I guess it's
time to ask again: "Is there enough demand for a separate discussion
list for lisp machines?" I asked last time this happened, and there
wasn't much reaction from the world. There are discussion groups for
the equipment from each of the major manufacturers (info-1100,
info-ti-explorer, slug, sun-spots, apollo), and even for some of the
flavors of lisp (Franz-friends, info-xlisp), but nothing for
discussing the relative merits of the different implementations of
lisp for workstations, hardware qualities, maintenance, directions that
users would like to see workstations evolve towards, what things one
likes/hates in lisp programming environments, and so on. I'm willing to
work on starting up a mailing list and administer it if there is a
large enough demand. Obviously, this is an inappropriate discussion for
AIList.
...arun
Arun Welch
Lab for AI Research, Ohio State University.
{ihnp4,cbosgd}!osu-eddie!welch
welch@ohio-state.{CSNET,ARPA}
welch@red.rutgers.edu (a guest account, but mail gets to me eventually)
------------------------------
Date: Wed, 15 Oct 86 09:35 N
From: DEGROOT%HWALHW5.BITNET@WISCVM.WISC.EDU
Subject: PROLOG-dialects-info wanted for VAX/VMS
WANTED:
Information about PROLOG dialects and implementations
for VAX/VMS, public-domain or commercially available.
Send any pointers, references and the like to:
Kees de Groot (DEGROOT@HWALHW5.BITNET)
Tel. +31-8370-(8)3557/4030
Agricultural University, Computer-centre, Wageningen, the Netherlands
"THERE AINT NO SUCH THING AS A FREE LUNCH!"
DISCLAIMER: My opinions are my own alone and do not represent
any official position of my employer.
------------------------------
Date: Wed, 15 Oct 86 15:19:44 EDT
From: David_West%UB-MTS%UMich-MTS.Mailnet@MIT-MULTICS.ARPA
Subject: Bug in Turbo Prolog
Most criticisms of Turbo Prolog have been only flames, but the
following is, I think, an actual bug. If member is defined by:
member(H,[H|_]):-!.
member(H,[_|T]):-member(H,T).
the goal:
member([1,X],[[3,4],[1,2]]).
will succeed (binding X to 2) or FAIL if the domain of the
lowest level list elements is declared as integer or reference
integer, respectively.
It might be argued that this choice (whether or not to specify
reference) is the user's responsibility, as in Algol-like
languages; my view is that reference declarations are (like
register declarations in C) "advice to the compiler", which
should not alter the semantics of the program. This seems
reasonable because:
1) the Turbo Prolog compiler will on its own initiative retype
domains from value to reference, so it can't consider
the distinction to affect the semantics; and
2) the abovementioned goal fails ONLY if the cut is present
in the first clause of member; without this cut, Turbo
Prolog (with or without reference specified) gives the same
result as do other Prologs (for which, as expected, the
presence or absence of the cut does not affect the result).
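For readers without Turbo Prolog at hand, here is a minimal sketch of the unification at issue, written in Python; `Var`, `unify`, and `member` are illustrative stand-ins, not Turbo Prolog's machinery. It shows what the standard semantics requires: the goal succeeds with X bound to 2, and a declared domain should not change that.

```python
class Var:
    """A logic variable; the name is only for display."""
    def __init__(self, name):
        self.name = name

def unify(a, b, subst):
    """Return a substitution extending `subst` that unifies a and b, or None."""
    if isinstance(a, Var):
        if a in subst:
            return unify(subst[a], b, subst)
        extended = dict(subst)
        extended[a] = b
        return extended
    if isinstance(b, Var):
        return unify(b, a, subst)
    if isinstance(a, list) and isinstance(b, list):
        if len(a) != len(b):
            return None
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

def member(elem, lst):
    """member/2 committed to its first solution, like the cut version above."""
    for item in lst:
        subst = unify(elem, item, {})
        if subst is not None:
            return subst
    return None

X = Var("X")
solution = member([1, X], [[3, 4], [1, 2]])
# Standard semantics: the goal succeeds, binding X to 2, with or
# without the cut in the first clause.
```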
------------------------------
Date: Tue 14 Oct 86 20:02:05-EDT
From: Arun <Welch%OSU-20@ohio-state.ARPA>
Subject: Re: Garbage Collection
>From: garren@STONY-BROOK.SCRC.SYMBOLICS.COM (Scott Garren)
>Date: 6 Oct 86 14:05:00 GMT
>
>Relative to discussions of garbage collectors I would like to point out
>that there are issues of scale involved. Many techniques that work
>admirably on an address space limited to 8 Mbytes (Xerox hardware)
>do not scale at all well to systems that support up to 1 Gbytes
>(Symbolics).
To pick a nit here, the Xerox machines are capable of addressing up to 32Mb.
Arun Welch
{ihnp4,cbosgd}!osu-eddie!welch
welch@ohio-state.{CSNET,ARPA}
welch@red.rutgers.edu (a guest account, but mail gets to me eventually)
------------------------------
Date: Tue, 14 Oct 86 13:27:33 EDT
From: SubbaRao Kambhampati <rao@cvl.umd.edu>
Subject: Seminar - Learning Apprentice Systems (UMD)
Title: Learning Apprentice Systems
Speaker: Prof. Tom Mitchell, Carnegie-Mellon University
Location: Rm. 2324 Dept of CS, U of MD, College Park
Time: 4:00pm
We consider a class of knowledge-based systems called Learning
Apprentices: systems that provide interactive aid in solving some problem,
and that automatically acquire new knowledge by observing the actions of
their users. The talk focuses on a particular Learning Apprentice, called
LEAP, which is presently being developed in the domain of digital circuit
design. LEAP is able to infer rules that characterize how to implement
classes of circuit functions, by analyzing circuit fragments contributed by
its users. The organization of LEAP suggests how similar learning
apprentices might be constructed in a variety of task domains.
(Refreshments will be served at 3:30pm in Rm. 3316)
------------------------------
Date: Wed 15 Oct 86 10:52:16-PDT
From: Sandy Pentland <PENTLAND@SRI-IU.ARPA>
Subject: Conference - Machine Vision
FINAL CALL FOR PAPERS:
Optical Society Topical Meeting on
MACHINE VISION
March 18-20, 1987
Hyatt Lake Tahoe, Incline Village, Nevada
Topics will include: 3-D vision algorithms, image understanding,
object recognition, motion analysis, feature extraction, novel
processing hardware, novel sensors, and VLSI applications. Also,
skiing.
Invited speakers include: Bob Bolles (SRI), Peter Burt (RCA),
Roger Tsai (IBM), Demetri Terzopoulos (SPAR), Rodger Dewar (Perceptron),
J. Lowrie (Martin Marietta), P. Tamura and K. Coppock (Westinghouse),
C. Jacobus (ERIM).
Program committee: Alex Pentland, Glenn Sincerbox (co-chairs),
Keith Nishihara, Harlyn Baker, Chris Goad, Steven Case, Aaron Gara,
Charles Jacobus, Timothy Strand, Richard Young.
WHAT TO SUBMIT: 25 WORD abstract and separate 4 PAGE camera-ready
summary on standard 8 1/2 x 11 paper. The summary must begin with the paper
title and the authors' names and addresses; authors should submit the original
and one copy of both the abstract and the summary. Send your paper to:
Optical Society of America
Machine Vision
1816 Jefferson Place, N.W.
Washington, D.C. 20036
DEADLINE: Nov. 3, 1986
------------------------------
Date: 11 Oct 86 04:55:29 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Society for Philosophy & Psychology: CALL FOR PAPERS
[Please post hard copy locally]
SOCIETY FOR PHILOSOPHY AND PSYCHOLOGY
Call for Papers for 1987 Annual Meeting
University of California at San Diego, June 21-23, 1987
The Society for Philosophy and Psychology is calling for contributed
papers and symposium proposals for its 13th annual meeting in San Diego.
The Society consists of psychologists, philosophers, and other
cognitive scientists with common interests in the study of behavior,
cognition, language, the nervous system, artificial intelligence,
consciousness, and the foundations of psychology.
Past participants in annual meetings have included: N. Chomsky,
D. Dennett, J. Fodor, C. R. Gallistel, J. J. Gibson, S. J. Gould,
R. L. Gregory, R. J. Herrnstein, D. Hofstadter, J. Jaynes, G. A. Miller,
H. Putnam, Z. Pylyshyn, W. V. Quine, R. Schank, W. Sellars and
P. Teitelbaum.
Contributed Papers are refereed and selected on the basis of quality
and relevance to both psychologists and philosophers. Psychologists,
neuroscientists, linguists, computer scientists and biologists are
encouraged to report experimental, theoretical and clinical work that
they judge to have philosophical significance.
Contributed papers are for oral presentation and should not exceed a
length of 30 minutes (about 12 double-spaced pages). The deadline for
submission is 12 January 1987. Please send three copies to the
Program Chairman:
Professor William Bechtel
Society for Philosophy and Psychology
Department of Philosophy
Georgia State University
Atlanta GA 30303-3083
Phone: (404) 658-2277
Symposium proposals should also be sent to the above address as soon
as possible.
Local Arrangements: Professor Patricia Kitcher, B-002, Department of
Philosophy, University of California at San Diego, La Jolla CA 92093.
Individuals interested in becoming members of the Society should send
$15 membership dues ($5 for students) to Professor Kitcher at the
above address.
SPP Officers: President: Stevan Harnad (Behavioral & Brain Sciences)
President-Elect: Alvin I. Goldman (U. Arizona)
Secretary Treasurer: Patricia Kitcher (UCSD)
Program Chairman: William Bechtel (U. Georgia)
Executive Committee:
Myles Brand (U. Arizona) R. S. Jackendoff (Brandeis)
Daniel Dennett (Tufts) William Lycan (U. N. Carolina)
Fred Dretske (U. Wisconsin) John Macnamara (McGill)
Jerome A. Feldman (U. Rochester) Carolyn Ristau (Rockefeller)
Janet Fodor (CUNY) Anne Treisman (UC, Berkeley)
Alison Gopnik (U. Toronto) Robert Van Gulick (Syracuse U.)
Charles C. Wood (Yale)
------------------------------
End of AIList Digest
********************
∂16-Oct-86 0807 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #219
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 16 Oct 86 08:07:44 PDT
Date: Wed 15 Oct 1986 22:24-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #219
To: AIList@SRI-STRIPE
AIList Digest Thursday, 16 Oct 1986 Volume 4 : Issue 219
Today's Topics:
Bibliographies - Report Sources & Leff Citation Definitions &
Bibliography of AI Applications
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Report Sources
Computer and Information Science Department
University of Oregon
Eugene, OR 97403
Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92717
Research Institute for Advanced Computer Science
Mail Stop 230-5, Nasa/Ames Research Center
Moffett Field, California 94035
Attention: Technical Librarian
Library Committee
Department of Computer Science
University at Buffalo (SUNY)
226 Bell Hall
Buffalo, NY 14260
Prices are U. S. / Other Countries.
Department of Computer Sciences
Technical Report Center
Taylor Hall 2.124
The University of Texas at Austin
Austin, Texas 78712-1188
CS.TECH@UTEXAS-20
Arizona State University
Computer Science Department
Engineering and Applied Sciences
Tempe, Arizona 85287
Computer Science Department
New Mexico Tech
Socorro, NM 87801
Technical Reports Librarian
Princeton University
Department of Computer Science
Princeton, NJ 08544
Computing Research Laboratory
University of Michigan
Room 2222 Electrical Engineering and Computer Science Building
Ann Arbor, Michigan 48109
Department of Computer Science and Engineering
Oregon Graduate Center
19600 N. W. von Neumann Drive
Beaverton, Oregon 97006-1999
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: defs for ai.bib37C and ai.bib40C
D BOOK50 International Symposium on Logic Programming\
%D 1984
D MAG61 Proceedings of the 1986 Symposium on Symbolic and\
Algebraic Computation\
%D JUL 21-23 1986
D MAG60 SIGSAM Bulletin\
%V 19\
%N 3\
%D AUG 1985
D MAG62 Image and Vision Computing\
%V 4\
%N 2\
%D MAY 1986
D MAG63 International Journal of BioMedical Computing\
%V 19\
%N 1\
%D JUL 1986
D MAG64 The Computer Journal\
%V 29\
%N 3\
%D JUN 1986
D MAG65 International Journal of Man-Machine Studies\
%V 24\
%N 2\
%D FEB 1986
D MAG66 Pattern Recognition\
%V 19\
%N 3\
%D 1986
D MAG67 Robotersysteme\
%V 2\
%N 2\
%D 1986
D MAG68 Review of The Electrical Communications Laboratories\
%V 34\
%N 3\
%D MAY 1986
D MAG69 Siemens Forschungs-und Entwicklungsberichte\
%V 15\
%N 3\
%D 1986
D MAG70 Computers in Biology and Medicine\
%V 16\
%N 3\
%D 1986
D MAG71 Future Generations Computer Systems\
%V 2\
%N 1\
%D MAR 1986
D MAG72 Computers and Artificial Intelligence\
%V 4\
%N 6\
%D 1985
D MAG73 Computer Vision, Graphics, and Image Processing\
%V 32\
%N 1\
%D OCT 1985
D MAG74 Computer Vision, Graphics and Image Processing\
%V 32\
%N 2\
%D NOV 1985
D MAG75 Infor\
%V 23\
%N 4\
%D NOV 1985
D MAG76 Pattern Recognition Letters\
%V 3\
%N 5\
%D SEP 1985
D MAG77 Fuzzy Sets and Systems\
%V 17\
%N 2\
%D NOV 1985
D MAG78 Kybernetika\
%V 21\
%N 5\
%D 1985
D BOOK51 Functional Programming Languages and Computer Architecture\
%E J. P. Jouannaud\
%S Lecture Notes in Computer Science\
%V 201\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK52 Automata, Languages and Programming\
%S Lecture Notes in Computer Science\
%V 201\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D MAG79 Journal of Logic Programming\
%V 2\
%D 1985\
%N 4
D MAG80 Computer Vision, Graphics, and Image Processing\
%V 35\
%N 1\
%D JUL 1986
D MAG81 Pattern Recognition Letters\
%V 4\
%N 2\
%D APR 1986
D MAG82 International Journal of Man-Machine Studies\
%V 24\
%N 4\
%D APR 1986
D MAG83 Journal of Parallel and Distributed Computing\
%V 3\
%N 2\
%D JUN 1986
D MAG84 Cybernetics and Systems\
%V 17\
%N 1\
%D 1986
D BOOK53 Architectures and Algorithms For Digital Image Processing\
%S Proceedings of the Society of Photo-Optical Instrumentation Engineers\
%V 596\
%E M. J. B. Duff\
%E H. J. Siegel\
%E F. J. Corbett\
%D 1986
D MAG85 Journal of Logic Programming\
%V 3\
%D 1986\
%N 1
D BOOK54 Rewriting Techniques and Applications (Dijon 1985)\
%S Lecture Notes in Computer Science\
%V 202\
%I Springer-Verlag\
%C Berlin-Heidelberg-New York\
%D 1985
D BOOK55 Topics in the Theoretical Bases and Applications of Computer Science\
%I Akad. Kiado\
%C Budapest\
%D 1986
D MAG86 Computer Vision, Graphics and Image Processing\
%V 35\
%N 23\
%D AUG 1986
D MAG87 Computers and Operations Research\
%V 13\
%N 2-3\
%D 1986
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: Bibliography of AI Applications
In response to several queries on AI applications in Engineering
appearing in this forum, I am providing bibliographies on the following areas:
1) AI applications to Electrical Engineering
2) AI applications to Mechanical and Structural Engineering
3) AI applications to other Engineering Aspects
4) AI applications to Optimization
Other sources of information are the International Journal for
Artificial Intelligence in Engineering including the news section
thereof, the Proceedings of the 1986 AAAI Workshop on Knowledge Based
Expert Systems for Engineering Design, and the bibliography section of the first
paper under "EE references".
Most of these are NOT included in the bib series of AI materials coming from
the same login and appearing from time to time in AIList. Ignore the %W code.
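For anyone filing these entries by program: the records use refer-style % fields, and long titles wrap onto continuation lines with no leading %. The sketch below is illustrative only and not part of the bibliography; its record-boundary rule (a %A line following a non-%A field starts a new record) is an assumption inferred from the entries' layout.

```python
# Illustrative sketch (not part of the digest): split a list of
# refer-style bibliography lines, as formatted in this file, into records.
# Assumptions inferred from the entries' layout:
#   - every field line starts with a %-tag;
#   - a %A line following a non-%A field starts a new record;
#   - a line with no leading % continues the previous field's value.

def parse_refer(lines):
    records, rec, last_tag = [], [], None
    for line in lines:
        line = line.rstrip()
        if not line:
            continue
        if line.startswith('%'):
            tag, _, val = line.partition(' ')
            if tag == '%A' and last_tag not in (None, '%A'):
                # a fresh author block after a completed record
                records.append(rec)
                rec = []
            rec.append((tag, val))
            last_tag = tag
        elif rec:
            # continuation line: join onto the previous field's value
            tag, val = rec[-1]
            rec[-1] = (tag, val + ' ' + line.strip())
    if rec:
        records.append(rec)
    return records
```

With input shaped like the entries below, wrapped titles such as "Knowledge-Based Expert Systems; Collected Papers" come back as a single %T value.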
EE references
%A D. Sriram
%T Knowledge-Based Expert Systems; Collected Papers
%I Department of Civil Engineering, Carnegie Mellon University
%C Pittsburgh, PA
%K AIME AIEE AIOE
%W 15V
%A C. Ronald Green
%A Sajjan G. Shiva
%T PECOS-An Expert Hardware Synthesis System
%J Proceedings of the Fifth International Workshop on Expert
Systems and Applications
%D 1985
%K AIEE
%W 17R
%A Hyung-Sik Park
%A Waldo C. Kobat
%T KnowPLACE: Knowledge-Based Placement of PCB's
%J Proceedings of the Fifth International Workshop on Expert
Systems and Applications
%D 1985
%K AIEE
%W 17S
%A Viviane Jonckers
%T Knowledge Based Selection and Coordination of Specialized
Algorithms
%J Proceedings of the Fifth International Workshop on Expert
Systems and Applications
%D 1985
%K AIEE
%W 17U
%A C. Delorme
%A F. Roux
%A L. Demians Archimbaud
%A M. Giambiasi
%A R. L'Bath
%A S. Mac Gee
%A R. Charroffin
%T A Functional Partitioning Expert System for Test Sequence Generation
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20Mc
%A John Granacki
%A David Knapp
%A Alice Parker
%T The ADAM Advanced Design Automation System: Overview, Planner and Natural
Language Interface
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20N
%A Gotaro Odawara
%A Kazuhiko Iijima
%A Kazutoshi Wakabayashi
%T Knowledge-Based Placement Technique for Printed Wiring Boards
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20O
%A M. Giambiasi
%A B. Mc Gee
%A R. Lbath
%A L. Demians Archimbaud
%A C. Delorme
%A P. Roux
%T An Adaptive and Evolutive Tool for Describing General Hierarchical Models,
Based on Frames and Demons
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20P
%A Kai-Hsiung Chang
%A William G. Wee
%T A Knowledge Based Planning System for Mechanical Assembly Using Robots
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20Q
%A Rostam Joobbani
%A Daniel P. Siewiorek
%T Weaver: A Knowledge Based Routing Expert
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20R
%A M. A. Breuer
%A Xi-an Zhu
%T A Knowledge-Based System for Selecting a Test Methodology for a PLA
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20S
%A Tariq Samad
%A Stephen W. Director
%T Towards a Natural Language Interface for CAD
%J Proceedings of the 22nd Design Automation Conference
%D 1985
%K AIEE
%W 20T
%A Hugo J. de Man
%A I. Bolsens
%A Erik Vanden Meersch
%A Johan van CleynenBreugel
%T DIALOG: An Expert Debugging System for MOS VLSI Design
%J IEEE Transactions on Computer-Aided Design
%V CAD-4
%N 3
%D JUL 1985
%K AIEE
%W 20U
%A D. A. Lowther
%A C. M. Saldanha
%A G. Choy
%T The Application of Expert Systems to CAD in Electromagnetics
%J IEEE Transactions on Magnetics
%V MAG-21
%N 6
%D NOV 1985
%K AIEE
%W 20V
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Mechanical and Structural Engineering
%A J. S. Bennett
%T SACON: A Knowledge-based Consultant for Structural Analysis
%R HPP-78-23
%I Computer Science Department, Stanford University
%D SEP 1978
%K AIME
%W 9L
%A J. S. Bennett
%A R. Englemore
%T SACON: A Knowledge-based Consultant for Structural Analysis
%J Proceedings of the Sixth IJCAI
%P 47-49
%D 1979
%K AIME
%W need
%A R. Fjelheim
%A P. Syversen
%T An Expert System for SESAM-69 Program Selection
%R Computas Report 83-6010
%C Norway
%D 1983
%W rbm
%K AIME
%A L. A. Lopez
%A S. L. Elam
%A T. Christopherson
%T SICAD: A Prototype Implementation System for CAD
%B Proceedings of the ASCE Third Conference on Computers in Civil
Engineering
%C San Diego, California
%D April 1984
%P 84-93
%K AIME
%W 13XYZ
%A R. J. Melosh
%A V. Marcal
%A L. Berke
%T Structural Analysis Consultation using Artificial Intelligence
%B Research in Computerized Structural Analysis and Synthesis
%I NASA
%C Washington, D. C.
%D OCT 1978
%W TR13
%K AIME
%A L. A. Lopez
%A D. Rehak
%T Computer-Aided Engineering: Problems and Prospects
%R Civil Engineering System Laboratory Research Series 8
%I University of Illinois
%D July 1981
%K AIME
%W TR8
%A J. M. Rivlin
%A M. B. Hsu
%A P. V. Marcal
%T Knowledge-based Consultant for Finite Element Analysis
%R Technical Report AFWAL-TR-80-3069
%I Flight Dynamics Laboratory (FIBRA), Wright-Patterson Air Force Base
%D May 1980
%W need
%K AIME
%A A. D. Radford
%A P. Hung
%A J. S. Gero
%T New Rules of Thumb from Computer-Aided Structural Design: Acquiring Knowledge
for Expert Systems
%J Proceedings CADD-84
%C United Kingdom
%D 1984
%W rbm
%K AIME
%A J. S. Gero
%A A. D. Radford
%T Knowledge Engineering in Computer Graphics
%J First Australasian Conference on Computer Graphics
%C Sydney, Australia
%D Aug 31-Sep 2 1983
%K AIME
%W rbm
%P 140-143
%A D. Sriram
%A M. L. Maher
%A J. Bielak
%A S. J. Fenves
%T Expert Systems for Civil Engineering - A Survey
%R Technical Report R-82-137
%I Department of Civil Engineering, Carnegie-Mellon University
%D June 1982
%K AIME
%W ua
%A D. Sriram
%A M. Maher
%A S. Fenves
%T Applications of Expert Systems in Structural Engineering
%B Proceedings Conference on Artificial Intelligence
%C Rochester, MI
%D APR 1983
%P 379-394
%K AIME
%W need
%A M. L. Maher
%A D. Sriram
%A S. J. Fenves
%T Tools and Techniques for Knowledge Based Expert Systems for Engineering Design
%B Advances in Engineering Software
%D 1984
%K AIME
%W 13L
%A D. Rehak
%A C. Howard
%A D. Sriram
%T Architecture of an Integrated Knowledge Based Environment for Structural
Engineering Applications
%J IFIP WG5.2 conference on Knowledge Engineering in Computer-Aided Design
%D SEP 1984
%C Budapest, Hungary
%K AIME
%W 13J
%A D. Sriram
%A M. Maher
%A S. Fenves
%T Knowledge-based Expert Systems in Structural Design
%J NASA Conference on Advances in Structures and Dynamics
%D OCT 1984
%K AIME
%W 13K
%A R. Reddy
%A D. Sriram
%A N. Tyle
%A R. Baneres
%A M. Rychener
%A S. J. Fenves
%T Knowledge-based Expert Systems for Engineering Applications
%J Proceedings IEEE International Conference on Systems, Man and Cybernetics
%D DEC 1983 - JAN 1984
%C India
%K AIME
%W 13I
%A J. S. Bennett
%A R. S. Engelmore
%T SACON: A Knowledge Based Consultant for Structural Analysis
%J Proceedings Sixth IJCAI
%P 47-49
%D 1979
%W 12H
%K AIME
%A D. Sriram
%T A Bibliography on Knowledge-Based Expert Systems in Engineering
%J SIGART
%P 32-40
%D JUL 1984
%W 12I
%K AIME
%A H. Yoshiura
%A Kikuo Fujimura
%A T. L. Kunii
%T Top-Down Construction of 3-D Mechanical Object Shapes from Engineering
Drawings
%J COMPUTER
%D December 1984
%P 32-40
%K AIME
%W 14D
%A D. C. Brown
%A B. Chandrasekaran
%T Expert Systems for a Class of Mechanical Design Activity
%J IFIP WG5.2 Working Conference on Knowledge Engineering in Computer Aided
Design
%C Budapest, Hungary
%D SEP 1984
%W 14L
%K AIME
%A D. C. Brown
%T Capturing Mechanical Design Knowledge
%I Computer Science Department
%C Worcester, Massachusetts
%W 14M
%K AIME
%A J. S. Arora
%A G. Baenziger
%T Uses of Artificial Intelligence in Design Optimization
%J Computer Methods in Applied Mechanics and Engineering
%V 54
%N 3
%D MAR 1986
%P 303-324
%K AIME OPT
%A D. Sriram
%T Knowledge-Based Expert Systems; Collected Papers
%I Department of Civil Engineering, Carnegie Mellon University
%C Pittsburgh, PA
%K AIME AIEE AIOE
%W 15V
%A D. Sriram
%A S. J. Fenves
%T Destiny: A Knowledge-Based Approach to Integrated Structural
Design
%I Department of Civil Engineering, Carnegie Mellon University
%C Pittsburgh, PA
%K AIME
%W 15W
%A T. A. Nguyen
%A W. A. Perkins
%A T. J. Laffey
%T Application of LES to Advanced Design Verification
%I Lockheed Research and Development
%K AIME
%W 16P
%A H. L. Li
%A P. Papalambros
%T A Production System for Use of Global Optimization Knowledge
%J JOMTAD
%V 107
%D JUN 1985
%P 277-284
%W 18H
%K AIME OPT
%A J. W. Hou
%T Shape Optimization of Elastic Hollow Bars
%J JOMTAD
%V 107
%D MAR 1985
%P 100-105
%W 18I
%K AIME SO
%A Hitoshi Furuta
%A King-Sun Tu
%A James T. P. Yao
%T Structural Engineering Applications of Expert Systems
%J CAD
%V 17
%N 9
%D NOV 1985
%P 410-419
%K AIME
%W 19Mc
%A Mary Lou Maher
%T Hi-Rise and Beyond: Directions for Expert Systems in Design
%J CAD
%V 17
%N 9
%D NOV 1985
%P 420-426
%K AIME
%W 19N
%A A. D. Radford
%A J. S. Gero
%T Towards Generative Expert Systems for Architectural Detailing
%J CAD
%V 17
%N 9
%D NOV 1985
%P 428-434
%K AIME
%W 19O
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Other Engineering Fields not included in the above
%A D. Sriram
%T Knowledge-Based Expert Systems; Collected Papers
%I Department of Civil Engineering, Carnegie Mellon University
%C Pittsburgh, PA
%K AIME AIEE AIOE
%W 15V
%A Mihai Barbuceanu
%T A Domain Independent Architecture for Design Problem Solving
%J Proceedings of the Fifth International Workshop on Expert
Systems and Applications
%D 1985
%K AIOE
%W 17E
%A Mihai Barbuceanu
%T An Object-Centered Framework for Expert Systems in Computer-Aided
Design
%B Knowledge Engineering in CAD
%I North Holland
%E S. Gero
%D 1985
%P 223-253
%K AIOE
%W 17F
%A Roland Rehmart
%A Kristian Sandohl
%A Olaf Granstedt
%T Knowledge Organization in an Expert System for Spot-Welding Robot
Configuration
%J Proceedings of the Fifth International Workshop on Expert
Systems and Applications
%D 1985
%K AIOE
%W 17V
%A Toshinori Watanabe
%A Yoshiaki Nagai
%A Chizuko Yasunobu
%A Koji Sasaki
%A Toshiro Yamanaka
%T An Expert System for Computer Room Facility Layouts
%J Proceedings of the Fifth International Workshop on Expert
Systems and Applications
%D 1985
%K AIOE
%W 17W
%A Larry F. Huggins
%A John R. Barrett
%A Don D. Jones
%T Expert Systems: Concepts and Opportunities
%J Agricultural Engineering
%D JAN/FEB 1986
%V 67
%N 1
%P 21-24
%K AIOE
%A Fabian C. Hadipriono
%A Hock-Siew Toh
%T Approximate Reasoning Models for Consequences on Structural
Component Due to Failure Events
%K AIOE
%W 19F
%A Michael Al. Rosenman
%A John S. Gero
%T Design Codes as Expert Systems
%J CAD
%V 17
%N 9
%D NOV 1985
%P 399-409
%K AIOE
%W 19G
%A David C. Brown
%T Failure Handling in a Design Expert System
%J CAD
%V 17
%N 9
%D NOV 1985
%P 436-441
%K AIOE
%W 19R
%A Michael J. Pazzani
%T Refining the Knowledge Base of a Diagnostic Expert System:
An Application of Failure-Driven Learning
%I The Aerospace Corporation
%K AIOE
%W 20B
%A Donald E. Brown
%A Chelsea C. White, III
%T An Expert System Approach to Boiler Design
%I Department of Systems Engineering, University of Virginia
%K AIOE
%W 20C
%A Ernest Davis
%T Conflicting Requirements in Reasoning About Solid Objects
%K AIOE
%W 20D
%A Daniel R. Rehak
%A H. Craig Howard
%T Interfacing Expert Systems with Design Databases in Integrated CAD
Systems
%J CAD
%V 17
%N 9
%D NOV 1985
%P 443-454
%W 20E
%K AIOE
%A Paul A. Fishwick
%T The Role of Process Abstraction in Simulation
%I Department of Computer and Information Science, University of Pennsylvania
%K AIOE
%W 20F
%A Mark Wynot
%T Artificial Intelligence Provides Real-Time Control of DEC's Material
Handling Process
%J IE
%D APR 1986
%P 34+
%K AIOE
%W 20G
%A Jeannette M. Wing
%A Farhad Arbab
%T Geometric Reasoning: A New Paradigm for Processing Geometric Information
%I Department of Computer Science, Carnegie-Mellon University
%K AIOE
%W 20K
%A T. J. Grant
%T Lessons for OR from AI: A Scheduling Case Study
%J J. Opl Res. Soc
%V 37
%N 1
%P 41-57
%D 1986
%W 20L
%K AIOE
%A Richard S. Shirley
%A David A. Fortin
%T Developing an Expert System for Process Fault Detection and Analysis
%J Intech
%P 51-58
%D APR 1986
%K AIOE
%W 20M
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Applications of AI to Optimization
%A Jasbir S. Arora
%A G. Baenziger
%T Uses of Artificial Intelligence in Design Optimization
%J Computer Methods in Applied Mechanics and Engineering
%V 54
%D 1986
%P 303-323
%K AI OPT
%W 19C
%A S. Azarm
%A P. Papalambros
%T A Case for a Knowledge-Based Active Set Strategy
%J JOMTAD
%D MAR 1984
%P 77-81
%V 106
%K OPT AI
%W 19H
%A Alice M. Agogino
%A Ann S. Almgren
%T Symbolic Computation in Computer-Aided Optimal Design
%I Department of Mechanical Engineering, University of California, Berkeley
%D JUL 10, 1986
%K OPT AI
%W 20I
------------------------------
End of AIList Digest
********************
∂17-Oct-86 0045 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #220
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 17 Oct 86 00:45:06 PDT
Date: Thu 16 Oct 1986 22:09-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #220
To: AIList@SRI-STRIPE
AIList Digest Friday, 17 Oct 1986 Volume 4 : Issue 220
Today's Topics:
Bibliography - Leff Bibliography Continuation #1
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Bibliography (continued)
%A V. F. Shangin
%T Industrial Robots for Miniature Parts
%I Mashinostroenie
%C Moscow
%D 1985
%K AT15 AI07 AA26
%A E. P. Popov
%A E. I. Yurevich
%T Robotics Engineering
%I Mashinostroenie
%C Leningrad
%D 1984
%K AT15 AI07 AA26
%A L. S. Yampolskii
%T Industrial Robotics
%I Tekhnika
%C Kiev
%D 1984
%K AT15 AI07 AA26
%A Thaddeus J. Kowalski
%T An Artificial Intelligence Approach to VLSI Design
%I Kluwer Academic Publishers
%C Norwell, MA
%D 1985
%K AA04 AT15
%X 238 pages $34.95 ISBN 0-89838-169-X
%A Rostam Joobbani
%T An Artificial Intelligence Approach to VLSI Routing
%I Kluwer Academic Publishers
%C Norwell, MA
%D 1985
%K AA04 AT15
%X 192 pages ISBN 0-89838-205-x $34.50
%A Narinder Pal Singh
%T An Artificial Intelligence Approach to Test Generation
%I Kluwer Academic Publishers
%C Norwell, MA
%K AA04 AT15
%X forthcoming, price and availability to be announced
%A Anita Tailor
%A Alan Cross
%A David Hogg
%A David Mason
%T Knowledge Based Interpretation of Remotely Sensed
Images
%J MAG62
%P 67-83
%K AI01 AI06
%A Tom Henderson
%A Ashok Samal
%T Multiconstraint Shape Analysis
%J MAG62
%P 84-96
%K AI01 AI06
%A Robert Schalkoff
%T Automated Reasoning About Image Motion Using
A Rule-based Deduction System
%J MAG62
%P 97-107
%K AI01 AI06
%A B. C. Vemuri
%A A. Mitiche
%A J. K. Aggarwal
%T Curvature-based Representation of Objects from Range
Data
%J MAG62
%P 107-115
%K AI01 AI06
%A J. A. D. W. Anderson
%A K. D. Baker
%A G. D. Sullivan
%T 'Model': A POPLOG Package to Support Model-Based
Vision
%J MAG62
%P 115
%K AI01 AI06 T02
%A J. G. Llaurado
%T Adapting Robotics More and More to Biology
%J MAG63
%P 3-8
%K AI07 AA10
%A S. R. Ray
%A W. D. Lee
%A C. D. Morgan
%A W. Airth-Kindree
%T Computer Sleep Stage Scoring - An Expert System
Approach
%J MAG63
%P 43-61
%K AI01 AA10 AA11
%A I. D. Craig
%T The Ariadne-1 Blackboard System
%J MAG64
%P 235-240
%K AI01
%A G. Papakonstantinou
%A J. Kontos
%T Knowledge Representation with Attribute Grammars
%J MAG64
%P 241-246
%K AI16
%A M. H. Williams
%A G. Chen
%T Translating Pascal for Execution on a Prolog-based System
%J MAG64
%P 246-252
%K AI16
%A G. W. Smith
%A J. B. H. du Boulay
%T The Generation of Cryptic Crossword Clues
%J MAG64
%P 282
%K AA17
%A Y. Ishida
%A N. Adachi
%A H. Tokumaru
%T An Analysis of Self-Diagnosis Model by Conditional Fault Set
%J International Journal of Computer and Information
Sciences
%V 14
%N 5
%D OCT 1985
%P 243-260
%A B. R. Gaines
%A M. L. G. Shaw
%T Foundations of Dialog Engineering: The Development of
Human-Computer Interaction. Part II
%J MAG65
%P 101-124
%K AI02 AA15
%A P. Shoval
%T Comparison of Decision Support Strategies in Expert Consultation
Systems
%J MAG65
%P 125-140
%K AI01 AI13
%A S. Gottwald
%A W. Pedrycz
%T On the Suitability of Fuzzy Models: An Evaluation Through
Fuzzy Integrals
%J MAG65
%P 141-152
%K O04
%A A. L. Kamouri
%A J. Kamouri
%A K. H. Smith
%T Training by Exploration: Facilitating the Transfer of
Procedural Knowledge Through Analogical Reasoning
%J MAG65
%P 171
%K AI04 AI16
%A C. Vallet
%A J. Chastang
%A J. D. Huet
%T Partial Self-Reference in Models of Natural Systems and
Spatiotemporal Reference Insufficiency of Physicians (French)
%J Cybernetica
%V 29
%N 2
%D 1986
%P 145-160
%K AA01 AI08
%A Y. F. Wang
%A J. K. Aggarwal
%T Surface Reconstruction and Representation of 3-D Scenes
%J MAG66
%P 197-208
%K AI06
%A A. C. Bovik
%A T. S. Huang
%A D. C. Munson, Jr.
%T Nonparametric Tests for Edge Detection in Noise
%J MAG66
%P 209-220
%K AI06
%A J. Katajainen
%A O. Nevalainen
%T Computing Relative Neighborhood Graphs in the Plane
%J MAG66
%P 221-228
%K AI06
%A X. Li
%A R. C. Dubes
%T Tree Classifier Design with a Permutation Statistic
%J MAG66
%P 229-236
%K AI06
%A M. Yamashita
%A T. Ibaraki
%T Distances Defined by Neighborhood Sequences
%J MAG66
%P 237
%K AI06
%A Robert Michaelsen
%A Donald Michie
%T Prudent Expert Systems Applications Can Provide a
Competitive Weapon
%J Data Management
%V 24
%N 7
%D JUL 1986
%P 30-35
%K AI01 AA06
%A J. W. Park
%T An Efficient Memory System for Image Processing
%J IEEE Transactions on Computers
%V 35
%N 7
%D JUL 1986
%P 669
%K AI06
%A Minoru Abe
%A Makoto Kaneko
%A Kazuo Tanie
%T Study on Hexapod Walking Machine using an Approximate
Straight-line Mechanism
%J Journal of Mechanical Engineering Laboratory
%V 40
%N 3
%D MAY 1986
%P 111-124
%K AI07 GA01
%A Tatsuo Arai
%A Eiji Nakano
%A Tomoaki Yano
%A Ryoichi Hashimoto
%T Hybrid Control System for a Manipulator and its Application
%J Journal of Mechanical Engineering Laboratory
%V 40
%N 3
%D MAY 1986
%P 133
%K AI07 GA01
%A H. Heiss
%T Fundamentals About the Transformation of Coordinates for Robots (German)
%J MAG67
%P 65-72
%K AI07
%X (German)
%A P. Rojek
%A J. Olomski
%T Fast Coordinate Transformation and Processing of Command Signals for
Continuous Path Robot Control
%J MAG67
%P 73-82
%K AI07
%X (German)
%A T. Tsumura
%T Recent Developments of Automated Guided Vehicles in Japan
%J MAG67
%P 91-98
%K GA01 AI07
%X (German)
%A G. W. Kohler
%T Mechanical Master-Slave Manipulators
%J MAG67
%P 99-104
%K AI07
%X (German)
%A G. Pritschow
%A K. H. Hurst
%T Design of Industrial Robots with Modular Components
%J MAG67
%P 105-110
%K AI07
%X (German)
%A T. Friedmann
%T Robots in the Automotive Industry
%J MAG67
%P 111-119
%K AI07 AA26
%X (German)
%A R. Backmann
%T Optoelectronic Sensors Sensoroptics - Some Basic
Considerations for the Selection of Optical Sensor
Components for Textile Identification
%J MAG67
%P 120
%K AI06
%X (German)
%A N. Sugamura
%A S. Furui
%T Speaker-Dependent Large Vocabulary Word Recognition Using the
SPLIT Method
%J MAG68
%P 327-334
%K AI05 GA01
%A K. Aikawa
%A K. Shikano
%T Spoken Word Recognition Using Spectrum and Power
%J MAG68
%P 343-348
%K AI05 GA01
%A M. Sugiyama
%A K. Shikano
%T Unsupervised Learning Algorithm for Vowel Templates Based on
Minimum Quantization Distortion
%J MAG68
%P 357-362
%K AI05 GA01
%A E. Berti
%T Forms of Rationality and the Future of Human Intelligence in
the New Technological Age
%J L'Elettrotecnica
%V 73
%N 4
%D APR 1986
%K AI08 O05
%X (in Italian)
%A R. Muller
%A G. Horner
%T Chemosensors with Pattern Recognition
%J MAG69
%P 95-100
%K AI06
%A H. Fritz
%A P. Wurll
%T Tactile Force-torque Sensor for Performing Control Tasks in
Robotics
%J MAG69
%P 120-125
%K AI07
%A K. C. O'Kane
%T An Expert Systems Facility for MUMPS
%J MAG70
%P 205-214
%K AA01 AI01 T03
%X [Mumps is an integrated language/database system often used
in the medical records field. I heard Dr. O'Kane speak on this
work and he believed that such a system would allow expert systems
to share clinical data with existing MIS systems in hospitals and
make their introduction more practicable. - LEFF]
%A H. Mansour
%A M. E. Molitch
%T A New Strategy for Clinical Decision Making: Censors and
Neuroendocrinological Diseases
%J MAG70
%P 215-222
%K AA01 AI01
%A Harold Stone
%A Paolo Sipala
%T Average Complexity of Depth-first Search with Backtracking and
Cutoff
%J IBM J. Res. and Dev.
%V 30
%N 3
%D MAY 1986
%P 242-258
%K AI03
%T Artificial Intelligence - Wise Guys Wire Ships
%J Marine Engineering Log
%V 91
%N 6
%D JUN 1986
%P 119-122
%K AA18 AA05
%A Robert Cartwright
%T A Practical Formal Semantic Definition and Verification
System for TYPED LISP
%D 1975
%I Garland Publishing
%C New York, New York
%K AT15 T01 AA08
%X Distinguished Dissertation Series ISBN 0-8240-4420-7
%A Cordell Green
%T The Application of Theorem Proving to Question-Answering Systems
%D 1969
%I Garland Publishing
%C New York, New York
%K AT15 AI11 AA08
%X Distinguished Dissertation Series ISBN 0-8240-4415-0 $18.00
%A James Richard Meehan
%T The Metanovel: Writing Stories by Computer
%D 1976
%I Garland Publishing
%C New York, New York
%K AT15 AI02
%X Distinguished Dissertation Series ISBN 0-8240-4409-6 $18.00
%A Robert C. Moore
%T Reasoning from Incomplete Knowledge in a Procedural Deduction
System
%D 1975
%I Garland Publishing
%C New York, New York
%K AT15 AI09 planner
%X Distinguished Dissertation Series ISBN 0-8240-4403-7 $13.00
%A Susan Speer Owicki
%T Axiomatic Proof Techniques for Parallel Programs
%D 1975
%I Garland Publishing
%C New York, New York
%K AT15 AA08 AI11
%X Distinguished Dissertation Series ISBN 0-8240-4413-4 $20.00
%A Norihisa Suzuki
%T Automatic Verification of Programs with Complex Data Structures
%D 1976
%I Garland Publishing
%C New York, New York
%K AT15 AA08 AI11
%X Distinguished Dissertation Series ISBN 0-8240-4425-8 $19.00
%A Robert Wilensky
%T Understanding Goal-Based Stories
%D 1978
%I Garland Publishing
%C New York, New York
%K AT15 AI02 PAM
%X Distinguished Dissertation Series ISBN 0-8240-4410-X $31.00
%A William A. Woods
%T Semantics For a Question-Answering System
%D 1967
%I Garland Publishing
%C New York, New York
%K AT15 AI02
%X Distinguished Dissertation Series ISBN 0-8240-4405-3 $28.00
%A Stephen J. Andriole
%T Applications in Artificial Intelligence
%I Petrocelli Books
%C Princeton, NJ
%K AT15 AI07 AI02 AI01 AA18
%X $49.95
%A T. Gergely
%T Cuttable Formulas for Logic Programming
%J BOOK50
%P 299-310
%K AI10
%A Maria Virginia Aponte
%A Jose Alberte Fernandez
%A Philippe Roussel
%T Editing First Order Proofs: Programmed Rules vs. Derived Rules
%J BOOK50
%P 92-98
%K AI01 AI10 AI11
%A Hellfried Bottger
%T Automatic Theorem Proving with Configurations
%J Elektron. Informationsverarb. Kybernet.
%V 21
%D 1985
%N 10-11
%P 523-546
%A G. Cedervall
%T Robots for Definite Routine Analysis
%J Kemisk Tidskrift
%V 98
%N 4
%D APR 1986
%P 73-75
%K AI07
%X in Swedish
%A Brian Harvey
%T Computer Science Logo Style
Volume 2: Projects, Styles and Techniques
%I MIT Press
%C Cambridge, Mass
%D 1986
%K AT15
%A Daniel N. Osherson
%A Michael Stob
%A Scott Weinstein
%T Systems that Learn: An Introduction to Learning Theory
for Cognitive and Computer Scientists
%I MIT PRESS
%C Cambridge, Mass
%D 1986
%K AT15 AI04 AI08
%A K. H. Narjes
%T Perspectives for European Cooperation
%J MAG71
%P 13
%K GA03
%A M. Carpentier
%T Community Strategy in Information Technology and Telecommunications
%J MAG71
%P 19
%K AA08
%A M. Aigarain
%T The Technological Perspective
%J MAG71
%P 23
%K AI16
%A R. W. Wilmot
%T The Market Perspective
%J MAG71
%P 27
%K AT04
%A W. Dekker
%T Issues Basic to the Development of a European Information Technology
%J MAG71
%P 33
%K GA03
%A M. Nagao
%T Cooperative R&D of Information Technologies Between the Government and
Private Sector in Japan
%J MAG71
%P 39
%K GA01
%A P. F. Smidt
%T U. S. Industrial Cooperation in R&D
%J MAG71
%P 45
%K GA03
%A J. M. Cadiou
%T ESPRIT in Action
%J MAG71
%P 51
%K GA03
%A F. F. Kuo
%T A Return Visit to ICOT
%J MAG71
%P 61
%K GA01
%T Network Support of Supercomputers: Conference Report.
%J MAG71
%P 65
%K H04
%A W. J. Rapaport
%T Philosophy, Artificial Intelligence, and the Chinese-Room Argument
%J Abacus
%V 3
%N 4
%D Summer 1986
%K AI16
%A D. I. Shapiro
%T A Model for Decision Making under Fuzzy Conditions
%J MAG72
%P 481
%K O04 AI13 AI08
%A G. Agre
%T An Implementation of the Expert System DIGS for Diagnostics
%J MAG72
%P 495
%K AA21 AI01
%A J. Hromkovic
%T On One-Way Two-Headed Deterministic Finite State Automata
%J MAG72
%P 503
%K AI16
%A E. Braunsteinerova
%T Operating Alphabet Complexity of Homogeneous Trellis Automata and Symmetric
Functions
%J MAG72
%P 527
%A I. Plander
%T Projects of the New Generation Computer Systems and Informatics
%J MAG72
%P 551
%A Julian Hewett
%A Ron Sasson
%T Expert Systems 1986, volume 1 --USA and Canada
%I Ovum Limited
%C London
%K AT15 AI01 GA02 GA04
%A Philip Klahr
%A Donald A. Waterman
%T Expert Systems, Techniques, Tools and Applications
%I Addison-Wesley
%K AT15 Rand AI01
%A Michael Brady
%A Jean Ponce
%A Alan Yuille
%A Haruo Asada
%T Describing Surfaces
%J MAG73
%P 1-28
%K AI06
%A Irving Biederman
%T Human Image Understanding: Recent Research and a Theory
%J MAG73
%P 29-73
%K AI06 AI08 AT08 AI16
%A Steven W. Zucker
%T Early Orientation Selection: Tangent Fields and the Dimensionality of Support
%J MAG73
%P 74-103
%K AI06
%A Martin D. Levine
%A Ahmed M. Nazif
%T Rule-Based Image Segmentation: A Dynamic Control Strategy Approach
%J MAG73
%P 104-126
%K AI01 AI06
%A M. J. Magee
%A J. K. Aggarwal
%T Using Multisensory Images to Derive the Structure of Three-Dimensional
Objects - A Review
%J MAG74
%P 145-157
%K AI06 AT08
%A Edgar A. Cohen
%T Generalized Sloped Facet Models Useful in Multispectral Image Analysis
%J MAG74
%P 171-190
%K AI06
%A A. Lashas
%A R. Shurna
%A A. Verikas
%A A. Dosinas
%T Optical Character Recognition Based on Analog Preprocessing and Automatic
Feature Extraction
%J MAG74
%P 191-207
%K AI06
%A John E. Wampler
%T Enhancing Real-Time Perception of Quantum Limited Images from a Doubly
Intensified SIT Camera System
%J MAG74
%P 208-220
%K AI06
%A T. Y. Kong
%A A. W. Roscoe
%T A Theory of Binary Digital Pictures
%J MAG74
%P 221-243
%K AI06
%A Ron Gershon
%T Aspects of Perception and Computation in Color Vision
%J MAG74
%P 244
%K AI06
%A I. G. Biba
%T The Adaptation of an Action-Planning System to Accommodate Problem Classes
%J Cybernetics
%V 21
%N 2
%D MAR-APR 1985
%P 242-253
%K AI09
%A Jaroslav Opatrny
%T Parallel Programming Constructs for Divide-and-Conquer, and
Branch and Bound Paradigms
%J MAG75
%P 403-416
%K AI03
%A H. I. El-Zorkany
%T Robot Programming
%J MAG75
%P 430-446
%K AI07
%A David Butler
%T Experience Using Artificial Intelligence
%J Data Processing
%V 27
%N 9
%D NOV 1985
%P 64
%K AI16
%A J. P. Keating
%A R. L. Mason
%T Some Practical Aspects of Covariance Estimation
%J MAG76
%P 295-294
%K AI06
%A M. Krivanek
%T An Application of Limited Branching in Clustering
%J MAG76
%P 299-302
%K O06
%A W. Pedrycz
%T Classification in a Fuzzy Environment
%J MAG76
%P 303-308
%K O04
%A T. Kohonen
%T Median Strings
%J MAG76
%P 309-314
%K O06
%A W. G. Kropatsch
%T A Pyramid that Grows by Powers of Two
%J MAG76
%P 315-322
%K AI06 H03
%A C. Ronse
%T A Simple Proof of Rosenfeld's Characterization of Digital Straight Line
Segments
%J MAG76
%P 323-326
%K AI06
%A I. K. Sethi
%T A General Scheme for Discontinuity Detection
%J MAG76
%P 327-334
%K AI06
%A O. Skliar
%A M. H. Loew
%T A New Method for Characterization of Shape
%J MAG76
%K AI06
%A F. C. A. Groen
%A A. C. Sanderson
%A J. F. Schlag
%T Symbol Recognition in Electrical Diagrams Using Probabilistic Graph Matching
%J MAG76
%K AI06 O06 AA04
%A Z. Pinjo
%A D. Cyganski
%A J. A. Orr
%T Determination of 3-D Object Orientation From Projections
%J MAG76
%K AI06
%A A. Bookstein
%A K. K. Ng
%T A Parametric Fuzzy Set Prediction Model
%J MAG77
%P 131-142
%K O04
%A W. L. Chen
%A R. J. Guo
%A L. S. Shang
%A T. Ji
%T Fuzzy Match and Floating Threshold Strategy for Expert System in Traditional
Chinese Medicine
%J MAG77
%P 143-152
%K O04 AI01 AA01
%A D. G. Schwartz
%T The Case for an Interval-based Representation of Linguistic Truth
%J MAG77
%P 153-166
%K O04 AI02
%A L. O. Hall
%A A. Kandel
%T Studies in Possibilistic Recognition
%J MAG77
%P 153-166
%K O04 AI06
%A M. Togai
%T A Fuzzy Inverse Relation Based on Godelian Logic and its Applications
%J MAG77
%P 211
%K O04
%A B. F. Buxton
%A H. Buxton
%A D. W. Murray
%A N. S. Williams
%T Machine Perception of Visual Motion
%J GEC Journal of Research
%V 3
%N 3
%D 1985
%P 145-161
%K AI06
%A M. S. Wilson
%T An Evaluation of Manoeuvre Detector Algorithms
%J GEC Journal of Research
%V 3
%N 3
%D 1985
%P 181-190
%K AI06 AA18
%A J. M. Schurick
%A B. H. Williges
%A J. F. Maynard
%T User Feedback Requirements with Automatic Speech Recognition
%J Ergonomics
%V 28
%N 11
%D NOV 1985
%K AI05 O01
%A R. Beg
%T Image-Processing System Serves a Variety of Buses
%J Computer Design
%V 24
%N 16
%D NOV 15, 1985
%P 99
%K AI06 AT02
%A J. Zlatuska
%T Normal Forms in the Typed Lambda-Calculus with Tuple Types
%J MAG78
%P 366-381
%K T01
%A Osamu Furukawa
%A Syohei Ishizu
%T An Expert System for Adaptive Quality Control
%J International Journal of General Systems
%P 183-200
%V 11
%N 3
%D 1985
%K AA05 AI01
%A P. Carnevali
%A L. Coletti
%A S. Patarnello
%T Image Processing by Simulated Annealing
%J IBM Journal of Research and Development
%V 29
%N 6
%P 569-579
%D NOV 1985
%K AI06 AI03
%A Michael Cross
%T Down on the Automatic Farm
%J New Scientist
%V 108
%N 1483
%D NOV 21 1985
%P 56
%K AI07 AA23
%A S. Abramsky
%A R. Sykes
%T SECD-M - A Virtual Machine for Applicative Programming
%B BOOK51
%P 81-98
%A C. L. Hankin
%A P. E. Osmon
%A M. J. Shute
%T COBWEB - A Combinator Reduction Architecture
%B BOOK51
%P 99-112
%A P. Wadler
%T How to Replace Failure by a List of Successes - A Method for Exception
Handling, Backtracking, and Pattern Matching in Lazy Functional Languages
%B BOOK51
%P 113-128
%K AI03 AI10
%A J. Hughes
%T Lazy Memo-Functions
%B BOOK51
%P 129-146
%A T. Johnsson
%T Lambda-Lifting - Transforming Programs to Recursive Equations
%B BOOK51
%P 190-203
%A S. K. Debray
%T Optimizing Almost-Tail-Recursive Prolog Programs
%B BOOK51
%P 204-219
%K T02
%A D. Patel
%A M. Schlag
%A M. Ercegovac
%T vFP - An Environment for the Multi-Level Specification, Analysis and
Synthesis of Hardware Algorithms
%B BOOK51
%P 238-255
%K AA08 AA04
%A J. Hughes
%T A Distributed Garbage Collection Algorithm
%B BOOK51
%P 256-272
%K T01 H03
%A D. R. Brownbridge
%T Cyclic Reference Counting for Combinator Machines
%B BOOK51
%P 273-288
%K T01 H03
%A D. S. Wise
%T Design for a Multiprocessing Heap with On-Board Reference Counting
%B BOOK51
%P 289-304
%K T01 H03
%A P. Dybjer
%T Program Verification in a Logical Theory of Constructions
%B BOOK51
%P 334-349
%K AA08
%A L. Augustsson
%T Compiling Pattern Matching
%B BOOK51
%P 368-381
%K O06
%A Mark Jerrum
%T Random Generation of Combinatorial Structures from a Uniform Distribution
(extended abstract)
%B BOOK52
%P 290-299
%K O06
%A D. Kapur
%A P. Narendran
%A G. Sivakumar
%T A Path Ordering for Proving Termination of Term Rewriting Systems
%B BOOK47
%P 173-187
%K AI14
%A Deepak Kapur
%A Mandayam Srivas
%T A Rewrite Rule Based Approach for Synthesizing Abstract Data Types
%B BOOK47
%P 188-207
%K AA14 AA08
%A Valentinas Kriauciukas
%T A Tree-matching Algorithm
%J Mat. Logika Primenen No. 1
%D 1981
%P 21-32
%K O06
%X Russian. English and Lithuanian Summaries
%A Aida Pliuskeviciene
%T Specification of cut-type Rules in Programming Logics with Recursion
%J Mat. Logika Primenen No. 1
%D 1981
%P 33-60
%K AI10
%A C. Choppy
%T A LISP Compiler for FP Language and its Proof via Algebraic Semantics
%B BOOK47
%P 403-415
%K T01 AA08
%A Michael J. Corinthios
%T 3D Cellular Arrays for Parallel/Cascade Image/Signal Processing
%B Spectral Techniques and Fault Detection
%P 217-298
%S Notes Rep. Comput. Sci. Appl. Math
%V 11
%I Academic Press
%C Orlando, Fla
%D 1985
%K AI06 H03
%A D. O. Avetisyan
%T The Probabilistic Approach to Construction of Intelligent Systems
%J Mat. Voprosy Kibernet. Vychisl. Tekhn. No. 13
%D 1984
%P 5-21
%V 13
%K AI16
%X (Russian with Armenian Summary)
%A Robert S. Boyer
%A J. Strother Moore
%T A Mechanical Proof of the Unsolvability of the Halting Problem
%J JACM
%V 31
%D 1984
%N 3
%P 441-458
%K AI11
%A N. A. Chuzhanova
%T Grammatical Method of Synthesis of Programs
%J Vychisl. Sistemy No. 102
%D 1984
%P 32-42
%N 102
%K AA08
%X Russian
%A S. M. Efimova
%T Pi-Graphs for Knowledge Representation
%I Akad. Nauk SSSR, Vychisl. Tsentr, Moscow
%D 1985
%K AI16
%X Russian
%A Melvin Fitting
%T A Kripke-Kleene Semantics for Logic Programs
%J MAG79
%P 295-312
%K AI10
%A D. M. Gabbay
%T N-Prolog: an Extension of PROLOG with Hypothetical Implication
II. Logical Foundations, and Negation as Failure
%J MAG79
%P 251-283
%K T02
%A Le Van Tu
%T Negation-as-Failure Rule for General Logic Programs with Equality
%J MAG79
%P 285-294
%K AI10
%A Zohar Manna
%A Richard Waldinger
%T Special Relations in Automated Deduction
%B BOOK52
%P 413-423
%K AI11
%A Jack Minker
%A Donald Perlis
%T Computing Protected Circumscription
%J MAG79
%P 235-249
%K AI11 AI15
%A J. A. Makowsky
%T Why Horn Formulas Matter in Computer Science: Initial Structures and
Generic Examples (extended abstract)
%B BOOK47
%P 188-207
%K AI10
%A Xu Hua Liu
%T The Input Semicancellation Resolution Principle on Horn Sets
%J Kexue Tongbao
%V 30
%D 1985
%N 16
%P 1201-1202
%K AI10 AI11
%X Chinese
------------------------------
End of AIList Digest
********************
∂17-Oct-86 0308 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #221
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 17 Oct 86 03:08:28 PDT
Date: Thu 16 Oct 1986 22:14-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #221
To: AIList@SRI-STRIPE
AIList Digest Friday, 17 Oct 1986 Volume 4 : Issue 221
Today's Topics:
Bibliography - Leff Bibliography Continuation #2
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Bibliography (continued)
%A R. Neches
%A P. Langley
%A D. Klahr
%T Learning, Development and Production Systems
%I Department of Information and Computer Science, University of California,
Irvine
%D JAN 1986
%R 86-01
%K AI01 AI04
%X 46 pages ($4.00)
%A R. P. Hall
%T Understanding Analogical Reasoning: Computational Approaches
%D MAY 1986
%I Department of Information and Computer Science, University of California,
Irvine
%R 86-11
%K AI04 AT09
%X 60 pages ($5.00)
%A P. Langley
%A J. G. Carbonell
%T Language Acquisition and Machine Learning
%D JUN 1986
%I Department of Information and Computer Science, University of California,
Irvine
%R 86-12
%K AI02 AI04
%X 41 pages $3.00
%A J. C. Schlimmer
%T A Note on Correlational Measures
%D MAY 1986
%I Department of Information and Computer Science, University of California,
Irvine
%R 86-13
%X determining the degree to which two events are interrelated
14 pages $2.00
%A B. Nordhausen
%T Conceptual Clustering Using Relational Information
%D JUN 1986
%I Department of Information and Computer Science, University of California,
Irvine
%R 86-15
%K AI04 O06
%X 15 pages $2.00
%A Peter J. Denning
%T Expert Systems
%I Research Institute for Advanced Computer Science, NASA Ames Research Center
%R 85.17
%K AI01
%A Peter J. Denning
%T Will Machines Ever Think?
%I Research Institute for Advanced Computer Science, NASA Ames Research Center
%R 86.12
%K AI16
%A Ajay Rastogi
%A Sargur N. Srihari
%T Recognizing Textual Blocks in Document Images Using the Hough Transform
%I Department of Computer Science State University of New York at Buffalo
%R 86-01
%K AI06 AA14
%X 1.00 /1.50
%A Pudcode Swaminathan
%A Sargur N. Srihari
%T Document Image Binarization: Second Derivative Versus Adaptive Thresholding
%I Department of Computer Science State University of New York at Buffalo
%R 86-02
%K AI06
%X $1.00/ $1.50
%A William J. Rapaport
%T Philosophy of Artificial Intelligence: A Course Outline
%I Department of Computer Science State University of New York at Buffalo
%R 86-03
%K AI16 AT18
%X $1.00/$1.50
%A Shoshana L. Hardt
%A William J. Rapaport
%T Recent and Current AI Research in the Department of Computer Science,
SUNY-Buffalo
%I Department of Computer Science State University of New York at Buffalo
%R 86-05
%K AT21
%X $1.00/$1.50
%A Kemal Ebcioglu
%T An Expert System for Harmonization of Chorales in the Style of J. S. Bach
%I Department of Computer Science State University of New York at Buffalo
%R 86-09
%K AA25 AI01
%X $3.00/$4.00 289 pages
%A Stuart C. Shapiro
%T Symmetric Relations, Intensional Individuals, and Variable Binding
%I Department of Computer Science State University of New York at Buffalo
%R 86-10
%K AI16 AI02 AI01
%X dealing with relations such as "are adjacent" and "are related"
%A Sargur N. Srihari
%A Jonathan J. Hull
%A Paul W. Palumbo
%A Ching-Huei Wang
%T Automatic Address Block Location: Analysis of Images and Statistical Data
%I Department of Computer Science State University of New York at Buffalo
%R 86-11
%K AI06
%X finding the destination address on a letter, magazine or parcel for
the post office
63 pages $1.00/$1.50
%A S. L. Hardt
%A D. H. Macfadden
%A M. Johnson
%A T. Thomas
%A S. Wroblewski
%T The Dune Shell Manual: Version 1
%I Department of Computer Science State University of New York at Buffalo
%R 86-12
%K AI01 AA11 AA18 T03 common sense
%X DUNE is Diagnostic Understanding of Natural Events, a shell that has
been applied to threat assessment, personality assessment and common sense
reasoning
$1.00/$1.50
%A Janyce M. Wiebe
%A William J. Rapaport
%T Representing de re and de dicto belief reports in discourse and narrative
%I Department of Computer Science State University of New York at Buffalo
%R 86-14
%K AI02 AI16
%X $1.00/$1.50
%A William J. Rapaport
%A Stuart C. Shapiro
%A Janyce M. Wiebe
%T Quasi-Indicators, Knowledge Reports, and Discourse
%I Department of Computer Science State University of New York at Buffalo
%R 86-15
%K AI02 AI16 de re de dicto
%X $1.00/$1.50
%A David E. Rumelhart
%A James L. McClelland
%T Parallel Distributed Processing:
Explorations in the Microstructure of Cognition
%I Library of Computer Science
%K AT15 AI04 AI03 AI08
%X Two volume set for $35.95. Volume I: Foundations
Volume II: Psychological Models
%A Christian Lengauer
%T A View of Automated Proof Checking and Proving
%R TR-86-16
%D JUN 1986
%I University of Texas at Austin, Department of Computer Sciences
%K AI11
%X $1.50
%A Manuel V. Hermenegildo
%T An Abstract Machine Based Execution Model for Computer Architecture
Design and Efficient Implementation of Logic Programs in Parallel
%R TR-86-20
%D JUN 1986
%I University of Texas at Austin, Department of Computer Sciences
%K AI10 H03
%X $5.00
%A Nicholas V. Findler
%A Timothy W. Bickmore
%A Robert F. Cromp
%T A General-Purpose Man-Machine Environment to Aid in Decision Making
and Planning with Special Reference to Air Traffic Control
%I Arizona State University, Computer Science Department
%R TR-84-001
%K AI13 AI09 O01
%A Nicholas V. Findler
%A Timothy W. Bickmore
%A Robert F. Cromp
%T A General-Purpose Man-machine Environment with Special Reference to
Air Traffic Control
%I Arizona State University, Computer Science Department
%R TR-84-002
%K AI13 AI09 O01
%A Nicholas V. Findler
%A Ron Lo
%T An Examination of Distributed Planning in the World of Air Traffic
Control
%I Arizona State University, Computer Science Department
%R TR-84-004
%K AI13 AI09 O01
%A Ben Huey
%T Using Register Transfer Languages for Knowledge-Based Automatic
Test Generation
%I Arizona State University, Computer Science Department
%R TR-84-011
%K AA04
%A F. Golshani
%T Tools for the Construction of Expert Database Systems
%I Arizona State University, Computer Science Department
%R TR-84-013
%K AA09 AI01
%A Ben M. Huey
%T The Heuristic State Search Algorithm
%I Arizona State University, Computer Science Department
%R TR-84-018
%K AI03
%A F. Golshani
%A A. Faustin
%T The Eductive (sic) Knowledge Engine - Preliminary Investigations
%I Arizona State University, Computer Science Department
%R TR-84-023
%K AI16
%A A. L. Pai
%A J. W. Pan
%T A Computer Graphics Kinematic Simulation System for Robot
Manipulators
%I Arizona State University, Computer Science Department
%R TR-85-003
%K AI07
%A Nicholas V. Findler
%T Air Traffic Control, A Challenge for Artificial Intelligence
%I Arizona State University, Computer Science Department
%R TR-85-006
%K AI16
%A Richard L. Madarasz
%A Loren C. Heiny
%A Norm E. Berg
%T The Design of an Autonomous Vehicle for the Handicapped
%I Arizona State University, Computer Science Department
%R TR-85-010
%K AI07 AA19
%A N. V. Findler
%A P. Bhaskaran
%A Ron Lo
%T Two Theoretical Issues Concerning Expert Systems
%I Arizona State University, Computer Science Department
%R TR-85-012
%K AI01
%A Richard Madarasz
%A Kathleen M. Mutch
%A Loren C. Heiny
%T A Low-Cost Binocular Imaging System for Research and Education
%I Arizona State University, Computer Science Department
%R TR-85-013
%K AI06 AT18
%A Robert F. Cromp
%T The Task, Design and Approach of the Advice Taker/Inquirer
System
%I Arizona State University, Computer Science Department
%R TR-85-014
%K AI16
%A Kathleen M. Mutch
%T The Perception of Translation in Depth Using Stereoscopic Motion
%I Arizona State University, Computer Science Department
%R TR-85-015
%K AI06
%A Ron Lo
%A Cher Lo
%A N. V. Findler
%T A Pattern Search Technique for the Optimization Module of a
Morph-Fitting Package
%I Arizona State University, Computer Science Department
%R TR-86-001
%K AI03
%A N. V. Findler
%A Laurie Igrif
%T Analogical Reasoning by Intelligent Robots
%I Arizona State University, Computer Science Department
%R TR-86-003
%K AI07 AI16
%A Nicholas V. Findler
%T The Past, Present and Future of Artificial Intelligence -
A Personal View
%I Arizona State University, Computer Science Department
%R TR-86-004
%K AT14
%A Stephen Fickas
%T Automating the Transformational Development of Software
%R CIS-TR-85-01
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AA08
%A John S. Conery
%A Dennis F. Kibler
%T AND Parallelism and Nondeterminism in Logic Programs
%R CIS-TR-85-02
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K H03 AI10
%A Stephen Fickas
%A David Novick
%A Rob Reesor
%T Building Control Strategies in a Rule-Based System
%R CIS-TR-85-04
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AI01 metaknowledge
%A Stephen Fickas
%A David Novick
%T Control Knowledge in Expert Systems: Relaxing Restrictive Assumptions
%R CIS-TR-85-05
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AI01 metaknowledge
%A Stephen Fickas
%T Design Issues in a Rule-Based System
%R CIS-TR-85-06
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AI01 metaknowledge
%A Kent A. Stevens
%A Allen Brookes
%T The Concave Cusp as a Determiner of Figure Ground
%R CIS-TR-85-08
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AI06
%X (of interest to researchers on texture perception)
%A Stephen Fickas
%A David Novick
%A Rob Reesor
%T An Environment for Building Rule-Based Systems: An Overview
%R CIS-TR-85-10
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AI01 T03
%A Stephen Fickas
%T A Knowledge-Based Approach to Specification Acquisition and Construction
%R CIS-TR-85-13
%I Computer and Information Science Department, University of Oregon
%C Eugene, OR
%D 1985
%K AA08
%A Kazem Taghva
%T Constructive Fully Abstract Models of Typed Lambda-Calculi
%R CSR 159
%I Computer Science Department, New Mexico Tech
%C Socorro, NM
%D DEC 1983
%K AA08
%A Allan M. Stavely
%T Inference From Models of Software Systems
%R CSR 162
%I Computer Science Department, New Mexico Tech
%C Socorro, NM
%D MAY 1984
%K AA08
%A Raymond D. Gumb
%A Sarah Bottomley
%A Alex Trujillo
%T Sandia National Laboratories SURP Grant 95-2931 Final Report:
Filming a Terrain Under Uncertainty Using Temporal and Probabilistic Reasoning
%R CSR 172
%I Computer Science Department, New Mexico Tech
%C Socorro, NM
%D AUG 1986
%K AI06 O04
%A Andrew W. Appel
%T Garbage Collection Can Be Faster than Stack Allocation
%R TR-045-86
%D JUN 1986
%I Princeton University, Department of Computer Science
%K H02 T01
%A Richard J. Lipton
%A Daniel Lopresti
%A J. Douglas Welsh
%T The Total DNA Homology Experiment
%R TR-020-86
%I Princeton University, Department of Computer Science
%K AA10 O06
%X plan to compare all known DNA sequences with each other to find homologies.
They will be using a systolic array for DNA sequence matching and hope
to complete the project within one year's time.
%A Bernard Nadel
%T Representation-Selection for Constraint Satisfaction Problems: A Case
Study Using n-Queens
%D MAR 1986
%R CRL-TR-5-86
%I University of Michigan, Computer Research Laboratory
%K AI03 AA17
%A Bernard Nadel
%T Theory-Based Search-Order Selection for Constraint Satisfaction Problems
%D APR 1986
%R CRL-TR-6-86
%I University of Michigan, Computer Research Laboratory
%K AI03
%A J. T. Park
%A T. J. Teorey
%T Heuristics for Data Allocation in Local Area
%D MAY 1986
%R CRL-TR-7-86
%I University of Michigan, Computer Research Laboratory
%K AA09
%X describes heuristics for allocating data where update is done by
broadcast
%A K. Shin
%A P. Ramanathan
%T Diagnosis of Malicious Processors in a Distributed Computing System
%D MAY 1986
%R CRL-TR-8-86
%I University of Michigan, Computer Research Laboratory
%K AA21
%A Harry H. Porter, III
%T Earley Deduction
%R CS/E 86-002
%I Oregon Graduate Center
%D 1986
%K T02 Datalog
%A Clifford Walinsky
%T Constructive Negation in Horn-Clause Programs
%R CS/E 86-003
%I Oregon Graduate Center
%D 1986
%K AI10
%A Dennis M. Volpano
%T Translating an FP Dialect to L - A Proof of Correctness
%R CS/E 85-001
%I Oregon Graduate Center
%D 1985
%K AA08
%A Richard B. Kieburtz
%T The G-Machine: A Fast, Graph-Reduction Evaluator
%R CS/E 85-002
%I Oregon Graduate Center
%D 1985
%A Richard B. Kieburtz
%T Incremental Collection of Dynamic, List-Structured Memories
%I Oregon Graduate Center
%D 1985
%R CS/E 85-008
%K T01 H03
%X incremental garbage collection
%A Ashoke Deb
%T An Efficient Garbage Collector for Graph Machines
%I Oregon Graduate Center
%D 1984
%R CS/E 84-003
%K H03
%A John S. Givler
%T Pattern Recognition in FP Programs
%I Oregon Graduate Center
%D 1983
%R CS/E 83-003
%K O06 AI06
------------------------------
End of AIList Digest
********************
∂17-Oct-86 0526 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #222
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 17 Oct 86 05:26:14 PDT
Date: Thu 16 Oct 1986 22:18-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #222
To: AIList@SRI-STRIPE
AIList Digest Friday, 17 Oct 1986 Volume 4 : Issue 222
Today's Topics:
Bibliography - Leff Bibliography Continuation #3
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Bibliography (continued)
%A L. S. Fainzilberg
%A G. A. Shklyar
%T Estimation of Attribute Utility in Statistical Recognition of Two Classes
%J Soviet J. Automat. Inform. Sci.
%V 189
%N 5
%P 81-86
%K O04 O06
%A Peter Naur
%T Thinking and Turing's Test
%J BIT
%V 26
%D 1986
%N 2
%P 175-187
%K AI16
%A Persi Diaconis
%A Mehrdad Shahshahani
%T Products of Random Matrices and Computer Image Generation
%B Random Matrices and Their Applications
%P 173-182
%S Contemp. Math.
%V 50
%I Amer. Math. Soc.
%C Providence, R. I.
%D 1986
%K AI06
%A V. A. Nepomnyaschii
%T Problem-oriented Program Verification
%J Programmirovanie 1986
%N 1
%P 3-13
%K AA08
%A Yong Qiang Sun
%A Bao Xing Tang
%T Strong Verification of Nested-loop Programs
%J J. Shanghai Jiaotong Univ.
%D 1984
%N 6
%P 1-10
%K AA08
%X Chinese with English summary
%A A. Browne
%T Vision and the Robot
%J Philips Journal of Research
%V 41
%N 3
%D 1986
%P 232-246
%K AI06 AI07
%A B. J. Falkowski
%A L. Schmitz
%T A Note on the Queen's Problem
%J Information Processing Letters
%V 23
%N 1
%D JUL 20, 1986
%K AI03 AA17
%A Huia-Chuan Chen
%A J. H. Fang
%T A Heuristic Search Method for Optimal Zonation of Well Logs
%J Mathematical Geology
%V 18
%N 5
%D 1986
%P 489-500
%K AA03 AI06 AI03
%X improved the Hawkins and Merriam algorithm by 7- to 50-fold
%A A. Kong
%A G. O. Barnett
%A F. Mosteller
%A C. Youtz
%T How Medical Professionals Evaluate Expressions of Probability
%J New England Journal of Medicine
%V 315
%D SEP 18, 1986
%N 12
%P 740-744
%K AA01 AI02 AI01
%X explains what doctors mean, in probabilistic terms, by such
phrases as "likely"
%A G. Vontrzebiatowski
%A B. Bank
%T On the Convergence of the Fuzzy Clustering Algorithm Fuzzy ISODATA
%J Zeitschrift fur Angewandte Mathematik und Mechanik
%P 201-208
%V 66
%N 6
%D 1986
%K O04 O06
%A Makoto Kaneko
%A Minoru Abe
%A Eiichi Horiuchi
%A Kazuo Tanie
%T Study on Hexapod Walking Machine Using an Approximate Straight Line Mechanism,
Third Report: A Control Method for Proceeding over Soft Ground
%J Journal of Mechanical Engineering Laboratory
%V 40
%N 4
%D JUL 1986
%K AI07
%A A. G. Erdman
%A T. Thompson
%A D. R. Riley
%T Type Selection of Robot and Gripper Kinematic Topology Using Expert Systems
%J International Journal of Robot Research
%V 5
%N 2
%D 1986
%P 183
%K AA05 AI01 AI07
%X [There are many other articles in this issue on robot kinematics (this was
a special issue). I do not include those in this bibliography.]
%A James J. Clark
%A Peter D. Lawrence
%T A Theoretical Basis for Diffrequency Stereo
%J MAG80
%P 1-19
%K AI06
%A Brian G. Schunck
%T The Image Flow Constraint Equation
%J MAG80
%P 20-46
%K AI06
%A Teresa M. Silberberg
%A David A. Harwood
%A Larry S. Davis
%T Object Recognition Using Oriented Model Points
%J MAG80
%P 47-71
%K AI06
%A Haluk Derin
%A William S. Cole
%T Segmentation of Textured Images Using Gibbs Random Fields
%J MAG80
%P 72-98
%K AI06
%A Theo Pavlidis
%T A Vectorizer and Feature Extractor for Document Recognition
%J MAG80
%P 111
%K AI06
%A Matthew Hennessy
%T Proving Systolic Systems Correct
%J ACM Transactions on Programming Languages and Systems
%V 8
%N 3
%D JUL 1986
%P 344-387
%K AA08 AA04 AI11
%A Krzysztof R. Apt
%T Correctness Proofs of Distributed Termination Algorithms
%J ACM Transactions on Programming Languages and Systems
%V 8
%N 3
%D JUL 1986
%P 388-407
%K AA08 AI11
%A A. Pathak
%A S. K. Pal
%T A Generalized Learning Algorithm Based on Guard Zones
%J MAG81
%P 63-70
%K AI04
%A S. Larsen
%A L. N. Kanal
%T Analysis of k-nearest Neighborhood Branch and Bound Rules
%J MAG81
%P 71-78
%K AI03
%A F. Pasian
%A C. Vuerli
%T Core-line Tracing for Fuzzy Image Subsets
%J MAG81
%P 93
%K O04 AI06
%A O. R. Polonskaya
%T Logic-Semantic Connectors of the English Language as Formal Indicators of
Text Coherence
%J Nauchno-Tekhnicheskaya Informatsiya Seriya II - Informatsionnye Protsessy
i Sistemy
%N 6
%D 1986
%P 19-22
%K AI02
%A J. Victor
%T Bell-Labs Models Parallel Processor on Neural Networks
%J Mini-Micro Systems
%V 19
%N 10
%D AUG 1986
%P 43+
%K AI12 H03
%A C. Dede
%T A Review and Synthesis of Recent Research in Intelligent Computer-Assisted
Instruction
%J MAG82
%P 329-354
%K AA07 AT08 AT21
%A J. S. Greenstein
%A L. Y. Arnaut
%A M. E. Revesman
%T An Empirical Comparison of Model-Based and Explicit Communication
for Dynamic Human-Computer Task Allocation
%J MAG82
%P 355-364
%K AI08 O01
%A C. G. Leedham
%A A. C. Downton
%T On-Line Recognition of Pitman Handwritten Shorthand
%J MAG82
%P 375-394
%K AI06
%A P. N. Crowley
%T The Use of Q-Analysis and Q-Factor Weightings to Derive Clinical Psychiatric
Syndromes
%J MAG82
%P 395-408
%K AA11 O04
%A Concettina Guerra
%T A VLSI Algorithm for the Optimal Detection of a Curve
%J MAG83
%P 206-214
%K AI06
%A Bruce K. Hillyer
%A David Elliot Shaw
%T Execution of OPS5 Production Systems on a Massively Parallel Machine
%J MAG83
%P 236-268
%K H03 AI01
%A Salvatore J. Stolfo
%A Daniel P. Miranker
%T The DADO Production System Machine
%J MAG83
%P 269
%K AI01 H03
%A Gerrit Broekstra
%T Organizational Humanity and Architecture: Duality and Complementarity of
PAPA-Logic and MAMA-Logic in Managerial Conceptualizations of Change
%J MAG84
%P 13-42
%K AI08 AA11 AA06
%A Stuart A. Umpleby
%T Self-Authorization: A Characteristic of Some Elements in Certain
Self-Organizing Systems
%J MAG84
%P 79-88
%K H03 AI12
%A R. M. Lougheed
%A C. M. Swonger
%T An Analysis of Computer Architectural Factors Contributing to Image Processor
Capacity
%B BOOK53
%P 3-13
%K AI06
%A O. R. Hinton
%A H. G. Kim
%T A Bit-Sequential VLSI Pixel-Kernel Processor for Image Processing
%B BOOK53
%P 14-20
%K AI06 H03
%A D. J. Skellern
%T A Very Large Scale Integration (VLSI) System for Image Reconstruction
from Projections
%B BOOK53
%P 21-26
%K AI06
%A P. W. Besslich
%T Parallel Architecture for Line-Scanned Images
%B BOOK53
%P 27-35
%K AI06 H03
%A R. P. W. Duin
%A H. Haringa
%A R. Zeelen
%T A Hardware Design for Fast 2-D Percentile Filtering
%B BOOK53
%P 36-40
%K AI06
%A R. Boekamp
%A F. C. A. Groen
%A F. A. Gerritsen
%A R. J. Vanmunster
%T Design and Implementation of a Cellular Logic VME Processor Module
%B BOOK53
%P 41-45
%K AI06
%A J. L. Basille
%A S. Castan
%T Multilevel Architectures for Image Processing
%B BOOK53
%P 46-53
%K AI06 H03
%A J. Rommelaere
%A L. Vaneycken
%A P. Wambacq
%A A. Oosterlinck
%T A Microprogrammable Processor Architecture for Image Processing
%B BOOK53
%P 59-67
%K AI06
%A M. Suk
%A S. S. Pyo
%T A Geometry Processor for Image Processing and Pattern Recognition
%B BOOK53
%P 68-73
%K AI06
%A P. W. Pachowicz
%T Image Processing by a Local-SIMD Co-Processor
%B BOOK53
%P 82-87
%K AI06 AI03
%A V. Cantoni
%A L. Carrioli
%A O. Catalano
%A L. Cinque
%A V. Digesu
%A M. Ferretti
%A G. Gerardi
%A S. Levialdi
%A R. Lombardi
%A A. Machi
%A R. Stefanelli
%T The Papia Image Analysis System
%B BOOK53
%P 88-97
%K AI06
%A J. Ronsin
%A D. Barba
%A S. Raboisson
%T Comparison Between Cooccurrence Matrices, Local Histograms and Curvilinear
Integration for Texture Characterization
%B BOOK53
%P 98-104
%K AI06
%A N. Lins
%T Refinement of Spectral Methods for Use in Texture Analysis
%B BOOK53
%P 105-111
%K AI06
%A M. Slimani
%A C. Roux
%A A. Hillioun
%T Image Segmentation by Cluster Analysis of High Resolution Textured SPOT
Images
%B BOOK53
%P 112-119
%K AI06
%A A. Beckers
%A L. Dorst
%A L. T. Young
%T The Choice of Filter Parameters for Non-Linear Grey-Value Image Processing
%B BOOK53
%P 120-128
%K AI06
%A J. Illingworth
%A J. Kittler
%T A Parallel Threshold Selection Algorithm
%B BOOK53
%P 129-134
%K AI06 H03
%A R. Samy
%T An Adaptive Image Sequence Filtering Scheme Based on Motion Detection
%B BOOK53
%P 135-144
%K AI06
%A B. K. Ghaffary
%T A Review of Image Matching Techniques
%B BOOK53
%P 164-172
%K AI06
%A K. Martinez
%A D. E. Pearson
%T PETAL: A Parallel Processor for Real-Time Primitive Extraction
%B BOOK53
%P 173-175
%K AI06 O03
%A T. J. Dennis
%A L. J. Clark
%T Real Time Detection of Spot-Type Defects
%B BOOK53
%P 178-183
%K AI06 O03
%A E. Egeli
%A F. Klein
%A G. Maderlechner
%T Model-Based Instantiation of Symbols from Structurally Related Image
Primitives
%B BOOK53
%P 184-189
%K AI06
%A R. L. Shoemaker
%A P. H. Bartels
%A H. Bartels
%A W. G. Griswold
%A D. Hillman
%A R. Maenner
%T Image-Data-Driven Dynamically-Reconfigurable Multiprocessor System in
Automated Histopathology
%B BOOK53
%P 190-198
%K AI06 AA10
%A T. Lorch
%A J. Bille
%A M. Frieben
%A G. Stephan
%T An Automated Biological Dosimetry System
%B BOOK53
%P 199-206
%K AI06 AA10
%A C. Katsinis
%A A. D. Poularikas
%T Pattern Recognition of Zooplankton Images Using a Circular Sampling
Technique
%B BOOK53
%P 207-211
%K AI06 AA10
%A D. Lecomte
%A J. Beullier
%A D. Grangeon
%T Image Processing Adapted to Radiographs
%B BOOK53
%P 212-218
%K AI06 AA01
%A Zohar Manna
%A Richard Waldinger
%T The Logical Basis for Computer Programming. Vol I. Deductive
Reasoning
%I Addison-Wesley Publishing Co.
%C Reading, MASS
%D 1985
%K AA08 AI11 AT15
%A A. S. Morozov
%T Logic with Incomplete Information as an Information System in the Sense
of Scott
%J Vychisl. Sistemy No. 107
%D 1985
%P 71-79
%K AI16
%A B. C. Moszkowski
%T Executing Temporal Logic Programs
%I Cambridge University Press
%C Cambridge-New York
%D 1986
%K AT15 AI10
%X ISBN 0-521-31099-7
%A A. P. Sistla
%A E. M. Clarke
%A N. Francez
%A A. R. Meyer
%T Can Message Buffers be Axiomatized in Linear Temporal Logic
%J Inform. and Control
%V 63
%N 1-2
%P 88-112
%K AA08 AI11
%A Wolfgang Wechler
%T R-fuzzy Computation
%J J. Math. Anal. Appl.
%V 115
%D 1986
%N 1
%P 225-232
%K O04
%A Luis Aguila Feros
%A Jose Ruiz Shulcloper
%T A Bm-Algorithm for Processing K-valent Data in Recognition Problems
%J Cienc. Mat. (Havana)
%V 5
%D 1984
%N 3
%P 89-101
%K AI16
%X Spanish. English Summary
%A Vincent Digricoli
%A Malcolm Harrison
%T Equality-based Binary Resolution
%J JACM
%V 33
%D 1986
%N 2
%P 253-289
%K AI11
%A M. H. van Emden
%T Quantitative Deduction and its Fix-Point Theory
%J MAG85
%P 37-53
%K AI10
%A Hong Fan
%A Jorge L. C. Sanz
%T Comments on "Direct Fourier Reconstruction in Computer Tomography"
[IEEE Trans. Acoust. Speech Signal Process. 29 (1981), no. 2, 237-245]
by H. Stark, J. W. Woods, I. Paul and R. Hingorani
%J IEEE Trans. Acoust. Speech Signal Process.
%V 33
%D 1985
%N 2
%P 446-449
%K AI06 AA01 AT13
%A D. M. Gabbay
%A M. J. Sergot
%T Negation as Inconsistency
%J MAG85
%P 1-35
%K AI10
%A Han Rong Lu
%T Some Problems in Logic Program Design
%J Comput. Sci
%D 1986
%N 1
%P 38-39
%K AI10 O02
%X (Chinese)
%A Anca L. Ralescu
%T A Note on Rule Representation in Expert Systems
%J Inform. Sci
%V 38
%D 1986
%N 2
%P 193-203
%K AI01
%A Yu A. Zuev
%T Probabilistic Model of a Committee of Classifiers
%J Zh. Vychisl. Mat. i Mat. Fiz
%V 26
%D 1986
%N 2
%P 276-292
%K H03 O04
%X (Russian)
%A Irena Pevac
%T Heuristic for Avoiding Skolemization in Theorem Proving
%J Publ. Inst. Math. (Beograd) (N. S.)
%V 38
%N 52
%D 1985
%P 207-213
%K AI11
%A A. A. Voronkov
%T A Method of Search for a Proof
%J Vychisl. Sistemy No. 107
%D 1985
%P 109-123
%K AI03 AI11
%X (Russian)
%A Kiem Hoang
%T Geometric Transforms of Digital Images
%J Rostock. Math. Kolloq. No. 28
%D 1985
%P 87-98
%K AI06
%A Jacques Loeckx
%A Kurt Sieber
%A Ryan D. Stansifer
%T The Foundations of Program Verification
%S Wiley-Teubner Series in Computer Science
%I John Wiley and Sons
%C Chichester
%D 1984
%K AT15 AA08 AI11
%X ISBN 0-471-90323-X
%A Andreas Blass
%A Yuri Gurevich
%A Dexter Kozen
%T A Zero-One Law for Logic with a Fixed-Point Operator
%J Information and Control
%V 67
%D 1985
%N 1-3
%P 70-90
%K AI11
%A J. Sakalauskaite
%T Axiom Systems for Proving the Equivalence of Compositions of Simple
Assignments
%J Mat. Logika Primenen. No. 1
%D 1981
%P 109-132
%K AA08 AI11
%X Russian. English and Lithuanian summaries
%A A. Prasad Sistla
%A Moshe Y. Vardi
%A Pierre Wolper
%T The Complementation Problem for Büchi Automata with Applications to
Temporal Logic
%B BOOK52
%P 465-475
%K AI11
%A Colin Stirling
%T A Complete Modal Proof System for a Subset of SCCS
%B BOOK47
%P 235-266
%K AA08 AI11
%A Colin Stirling
%T A Complete Compositional Modal Proof System for a subset of CCS
%B BOOK52
%P 475-486
%K AA08 AI11
%A Rimgaudas Zaldokas
%T Construction of Term Rewriting Rules for Abstract Data Types
%J Mat. Logika Primenen No. 1
%D 1981
%P 9-19
%K AI14
%A Lev Goldfarb
%T A New Approach to Pattern Recognition
%B Progress in Pattern Recognition
%P 241-402
%S Machine Intell. Pattern Recognition
%V 1
%I North-Holland
%C Amsterdam, New York
%D 1985
%A Jieh Hsiang
%A Mandayam Srivas
%T PROLOG-based Inductive Theorem Proving
%B BOOK40
%P 129-149
%K AI10 AI11
%A Neil D. Jones
%A Alan Mycroft
%T Stepwise Development of Operational and Denotational Semantics
of Prolog
%B BOOK50
%P 281-288
%K AI10 AI11 O02
%A Kenneth M. Kahn
%T A Primitive for the Control of Logic Programs
%B BOOK50
%P 242-251
%K AI10
%A Prateek Mishra
%T Towards a Theory of Types in Prolog
%B BOOK50
%P 289-298
%K AI10 O02
%A David A. Plaisted
%T The Occur-Check Problem in Prolog
%B BOOK50
%P 272-280
%K AI10
%A Zbigniew Ras
%A Maria Zemankova-Leech
%T Rough Sets Based Learning Systems
%B Computation Theory (Zaborow, 1984)
%S Lecture Notes in Computer Science
%V 275
%I Springer-Verlag
%C Berlin-Heidelberg-New York
%D 1985
%P 263-275
%K AI04
%A Mark E. Stickel
%T A PROLOG Technology Theorem Prover
%B BOOK50
%P 211-217
%K T02 AI11
%A Hisao Tamaki
%T Semantics of a Logic Programming Language with a Reducibility
Predicate
%B BOOK50
%P 259-264
%K AI10
%A Raymond Turner
%T Logics for Artificial Intelligence
%I Ellis Horwood
%C Chichester
%D 1985
%A Michael J. Wise
%A David M. W. Powers
%T Indexing Prolog Clauses Via Superimposed Code Words and Field Encoded
Words
%B BOOK50
%P 203-210
%K T02
%A Kathy Yelick
%T Combining Unification Algorithms for Confined Regular Equational
Theories
%B BOOK54
%P 365-380
%K AI14
%A Pierre Rety
%A Claude Kirchner
%A Helene Kirchner
%A Pierre Lescanne
%T NARROWER: A New Algorithm for Unification and its Application
to Logic Programming
%B BOOK54
%P 141-157
%K AI10 AI11
%A Harvey Abramson
%T Definite Clause Translation Grammars
%B BOOK50
%P 233-240
%K AI11
%A Marta Cialdea
%T Some Remarks on the Possibility of Extending Resolution Proof Procedures
to Intuitionistic Logic
%J Inform. Process. Lett
%V 22
%D 1986
%N 2
%P 87-90
%K AI10 AI11
%A Stavros S. Cosmadakis
%A Paris C. Kanellakis
%T Two Applications of Equational Theories to Database Theory
%B BOOK54
%P 107-123
%K AI10 AA09 AI11
%A Amitava Bagchi
%A A. Mahanti
%T Three Approaches to Heuristic Search in Networks
%J JACM
%V 32
%D 1985
%N 1
%P 1-27
%K AI03
%A G. Gottlob
%A A. Leitsch
%T On the Efficiency of Subsumption Algorithms
%J JACM
%V 32
%D 1985
%N 2
%P 280-295
%K AI11
%A O. K. Khanmamedov
%T Approximating Perceptron and Convergence of a Process of Training a
Classifier
%J Akad. Nauk. Azerbaidzhan. SSR Dokl.
%V 41
%D 1985
%N 8
%P 8-11
%K AI04 AI06
%X Russian with English and Azerbaijani Summaries
%A V. S. Neiman
%T Unattainable Subgoals in Searching for an Inference from a Goal
%B Complexity Problems of Mathematical Logic
%P 68-72
%I Kalinin. Gos. Univ.
%C Kalinin
%D 1985
%K AI16
%X Russian
%A Bernard Silver
%T Meta-level Inference: Representing and Learning Control Information
in Artificial Intelligence
%S Studies in Computer Science and Artificial Intelligence
%V 1
%I North-Holland Publishing Co.
%C Amsterdam-New York
%D 1986
%K AT15 AI04 AI03 AI16
%X ISBN 0-444-87900-5
%A V. I. Vasilev
%A F. P. Ovsyannikova
%T Optimization of the Space in Teaching Pattern Recognition
%J Soviet J. Automat. Inform. Sci
%D 1985
%N 3
%P 6-14
%V 18
%K AI04 AI06
%A Dennis de Champeaux
%T About the Paterson-Wegman Linear Unification Algorithm
%J J. Comput. System Sci
%V 32
%D 1986
%N 1
%P 79-90
%K AI11
%A Da Fa Li
%T Semantic Resolution and Paramodulation for Horn Sets
%J J. Huazhong Univ. Sci. Tech
%V 12
%N 2
%P 13-16
%K AI10 AI11
%X Chinese with English Summary
%A David Harel
%T Dynamic Logic
%B Handbook of Philosophical Logic, Vol II
%P 497-604
%S Synthese Library
%V 165
%I Reidel
%C Boston
%D 1984
%A A. Hoppe
%T Temporal Logic Specification of Synchronization Primitives
%B BOOK55
%P 455-466
%K AA08
%A Erica Jen
%T Invariant Strings and Pattern-Recognizing Properties of One-Dimensional
Cellular Automata
%J J. Statist. Phys
%V 43
%D 1986
%N 1-2
%P 219-242
%K AI12
%A H. R. Nielson
%T A Hoare-Like Proof System for Total Correctness of Nested Recursive
Procedures
%B BOOK55
%P 227-239
%K AA08
%A Shi Tie Wang
%T Modal Logic and Program Verification
%J Acta Sci. Natur. Univ. Amoien
%V 24
%D 1985
%N 3
%P 300-307
%K AA08
%X Chinese with English Summary
%A S. J. Young
%A C. Proctor
%T UFI - An Experimental Frame Language Based on Abstract Data Types
%J The Computer Journal
%V 29
%N 4
%D AUG 1986
%P 340-347
%K AI16
%A H. J. Eibner
%A D. Holzel
%T Aspects of Expert Systems Applications in Medicine
%J Angewandte Informatik
%N 7
%D JUL 1986
%P 279-284
%K AI01 AA01
%A Q Tian
%A Michael N. Huhns
%T Algorithms for Subpixel Registration
%J MAG86
%P 220-233
%K AI06
%A Vladimir Kim
%A Leonid Yaroslavskii
%T Rank Algorithms for Picture Processing
%J MAG86
%P 234-258
%K AI06
%A Michael H. Brill
%T Perception of Transparency in Man and Machine: A Comment on Beck
%J MAG86
%P 270-271
%K AI06 AI08
%A Ye. K. Gordiyenko
%A V. N. Zakharov
%T Process Management in Knowledge Bases
%J Soviet Journal of Computer and Systems Sciences
%V 24
%N 1
%D JAN-FEB 1986
%P 81-95
%K H03
%A C. L. Ramsey
%A J. A. Reggia
%A D. S. Nau
%A A. Ferrentino
%T A Comparative Analysis of Methods for Expert Systems
%J International Journal of Man-Machine Studies
%V 24
%N 5
%D MAY 1986
%K AI01
%P 475
%A Bruce L. Golden
%A A. Hevner
%A D. Power
%T Decision Insight Systems for Microcomputers: a Critical Evaluation
%J MAG87
%P 287-300
%K AI13
%A Arjang A. Assad
%A Bruce L. Golden
%T Expert Systems, Microcomputers and Operations Research
%J MAG87
%P 301-322
%K H01 AI01
%A Jeffrey Perrone
%T Down from the Clouds: Notes on "Expert Systems, Microcomputers, and Operations Research"
%J MAG87
%P 323-324
%K H01 AI01
%A E. Eugene Carter
%T Creating a Shell-based Expert System
%J MAG87
%P 325-328
%K T03 AI01
%A James A. Reggia
%A Sanjeev B. Ahuja
%T Selecting an Approach to Knowledge Processing
%J MAG87
%P 329-332
%K AI01
%A Richard T. Wong
%T Comment on "Expert Systems, Microcomputers, and Operations Research"
%J MAG87
%P 333
%K AI01
------------------------------
End of AIList Digest
********************
∂17-Oct-86 0840 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #223
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 17 Oct 86 08:40:14 PDT
Date: Thu 16 Oct 1986 22:23-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #223
To: AIList@SRI-STRIPE
AIList Digest Friday, 17 Oct 1986 Volume 4 : Issue 223
Today's Topics:
Bibliography - Leff Bibliography Continuation #4
----------------------------------------------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Bibliography (continued)
%A W. H. H. J. Lunscher
%A M. P. Beddoes
%T Optimal Edge Detector Evaluation
%J IEEE Transactions on Systems, Man and Cybernetics
%V SMC-16
%N 2
%D MAR/APR 1986
%P 304-312
%K AI06
%A L. F. Chaparro
%A M. Boudaoud
%T Image Multimodeling and a Two-Dimensional Multicategory Wiener Filter
%J IEEE Transactions on Systems, Man and Cybernetics
%V SMC-16
%N 2
%D MAR/APR 1986
%P 312-316
%K AI06
%A Mitch Betts
%T In with Electronic Filing System, Out with Antique Regulations
%J ComputerWorld
%D JUL 21, 1986
%V 20
%N 29
%P 15
%K AI02 AA14 AA06 Securities and Exchange Commission SEC
Internal Revenue Service IRS
%X The Securities and Exchange Commission is requiring companies
to do their mandatory filings on computer readable media. The
SEC has tried an AI system to extract financial data from the
reports to be input into calculations. This worked with a
94 percent success rate but the SEC is now requiring that these
figures be tagged for easy extraction. The IRS would like to
store tax returns on optical disk and then destroy the
paper copies but the Department of Justice is opposed because
this would prevent forensic examination of fingerprints or
signatures on the physical returns.
%A Charles Babcock
%T AI to drive 5GL Software
%J ComputerWorld
%D JUL 21, 1986
%V 20
%N 29
%P 23+
%K George Schussel AA08 AA06 AT14
%X George Schussel, president of Digital Consulting
Associates, said that AI would become part of fifth
generation languages to help automate the programming
of business software systems.
%A Eddy Goldberg
%T Expert System Financial Tool Out for Small Business
%J ComputerWorld
%D JUL 21, 1986
%V 20
%N 29
%P 28
%K AT02 H01 AA06 Sterling Wentworth Businessplan
financial planner
%X Sterling Wentworth announced that Businessplan would
be released in August. This is a tool for financial
planners and contains 7500 decision rules and 500 parameters
that can be adjusted by the financial planner for his
philosophy and style. It costs $4500 and runs on IBM
PC's.
%A Leilani Allen
%T The Cost of an Expert
%J ComputerWorld
%D JUL 21, 1986
%V 20
%N 29
%P 59-68
%K Knowledge Consortium Campbell's Soup Company AI01
%X quantifies the cost of a human expert in salary, overhead,
etc., so that people can judge whether building an expert system
to replace him is worth the investment.
%T Tool Lets PC, 370 Share Applications
%J ComputerWorld
%D JUL 21, 1986
%V 20
%N 29
%P 81
%K H01 T03 Aion MVS AI01
%X Aion has two shell products, one for the IBM PC and the other for
IBM mainframes under MVS, which are fully compatible so that applications
can be shared. The MVS version sells for $60,000.
%T AI Eases Conversion from CAD to NC Format
%J Electronics
%D MAR 31, 1986
%P 67-68
%V 59
%N 13
%K AA26
%X PMX is selling an AI system that will convert IGES standard data to
numerical control programs. It runs on PC/XTs and costs $8,500, or
$11,500 with 3-D capabilities.
%A K. W. Ng
%A W. Y. Ma
%T Pitfalls in Prolog Programming
%J SIGPLAN Notices
%V 21
%N 4
%D APR 1986
%P 75-79
%K T02
%A Gerardo Cisneros
%A Harold V. McIntosh
%T Introduction to the Programming Language Convert
%J SIGPLAN Notices
%V 21
%N 4
%D APR 1986
%P 48-57
%K H01
%X A new applicative and transformation-based language that runs on 8080-
and 8086-based systems.
%A T. L. Huntsberger
%A C. Rangarajan
%A S. N. Jayaramamurthy
%T Representation of Uncertainty in Computer Vision Using Fuzzy Sets
%J IEEE Transactions on Computers
%V C-35
%D FEB 1986
%N 2
%P 145-157
%K Flash O04 AI06
%A Takeshi Yamakawa
%A Tsutomu Miki
%T The Current Mode Fuzzy Logic Integrated Circuits Fabricated by the Standard
CMOS Process
%J IEEE Transactions on Computers
%V C-35
%D FEB 1986
%N 2
%P 161-167
%K O04
%T Olivetti, Digitalk Ink Pact to Make AI Pack Version
%J Electronic News
%D MAY 19, 1986
%V 32
%N 1602
%P 44
%K Smalltalk H01 AT16 AI01
%X Olivetti has agreed with Digitalk to jointly develop an advanced version
of Digitalk's Smalltalk for 80286 based systems. Olivetti will use the
system as the primary expert in its Advanced Technology Center and will
integrate the resulting products with a proprietary environment.
%A Eddy Goldberg
%T Massively Parallel Processor Introduced
%J Computerworld
%D May 5, 1986
%V 20
%N 18
%P 4
%K connection machine AT02 H03
%X announcement thereof. They have sold six units. Applications demonstrated
include document retrieval, fluid dynamics modelling, creating contour maps
from aerial photographs, and VLSI design.
%A Eric Bender
%T HAL: Just Another Add on
%J Computerworld
%D May 5, 1986
%V 20
%N 18
%P 19+
%K Lotus AI02 H01 AA15
%X HAL, an English-language interface to Lotus 1-2-3, offers
transcripts, undo commands, and self-documenting English-language macros.
%A Barbara Robertson
%T AI Typist Now Rated Satisfactory for Novices
%J InfoWorld
%D MAY 19, 1986
%V 8
%N 20
%P 63-64
%K AA15 AT02 H01 AT07 AT03
%X A review of an updated version of this word processor, allegedly using
AI to help with correcting spelling errors. It got the following ratings:
.DS L
Overall: 5.4
Performance: Satisfactory
Documentation: Poor
Ease of Learning: Very Good
Ease of Use: Very Good
Error Handling: Satisfactory
Support: Very Good
Value: Satisfactory
.DE
%A Hank Kee
%T PC Managers Should Not Consider AI a Panacea
%J InfoWorld
%D MAY 19, 1986
%V 8
%N 20
%P 34
%K AI01
%X column argues that we need more applications and fewer shells and that
AI shells should be oriented towards the non-programmer. [See August
Spang Robinson Report for a list of real Expert Systems that are being
used. LEFF]
%T Expert Systems Firm Gets Boost
%J Electronics
%D MAY 5, 1986
%V 59
%N 19
%P 64
%K France AI01 AT16 Cognitech AA05
%X Cognitech got four million dollars of additional capital.
They got 53 expert system orders including one from Pechiney for a system
analyzing faults in cast aluminum.
%A D. D. Kary
%A P. L. Juell
%T TRC: An Expert System Compiler
%J Sigplan Notices
%V 21
%N 5
%D MAY 1986
%P 64-68
%K air cargo C T03
%X describes an expert system tool which translates input into C code.
It has been applied to an air cargo routing problem. When the system
was translated from a LISP based expert system tool running on the VAX
to TRC, the execution time went down from hours to seconds. The system
consists of an expert system running in conjunction with a mathematical
optimization technique. The expert system handles things like incompatibilities
between objects. The results are as good as or better than those of human experts.
%A J. B. Marti
%A A. C. Hearn
%T REDUCE as a Lisp Benchmark
%J MAG60
%P 8-16
%K T01
%X CPU times for various machines running the REDUCE timing test.
REDUCE is a symbolic math package written in LISP. (There
are more machines and other information in the article)
.TS
tab(~);
l n.
Amdahl 470 V8~7.2
Apollo DN 600~89.9
CDC Cyber 170/825~106.8
DEC 1099~17.7
DEC 2020~122.8
DEC 2060~22.5
DEC VAX 11/750~78.7
DEC VAX 11/780~50.3
Facom M-382~3.6
Hewlett Packard 9836~65.3
Hitachi S-810~2.8
IBM 3031~40.1
IBM 3084~5.4
IBM 4341 Model 1~52.0
IBM 4341 Model 2~30.1
Robotron ES-1040~149.2
Sage IV~224.8
Siemens 7890~3.8
SML Darkstar~227.9
Symbolics 3600~45.0
Tektronix 4404~120.1
Xerox Dolphin~322.0
.TE
%A J. W. Shavlik
%A G. F. DeJong
%T Computer Understanding and Generalization of Symbolic Mathematical
Calculations: A Case Study in Physics Problem Solving
%J MAG61
%P 148-153
%K AA16 AA07 AI04
%A M. Hadzikadic
%A F. Lichtenberger
%A D. Y. Y. Yun
%T An Application of Knowledge-Base Technology in Education:
A Geometry Theorem Prover
%J MAG61
%P 141-147
%K AA13 AA07 T02 H01
%A J. S. Vitter
%A R. A. Simons
%T New Classes for Parallel Complexity: A Study of Unification and Other
Complete Problems for P [Script P]
%J IEEE Transactions on Computers
%D MAY 1986
%V C-35
%N 5
%P 403-418
%K AI11 O06 H03
%X Parallel algorithms for unification in $O ( E over P + V log P)$ or
$O( alpha (2E,V) E over P + V )$,
where E is the number of edges and V is the number of vertices in the expression
graph, P is the number of processors, and $alpha$ is the inverse Ackermann
function.
%A Karen Sorensen
%T AT&T Leads New Scanner Parade
%J InfoWorld
%V 8
%N 19
%D MAY 12, 1986
%P 17
%K AI06 Vision Research Canon
%X AT&T has an Image Director for digitizing an 8.5 by 11 paper at 100 by 100
resolution for $2,885.00. Vision Research has an 8.5 by 11 scanner for
$2,495.00. Canon has a scanner for $1,190; OCR software is available for
$595.00 to go with it.
%T Study Says Australia Needs AI Development
%J InfoWorld
%V 8
%N 19
%D MAY 12, 1986
%P 30
%K AI16
%X According to an Australian government report, Australia has an international
strength in expert systems but needs help to commercialize that work.
%A Charles Babcock
%T Cobol-Based AI Shell Bows
%J ComputerWorld
%D SEP 1, 1986
%V 20
%N 35
%K McCormack and Dodge John B. Landry Distribution Management Systems T03 AI01
AT02
%X Distribution Management Systems (DMS) will be producing expert system shells
written in Cobol designed to be integrated into mainstream MIS.
Releases scheduled are DEC for October, MVS/CICS in January and the IBM PC
for first quarter 1987.
%T Fujitsu Commits to AI Market
%J ComputerWorld
%D SEP 1, 1986
%V 20
%N 35
%P 15
%K AT04 AT02 H01 T03 AI01 GA01
%X Fujitsu has an expert system shell
running on its FM 16 Beta PC, costing $2,940 and oriented to the Japanese
language.
%A Michael Sullivan-Trainor
%T In Depth
%J ComputerWorld
%D SEP 1, 1986
%V 20
%N 35
%P 55-62
%K AI01 AA18 AA06 Perks budget support personnel
%X describes development and functionality of an expert system for budget
analysis for the US Navy and one to help Army force designers determine how
many support personnel are needed.
%A Maura McEnaney
%T Lefebvre Signs on with Expert Systems Developer Cognitive
%J ComputerWorld
%D SEP 1, 1986
%V 20
%N 35
%P 95
%K AI01 AT11 AT16 Multimate
%X Richard Lefebvre, former chief operating officer for Multimate, is
now president and CEO of Cognitive Systems Inc.
%T Japanese Launch Language Project
%J InfoWorld
%D SEP 1, 1986
%V 8
%N 35
%P 18
%K AI02 GA01 Hitachi NEC Fujitsu Thailand Chinese
%X MITI will launch a 7-year, 39 million effort to develop translation
systems between Japanese and other Asian languages.
%T The Next Revolution
%J ComputerWorld
%D SEP 15, 1986
%V 20
%N 37
%P 16
%K AA06 AT22
%X Editorial stating:
The recent announcement of an expert system shell written in COBOL and
designed to be integrated into mainframe applications signals the widespread
integration of artificial intelligence into MIS. MIS managers should
have their existing staffs get involved in AI and look out for applications
of AI to their shops. They should be "movers" and not "responders" during
this next phase of the computer revolution. [ComputerWorld is a weekly
newspaper with one of the largest circulations in the computer community. LEFF]
%A Harvey P. Newquist
%T Forty-bit Architecture: Latest in Push for More Power
%J ComputerWorld
%D SEP 15, 1986
%V 20
%N 37
%P 17
%K H02 Integrated Inference SM45000
%X discusses the new Integrated Inference Machines SM45000, which uses
a 40-bit word.
%T Artificial-Intelligence Work Gains Mainstream Acceptance
%J The Institute
%V 10
%N 10
%P 1+
%D OCT 1986
%K AI07 Nils Nilsson Feigenbaum Minsky McDermott Schorr IBM AT14
%X discusses statements by Edward Feigenbaum, Marvin Minsky, Drew McDermott and
Nils Nilsson at the recent AAAI conference. Edward Feigenbaum "claimed that
every time an area of AI becomes successful, it is no longer considered
Artificial Intelligence." Marvin Minsky said that there isn't necessarily
anything corresponding to "intelligence." Nils Nilsson discussed robotics.
They applauded IBM for its "embracing" of AI and for acknowledging
university research. However, there was a complaint about a long conversation
with IBM representatives in which they asked what new techniques with
applications to new products would come out of MIT in the next eighteen
months.
%A Charles Babcock
%T Landry Returns to the Fray
%J ComputerWorld
%D SEP 8, 1986
%V 20
%N 36
%P 19+
%K John Landry McCormack and Dodge Distribution Management Systems AA06
%X Discusses John Landry, head of the Impact/AE effort; Impact/AE is the
expert system shell for use in COBOL environments.
%A Eddy Goldberg
%T AI Debuts Move Expert Systems into Mainstream Computing
%J ComputerWorld
%D SEP 8, 1986
%V 20
%N 36
%P 22
%K AAAI-86 Xerox CommonLoops Texas Instruments Vaxstation DEC Apollo Franz Aion
Lisp Machine H02 H01 T03 T01
%X reviews some announcements made at AAAI-86
%A Mitch Betts
%T Archives Gets Expert System
%J ComputerWorld
%D SEP 8, 1986
%V 20
%N 36
%P 93
%K AA14
%X The National Archives is developing a prototype expert system to help users
who have vague requests for information. In tests, the computer and the
archivists agreed 66% of the time; 21% of the answers were achieved by the
computer and not the archivists, 13% of the time the archivists gave the
answer and not the computer, and 7% of the time the computer was "simply
wrong." The prototype covers the old Bureau of Land Management records and
was written with M.1.
%T DEC Unveils High-end VAX
%J ComputerWorld
%D SEP 8, 1986
%V 20
%N 36
%P 107
%K DEC T01
%X DEC introduced the AI Vaxstation/GPX, which is a color version of the
MicroVAX II.
%T New Products/Software and Services
%J ComputerWorld
%D SEP 8, 1986
%V 20
%N 36
%P 111
%K T03 OPS5 Data Directions DDi-OPS Xerox 1100
%X Data Directions, 37 Jerome Ave., Bloomfield, Conn 06002, has released
an OPS-5 for the Xerox Corp. 1100 Lisp Machine costing $995.00
%A Alice LaPlante
%T Communications Program to Help Novices, Experts
%J InfoWorld
%D SEP 8, 1986
%V 8
%N 36
%P 16
%K AA08 AI01 H01 AA15
%X Costing $49.95, this system helps users handle microcomputer
communications. It helps configure Smartcom II, Crosstalk, and Concept
Development's Line Plus. It can also help design a serial cable.
%T Symbolics Compiler Gets DOD Approval
%J Electronic News
%V 32
%N 1618
%D SEP 8, 1986
%P 26
%K H02 Symbolics Tempest Ada
%X DOD validated Symbolics' Ada compiler, which costs $3,600.00.
They also brought out a Tempest version of their 3645 processor, which will
cost $104,900.
%T Commercializing AI Provides 16,000 Jobs in US
%J Electronics
%D SEP 18, 1986
%P 23
%V 59
%N 13
%K AT04
%X There are 16,000 people now working in the US to commercialize AI
technologies. This excludes people in academe and research organizations.
%T Vision Processor on a Board Goes for $10,000
%J Electronics
%D SEP 18, 1986
%P 28
%V 59
%N 13
%K AI06 AT02
%X The 2000/VP costs $10,000 and is said to be comparable to $40,000 boards
%T Dainichi Kiko Asks Court Protection
%J Electronics
%D SEP 18, 1986
%P 114
%V 59
%N 13
%K AI07 AT16 GA01
%X This company, a fast-growing Japanese robot maker, sought court protection
from creditors.
%T Vision System Checks Assembled Boards
%J Electronics
%D SEP 18, 1986
%P 103
%V 59
%N 13
%K AI06 AA26 AA04
%X Intellidex has a new system that uses ten cameras to check PC boards.
%A Peggy Watt
%T AI Languages for Mac, IBM PC, VAX Introduced
%J ComputerWorld
%D SEP 22, 1986
%V 20
%N 38
%P 35
%K T02 H01 T01 AT02 logo macintosh VAX
%X Expertelligence will be selling an IBM PC version of Prolog
and versions of LISP for Macintosh and VAX. The system uses Macintosh-like
windows and pull-down menus.
%A Pat Shipman
%T The Recent Life of an Ancient Dinosaur
%J Discover
%D OCT 1986
%V 7
%N 10
%K analogy functional anatomy AA10
%X shows where reasoning by analogy went wrong and where it went right in
determining the nature of the Iguanodon from various bones. He argues
that analogy from functional similarities is valid, but that analogy from
mere circumstantial association, where there is no causal relationship
between the sets of traits in question, is not. Might be of interest to
those developing AI systems that reason by analogy.
%A D. Snyers
%A A. Thayse
%T Algorithmic State Machine Design and Automatic Theorem Proving: Two Dual
Approaches to the Same Activity
%J IEEE Transactions on Computers
%V C-35
%N 10
%D OCT 1986
%K AI11 AA08
%X Transformations acting on P-functions can be interpreted in terms of
synthesizing programs consisting of if-then-else and do constructs, and in
terms of theorem proving. Attempts to show a relation between Prolog and
logic design.
%A Ralph Emmett Carlyle
%T Sneaking in the Back
%J Datamation
%D OCT 1, 1986
%V 32
%N 19
%P 32+
%K AT02 Cullinet MSA McCormack and Dodge AA06 MIS Impact/AE Distribution
Management Incorporated AION CICS
%X MSA is adding rules-based software to Information Expert and will
be announcing stand alone systems for expert systems. Boole and Babbage
and others will be adding expert systems to their performance measurement
and capacity planning systems. Aion will be able to run its expert
systems under CICS. (Interview with Bob Goldman of Artificial Intelligence
Corporation which markets Intellect. They are developing expert systems,
voice recognition systems and software to run under IBM's DB2 database
system.)
%A James T. Brady
%T A Theory of Productivity in the Creative Process
%J IEEE Computer Graphics and Applications
%D May 1986
%V 6
%N 5
%P 25-34
%K AI08 roll system response time
%X discusses the state of "being on a roll" where everything seems to go right.
Programmers and engineers who used terminals found that for programmers
system response time dropping from 2.5 to .3 seconds increased productivity
by a factor of two and for engineers in a graphic applications environment,
productivity could go up as much as nine times for a drop from 1.5 seconds
to 0.3 seconds. Develops an analytical model to explain these empirical
results.
%A Ellis S. Cohen
%A Edward T. Smith
%A Lee A. Iverson
%T Constraint Based Tiled Windows
%J IEEE Computer Graphics and Applications
%D May 1986
%V 6
%N 5
%P 35-45
%K AI15 AI01
%X discusses using rule-based techniques to automatically determine the
size and locations of windows in a tile-based system, where
windows do not "overlap" but are resized so they all fit together on
the rectangle that forms the screen.
%A G. J. Li
%A B. W. Wah
%T Coping with Anomalies in Parallel Branch-and-Bound Algorithms
%J IEEE Transactions on Computers
%D JUN 1986
%V C-35
%N 6
%P 568-573
%K H03 AI03
%X Sufficient conditions that guarantee no degradation in performance
due to parallelism, and necessary conditions for parallelism
to yield a speedup greater than the number of processors, are found.
%A Ed Winfield
%T Image Processing Products for the Q-Bus Meet Industry Needs for Precision
Inspection
%J Hardcopy
%V 6
%N 7
%D JUL 1986
%P 83-94
%K AI06 AT02
%X Data Translation
.br
DT2651 Frame Grabber 512 x 512, on-board ALU
.br
Datacube
.br
QVG-153 768x512 x 8 bit frame capture, can be expanded to 24 bits per pixel,
daughterboard to do processing
.br
Matrox
.br
QFAF-512 512 x 512 x 4
.br
Reticon
.br
SB6320 (interface to reticon's solid-state cameras)
.br
Imaging Technology
.br
AP512,FB512,ALU512, processor, display and converter for 512 by 512
(hardware for histograms and feature extraction)
%A John Naughton
%T Artificial Intelligence: Can DEC Stay Ahead?
%J Hardcopy
%V 6
%N 7
%D JUL 1986
%P 113-117
%K AI01 AA26 AA21
%X IDT helps engineers locate field-replaceable units in PDP 11-03's.
(description of other of DEC's AI expert systems and experiences.)
%A T. F. Knoll
%A R. C. Jain
%T Recognizing Partially Visible Objects Using Feature Indexed Hypotheses
%J IEEE Journal of Robotics and Automation
%D MAR 1986
%V RA-2
%N 1
%P 3-13
%K AI06
%X develops an algorithm for isolating partially matching objects at cost
$O( sqrt p ) r$, where p is the number of possible objects and r is the
number of redundancies.
%A E. K. Wong
%A K. S. Fu
%T A Hierarchical Orthogonal Space Approach to Three-Dimensional Path Planning
%J IEEE Journal of Robotics and Automation
%D MAR 1986
%V RA-2
%N 1
%P 42-52
%K AI07 AI03 AI09
------------------------------
End of AIList Digest
********************
∂18-Oct-86 2246 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #224
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 18 Oct 86 22:46:41 PDT
Date: Sat 18 Oct 1986 20:30-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #224
To: AIList@SRI-STRIPE
AIList Digest Sunday, 19 Oct 1986 Volume 4 : Issue 224
Today's Topics:
Mathematics - PowerMath,
Learning - Neural Networks & Connectionist References,
Expert Systems - ESE
----------------------------------------------------------------------
Date: 15 Oct 86 19:30:47 GMT
From: hao!bill@seismo.css.gov (Bill Roberts)
Subject: Algebraic manipulators for the Mac
Has anyone in netland heard of any algebraic manipulator systems for the
Macintosh? I recently saw where a company called Industrial Computations Inc.
of Wellesley, MA is marketing a program called "PowerMath". The ad reads
Type in your problem, using conventional math notation, and
PowerMath will solve your calculus, algebra and matrix
problems. PowerMath does factorials, summations, simultaneous
equations, plots, Taylor series, trigonometry and allows
unlimited number size.
That last statement ("...unlimited number size.") hints at PowerMath being a
symbolic computation engine as opposed to an equation solver like TKSolver.
Thanks in advance for any input.
Bill Roberts
NCAR/HAO
Boulder, CO
!hao!bill
"...most people spend their lives avoiding intense situations,
a Repo man seeks them out!"
------------------------------
Date: 12 Oct 86 23:10:00 GMT
From: uiucuxa!lenzini@uxc.cso.uiuc.edu
Subject: To: Bob Caviness
To : Bob Caviness
Sorry about this posting but I can't seem to get through to Bob Caviness
at the University of Del.
Here are a couple of integrals that you can cut MACSYMA loose on. I've been
trying to use the program myself but the results I've been getting are
unbelievably complex (read: 8-page constants that I can't seem to simplify).
Hopefully you have expanded the integration capabilities enough to handle this.
Thanks again.
inf
/
!
!(A + B*(x)↑(1/2))↑2 + (C*x + B*(x)↑(1/2))↑2 D
I = !--------------------------------------------- * ----------- cos(E*x)dx
1 !(A + B*(x)↑(1/2))↑2 + (x + B*(x)↑(1/2))↑2 D↑2 + x↑2
!
/
0
I = same integral as I without the cos(E*x) term
2 1
Any help would be greatly appreciated.
Thanks in advance.
Andy Lenzini
University of Illinois.
------------------------------
Date: 17 Oct 86 23:55:03 GMT
From: decvax!dartvax!merchant@ucbvax.Berkeley.EDU (Peter Merchant)
Subject: Re: Algebraic manipulators for the Mac
> ...I recently saw were a company called Industrial Computations Inc.
> of Wellesley, MA is marketing a program called "PowerMath". The ad reads
>
> Type in your problem, using conventional math notation, and
> PowerMath will solve your calculus, algebra and matrix
> problems. PowerMath does factorials, summations, simultaneous
> equations, plots, Taylor series, trigonometry and allows
> unlimited number size.
>
> That last statement ("...unlimited number size.") hints at PowerMath being a
> symbolic computation engine as opposed to an equation solver like TKSolver.
> Thanks in advance for any input.
> Bill Roberts
I had a chance to use PowerMath and was severely impressed. It does all sorts
of mathematical functions and has a very nice Macintosh interface. I have a
feeling, though, that this program was originally designed for a mainframe.
I would love to see PowerMath run on a Mac with a Prodigy upgrade, or maybe
a HyperDrive 2000. I used one on a 512K Mac and, while it was very good,
was the most slowest (yes, I meant to do that) program I had ever seen. The
program took minutes to do what TK!Solver did in seconds.
On the other hand, it did do everything it advertised. Made good graphs, too.
If time is not a problem for you, I'd really suggest it. If anyone has details
on it running on a Prodigy upgrade, PLEASE LET ME KNOW!
--
"Do you want him?! Peter Merchant
Or Do you want me!?
'Cause I want you!"
------------------------------
Date: 18 Oct 86 15:00:39 GMT
From: clyde!watmath!watnot!watmum!bwchar@caip.rutgers.edu (Bruce Char)
Subject: Re: Algebraic manipulators for the Mac
There is an article by two of the authors of PowerMath in the Proceedings of
the 1986 Symposium on Symbolic and Algebraic Computation (sponsored by
ACM SIGSAM): "PowerMath, A System for the Macintosh", by J. Davenport and
C. Roth, pp. 13-15. Abstract from the paper:
PowerMath is a symbolic algebra system for the MacIntosh
computer. This paper outlines the design decisions that were
made during its development, and explains how the novel
MacIntosh environment helped and hindered the development of the
system. While the interior of PowerMath is fairly conventional, the
user interface has many novel features. It is these that make
PowerMath not just another microcomputer algebra system.
Bruce Char
Dept. of Computer Science
University of Waterloo
------------------------------
Date: 17 Oct 86 05:34:57 GMT
From: iarocci@eneevax.umd.edu (Bill Dorsey)
Subject: simulating a neural network
Having recently read several interesting articles on the functioning of
neurons within the brain, I thought it might be educational to write a program
to simulate their functioning. Being somewhat of a newcomer to the field of
artificial intelligence, my approach may be all wrong, but if it is, I'd
certainly like to know how and why.
The program simulates a network of 1000 neurons. Any more than 1000 slows
the machine down excessively. Each neuron is connected to about 10 other
neurons. This choice was rather arbitrary, but I figured the number of
connections would be proportional to the cube root of the number of neurons
since the brain is a three-dimensional object.
For those not familiar with the basic functioning of a neuron, as I
understand it, it functions as follows: Each neuron has many inputs coming
from other neurons and its output is connected to many other neurons. Pulses
coming from other neurons add to or subtract from its potential. When the
potential exceeds some threshold, the neuron fires and produces a pulse. To
further complicate matters, any existing potential on the neuron drains away
according to some time constant.
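The dynamics described above amount to what is usually called a leaky
integrate-and-fire update. A minimal sketch in Python; the threshold and
decay constants here are arbitrary illustrative values, not the ones the
program actually uses:

```python
# Leaky integrate-and-fire update for one time step.
# THRESHOLD and DECAY are illustrative, not the values from the post.
THRESHOLD = 1.0   # potential at which a neuron fires
DECAY = 0.9       # fraction of potential retained each step (the "time constant")

def step(potentials, incoming):
    """potentials: current potential per neuron.
    incoming: summed pulse input per neuron (positive or negative).
    Returns (new_potentials, fired), where fired flags neurons that pulsed."""
    new_potentials = []
    fired = []
    for v, pulse in zip(potentials, incoming):
        v = v * DECAY + pulse       # drain away, then integrate arriving pulses
        if v >= THRESHOLD:          # threshold exceeded: fire and reset
            fired.append(True)
            v = 0.0
        else:
            fired.append(False)
        new_potentials.append(v)
    return new_potentials, fired
```

Resetting a fired neuron's potential to zero is itself an assumption;
subtracting the threshold instead is another common choice.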
In order to simplify the program, I took several short-cuts in the current
version of the program. I assumed that all the neurons had the same threshold,
and that they all had the same time constant. Setting these values randomly
didn't seem like a good idea, so I just picked values that seemed reasonable,
and played around with them a little.
One further note should be made about the network. For lack of a good
idea on how to organize all the connections between neurons, I simply
connected them to each other randomly. Furthermore, the determination of
whether a neuron produces a positive or negative pulse is made randomly at
this point.
In order to test out the functioning of this network, I created a simple
environment and several inputs/outputs for the network. The environment is
simply some type of maze bounded on all sides by walls. The outputs are
(1) move north, (2) move south, (3) move west, (4) move east. The inputs are
(1) you bumped into something, (2) there's a wall to the north, (3) wall to
the south, (4) wall to the west, (5) wall to the east. When the neuron
corresponding to a particular output fires, that action is taken. When a
specific input condition is met, a pulse is added to the neuron corresponding
to the particular input.
The initial results have been interesting, but indicate that more work
needs to be done. The neuron network indeed shows continuous activity, with
neurons changing state regularly (but not periodically). The robot (!) moves
around the screen, generally winding up in a corner somewhere, from which it
occasionally wanders a short distance away before returning.
I'm curious if anyone can think of a way for me to produce positive and
negative feedback instead of just undifferentiated feedback. An analogy
would be pleasure versus pain in humans. What I'd like to do is provide
negative feedback when the robot hits a wall, and positive feedback when it
doesn't. I'm hoping that the robot will eventually 'learn' to roam around
the maze without hitting any of the walls (i.e. learn to use its senses).
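One possibility would be a reward-modulated weight update: strengthen the
connections that were recently active when the feedback is positive, and
weaken them when it is negative. This is sketched below for concreteness only;
the weight table and names are hypothetical, not part of the actual program:

```python
def update_weights(weights, recently_active, reward, rate=0.01):
    """Pleasure/pain sketch (hypothetical, not from the program):
    nudge the weight of each recently used connection up on reward
    (+1) and down on punishment (-1).  `weights` maps a source neuron
    index to a dict of {target index: connection weight}."""
    for (i, j) in recently_active:
        weights[i][j] += rate * reward
    return weights
```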
I'm sure there are more conventional ai programs which can accomplish this
same task, but my purpose here is to try to successfully simulate a network
of neurons and see if it can be applied to solve simple problems involving
learning/intelligence. If anyone has any other ideas for which I may test
it, I'd be happy to hear from you. Furthermore, if anyone is interested in
seeing the source code, I'd be happy to send it to you. It's written in C
and runs on an Atari ST computer, though it could easily be modified to
run on almost any machine with a C compiler (the faster it is, the more
neurons you can simulate reasonably).
[See Dave Touretzky's message about connectionist references. -- KIL]
--
| Bill Dorsey |
| 'Imagination is more important than knowledge.' |
| - Albert Einstein |
| ARPA : iarocci@eneevax.umd.edu |
| UUCP : [seismo,allegra,rlgvax]!umcp-cs!eneevax!iarocci |
------------------------------
Date: 15 Oct 86 21:12 EDT
From: Dave.Touretzky@A.CS.CMU.EDU
Subject: the definitive connectionist reference
The definitive book on connectionism (as of 1986) has just been published
by MIT Press. It's called "Parallel Distributed Processing: Explorations in
the Microstructure of Cognition", by David E. Rumelhart, James H. McClelland,
and the PDP research group. If you want to know about connectionist models,
this is the book to read. It comes in two volumes, at about $45 for the set.
For other connectionist material, see the proceedings of IJCAI-85 and the
1986 Cognitive Science Conference, and the January '85 issue of the
journal Cognitive Science.
-- Dave Touretzky
PS: NO, CONNECTIONISM IS NOT THE SAME AS PERCEPTRONS. Perceptrons were
single-layer learning machines, meaning they had an input layer and an
output layer with exactly one layer of trainable weights in between. No
feedback paths were permitted between units -- a severe limitation. The
learning
algorithms were simple. Minsky and Papert wrote a well-known book showing
that perceptrons couldn't do very much at all. They can't even learn the
XOR function. Since they had initially been the subject of incredible
amounts of hype, the fall of perceptrons left all of neural network
research in deep disrepute among AI researchers for almost two decades.
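The XOR point is easy to verify: XOR is not linearly separable, so no single
layer of weights with a threshold can get all four cases right, however long
the perceptron learning rule runs. A quick illustration (a Python sketch,
not anyone's historical code):

```python
# A perceptron: one layer of weights plus a threshold, trained with the
# classic perceptron learning rule.  On XOR it can never classify all
# four input patterns correctly, because XOR is not linearly separable.
def train_perceptron(data, epochs=100, rate=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - out                  # 0 when correct
            w[0] += rate * err * x[0]
            w[1] += rate * err * x[1]
            b += rate * err
    return w, b

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(XOR)
mistakes = sum(1 for x, t in XOR
               if (1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0) != t)
# mistakes is always at least 1, whatever the training schedule.
```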
In contrast to perceptrons, connectionist models have unrestricted
connectivity, meaning they are rich in feedback paths. They have rather
sophisticated learning rules, some of which are based on statistical
mechanics (the Boltzmann machine learning algorithm) or information
theoretic measures (G-maximization learning). These models have been
enriched by recent work in physics (e.g., Hopfield's analogy to spin
glasses), computer science (simulated annealing search, invented by
Kirkpatrick and adapted to neural nets by Hinton and Sejnowski), and
neuroscience (work on coarse coding, fast weights, pre-synaptic
facilitation, and so on).
Many connectionist models perform cognitive tasks (i.e., tasks related to
symbol processing) rather than pattern recognition; perceptrons were mostly
used for pattern recognition. Connectionist models can explain certain
psychological phenomena that other models can't; for an example, see
McClelland and Rumelhart's word recognition model. The brain is a
connectionist model. It is not a perceptron.
Perhaps the current interest in connectionist models is just a passing fad.
Some folks are predicting that connectionism will turn out to be another
spectacular flop -- Perceptrons II. At the other extreme, some feel the
initial successes of ``the new connectionists'' may signal the beginning of
a revolution in AI. Read the journals and decide for yourself.
------------------------------
Date: 15 Oct 86 06:17:23 GMT
From: zeus!levin@locus.ucla.edu (Stuart Levine)
Subject: Re: Expert System Wanted
In article <2200003@osiris> chandra@osiris.CSO.UIUC.EDU writes:
>There is an expert system shell for CMS. It is called PRISM.
>PRISM is also called ESE (expert system environment).
>ESE is available from IBM itself. It is written in lisp and was most
>probably developed at IBM Watson Research Labs.
>
Could you give us more info? When we checked into the availability
of PRISM, we found that IBM was NOT making it available.
It would be interesting to know if that has changed.
Also, does it run in LISP (as in a lisp that someone would actually
own), or in IBM LISP?
------------------------------
Date: 15 October 1986, 20:54:09 EDT
From: "Fredrick J. Damerau" <DAMERAU@ibm.com>
Subject: correction on ESE
ESE (Expert System Environment)
is actually PASCAL-based, not LISP-based, and was developed at
the Palo Alto Scientific Center, not Yorktown Research.
Fred J. Damerau, IBM Research (Yorktown)
------------------------------
Date: Wed 15 Oct 86 17:05:23-PDT
From: Matt Pallakoff <PALLAKOFF@SUMEX-AIM.ARPA>
Subject: corrections to Navin Chandra note on AIList Digest
Navin,
I saw your note on IBM's expert system environment (ESE). I worked
one summer with the group that developed it. First, it's no longer
called PRISM. They changed that fine name, used throughout the
research and development, to Expert System Development Environment/
Expert System Consultation Environment, the two subsystems of ESE which
are sold separately or together. (I don't think they have reversed this
decision since I left.)
Secondly, it is written in PASCAL, not LISP. Finally, it was created
at the IBM Research Center in Palo Alto, California (where I worked).
I don't know a tremendous amount about it (having spent only a couple
months working on interfaces to it) but I might be able to give you some
general answers to specific questions about it.
Matt Pallakoff
------------------------------
End of AIList Digest
********************
∂19-Oct-86 0043 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #225
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Oct 86 00:43:19 PDT
Date: Sat 18 Oct 1986 20:50-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #225
To: AIList@SRI-STRIPE
AIList Digest Sunday, 19 Oct 1986 Volume 4 : Issue 225
Today's Topics:
Queries - Statistical Expert Systems &
Workshop on AI in Natural Resources and Environmental Planning,
Expert Systems - Savior & FRL,
Bibliography - Correction
----------------------------------------------------------------------
Date: Fri, 17 Oct 86 09:18:10 SET
From: "Adlassnig, Peter"
Subject: Statistical expert systems
We are interested in building a statistical expert system that is
intended to be used by physicians from our Medical School.
We would like to obtain information about
1) statistical expert systems in general
a) at universities and laboratories
b) commercially available systems
2) statistical expert systems in medicine
Any information is appreciated.
Peter Adlassnig, Department of Medical Computer Sciences, University of
Vienna Medical School
------------------------------
Date: 16 Oct 86 14:49:00 GMT
From: osiris!chandra@uxc.cso.uiuc.edu
Subject: GIS, Environmental, Nat. Resources
Applications of AI in NATURAL RESOURCES and
ENVIRONMENTAL PLANNING
Hi,
Recently, Molly Stock (of Univ. of Idaho) conducted a survey of AI applications
to forestry management. Her findings are pleasantly surprising. A
large number of universities and government agencies are currently building
interesting AI applications.
Sparked by the survey, we are now planning on holding a
NATIONAL WORKSHOP in this area. The purpose of this workshop is to
bring researchers together under one roof. We envision this workshop
as an opportunity for researchers to share ideas and lay down
directions for future research.
This letter is a probe. I'm trying to get a sense of "WHAT'S OUT
THERE". The areas covered are:
- Environmental planning & management
- Environmental impact statements
- Geographic information systems
- Natural resource planning
- Environmental modeling
- Other related areas
If you are involved in any such research and/or would be interested in
participating in a WORKSHOP, please contact me at the address below:
US-MAIL:
D. Navinchandra
Intelligent Engineering Systems Lab.
Room 1-241
Massachusetts Institute of Technology,
Cambridge, MA 02139
ARPA-NET:
dchandra@athena.mit.edu
Phone:
(617)577-8047 (call first)
(617)253-3880 (call if no answer at above number)
If you are currently involved in some projects and/or have technical
reports, I'd like to know about them. After this survey is complete,
a formal Announcement will be promulgated.
THANKS
D. Navinchandra
IESL, MIT
(P.S. At IESL, MIT we are working on building tools to build
Knowledge Based systems for Geographic Info Systems. We are doing this
research in collaboration with the Environmental group of the
Construction Engineering Research Lab, Champaign, Illinois.)
------------------------------
Date: Thu, 16 Oct 86 10:36:21 BST
From: J W T Smith (JWS AT UKACRL) <JWS%ibm-b.rutherford.ac.uk@Cs.Ucl.AC.UK>
Subject: General purpose ES for VM
Rutherford Appleton Laboratory, R1, 2.81, Ext 6487
In response to the request from Linda Littleton of PSU for information on an
Expert System for VM/CMS:
The Expert System shell called Savior is available for VM. It also runs on
PCs, Vaxes, Primes and other minis and micros. We have only used the PC
version.
The cost of the VM version in the UK is 15k pounds, but ISI offers a good
educational discount, at least in the UK.
The producing company is:
ISI Ltd
11 Oakdene Road
Redhill
Surrey RH1 7BT.
United Kingdom.
Telephone 0737 71327
I'm afraid I don't have a US address for ISI.
John Smith.
Bitnet: JWS at UKACRL
------------------------------
Date: 16 Oct 86 16:51:00 GMT
From: mcvax!ukc!einode!robert@seismo.css.gov (Robert Cochran)
Subject: Re: Public Domain Software for Expert Systems
>> From: meh@hou2d.UUCP (P.MEHROTRA)
>> Date: 11 Oct 86 22:06:27 GMT
>> Hi: I am looking for some software available in public domain
>> for building expert systems. I work in Unix environment and
>> have Franz LISP on my system. I already have OPS5. I am especially
>> interested in tools which use frames and/or semantic networks
>> for knowledge representation.
There is a full implementation of a Frame Representation Language being
distributed by a company in Ireland called Generics (Software) Ltd.
This FRL runs in any advanced lisp dialect - CommonLISP, FranzLISP, etc. -
on machines ranging from an IBM-PC to microVAX and VAX.
It's not exactly public domain stuff but it's not expensive
either ($300 - $500), and special licences are available for educational
institutions.
If interested, I suggest you contact them directly for further information
at : ....!mcvax!einode!genrix!mcgee.
------------------------------
Date: 16 Oct 86 09:55 EDT
From: WAnderson.wbst@Xerox.COM
Subject: Incorrect AIList Bibliographic Reference
The following reference was passed on to me from one of the AILists (I
don't know which one).
%A Klaus-Peter Adlassnig
%T Fuzzy Set Theory in Medical Diagnosis
%J IEEE Transactions on Software Engineering
%D NOV 1985
%V SE-11
%N 11
%P 260-265
%K AA01 AI01 O04
%X They developed systems for diagnosing rheumatologic diseases and
pancreatic disorders. They achieved 94.5 and 100 percent accuracy,
respectively.
I tried looking this up in my collection of IEEE Trans on SE, but it's
not there. Nov 1985 is an issue on AI, but there is no mention of Fuzzy
Set theory. In addition, the Nov 85 issue begins with page 1253. Also,
a perusal of the index of the Transactions for 1985 reveals no author
with the name Klaus-Peter Adlassnig, and only one entry in the subject
index under Fuzzy Sets: "Estimating in correctness of computer program
viewed as set of hierarchically structured fuzzy equivalence classes, by
F.B. Bastani, Sep 85, pp 857-864." Finally, pages 260 - 265 of Vol
SE-11 contain an article by David Parnas, et al., titled "The Modular
Structure of Complex Systems."
So here we have an example of an incorrect, online, bibliographic
reference. I wonder how many other mistakes are made when this sort of
data is entered. This is progress? ("Our entire card catalog is
online, but we still can't find anything ...." :-)
I wonder if this is in the new IEEE publication on expert systems,
Expert Magazine?
Bill Anderson
------------------------------
Date: WED, 10 JAN 84 17:02:23 CDT
From: E1AR0002%SMUVM1.BITNET@WISCVM.WISC.EDU
Subject: mistake
The corrected citations are indicated below. Sadly, several references
from an issue of IEEE Transactions on Systems, Man and Cybernetics got
misattributed to an issue of IEEE Transactions on Software Engineering
within ai.bib36 due to an editing mistake.
Of the 2600 references sent out in this format, this is the first complaint
I got about an error in one of the citations.
Of the three complaints I got about mistakes in a summary of something
I sent in, only one turned out to be my fault. The other two statements
were checked successfully against the article in question so the error was
on the part of the original author. I thus consider my error rate reasonable.
Typing in references by hand is something that I will probably only be
doing for a few more years. I suspect by then, I will get an optical
disk with all the journals on them and extract the information
directly. One publisher is already putting a bar-code like strip with
the table of contents in issues of their magazines.
I wonder whether it would be legal for someone to get a selective
dissemination service from a database provider like Dialog or ISI and
pipe that into AIList. I believe SIGART publishes the result of a
search of dissertation abstracts for AI related material on a regular
basis and SIGPLAN used to do the same for the NTIS database for
programming language materials.
__________________________________________________________________________
%A G. R. Dattatreya
%A L. N. Kanal
%T Adaptive Pattern Recognition with Random Costs and Its Applications to
Decision Trees
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 208-218
%K AI06 AA01 AI04 AI01 clustering spina bifida bladder radiology
%X applies clustering algorithm to results of reading radiographs of
the bladder. The system was able to determine clusters that corresponded
to those of patients with spina bifida.
%A Klaus-Peter Adlassnig
%T Fuzzy Set Theory in Medical Diagnosis
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%K AA01 AI01 O04
%X They developed systems for diagnosing rheumatologic diseases and pancreatic
disorders. They achieved 94.5 and 100 percent accuracy, respectively.
%A William E. Pracht
%T GISMO: A Visual Problem Structuring and Knowledge-Organization Tool
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 265-270
%K AI13 AI08 Witkin Geft AA06
%X discusses the use of a system for displaying effect diagrams on
decision making in a simulated business environment. The tool improved
net income production. The tool provided more assistance to those
who were more analytical than to those who used heuristic reasoning as
measured by the Witkin GEFT.
%A Henri Farreny
%A Henri Prade
%T Default and Inexact Reasoning with Possibility Degrees
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 270-276
%K O04 AI01 AA06
%X discusses storing, for each proposition, a pair consisting of the
probability that it is true and the probability that it is false, where these
two probabilities do not necessarily add up to 1. Inference rules have been
developed for such
a system including analogs to modus ponens, modus tollens and how to
combine two such ordered pairs applying to the same fact. These have
been applied to an expert system in financial analysis.
%A Chelsea C. White, III
%A Edward A. Sykes
%T A User Preference Guided Approach to Conflict Resolution in
Rule-Based Expert Systems
%J IEEE Transactions on Systems, Man and Cybernetics
%D MAR/APR 1986
%V SMC-16
%N 2
%P 276-278
%K AI01 multiattribute utility theory
%X discusses an application of multiattribute utility theory to
resolve conflicts between rules in an expert system.
------------------------------
End of AIList Digest
********************
∂19-Oct-86 0252 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #226
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Oct 86 02:52:01 PDT
Date: Sat 18 Oct 1986 21:01-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #226
To: AIList@SRI-STRIPE
AIList Digest Sunday, 19 Oct 1986 Volume 4 : Issue 226
Today's Topics:
Logic Programming - Proof of TMS Termination,
Philosophy - Review of Nagel's Book &
Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: Thu, 16 Oct 86 09:20 EDT
From: David A. McAllester <DAM@OZ.AI.MIT.EDU>
Subject: TMS Query Response
I saw a recent message concerning the termination of
belief revision in a Doyle-style TMS. Some time ago I
proved that determining the existence of a fixed point
for a set of Doyle justifications is NP-complete. It
is possible to give a procedure that terminates, but (assuming P != NP)
no such procedure can run in polynomial time in the worst
case. The proof is given below:
***********************************************************
DEFINITIONS:
A NM-JUSTIFICATION is an "implication" of the form:
(IN-DEPENDENCIES, OUT-DEPENDENCIES) => N
where IN-DEPENDENCIES and OUT-DEPENDENCIES are sets of nodes and N is
the justified node.
A labeling L marks every node as either "in" or "out". An
nm-justification is said to be "active" under a labeling L if every out
dependency in the justification is labeled "out" and every in dependency
of the justification is labeled "in".
Let J be a set of nm-justifications and L be a labeling. We say that a
node n is JUSTIFIED under J and L if there is some justification for n
which is active under the labeling L.
A set J of nm-justifications will be called Doyle Satisfiable if there
is a labeling L such that every justified node is "in" and every node
which is not justified is "out".
*******************
THEOREM: The problem of determining the Doyle satisfiability
of a set J of nm-justifications is NP-complete.
*******************
PROOF: PSAT can be reduced to Doyle satisfiability as follows:
Let C be any set of propositional clauses (i.e. a problem in PSAT).
For each atomic proposition symbol P appearing in C let P and
nP be two nodes and construct the following justifications:
({}, {nP}) => P (i.e. if nP is "out" then P is justified)
({}, {P}) => nP (i.e. if P is "out" then nP is justified)
We introduce an additional node F (for "false") and for each clause
(L1 or L2 ... or LN) in C we construct the justification:
({nL1, nL2, ..., nLn}, {F}) => F
where the node nLj is the node nP if Lj is the symbol P and nLj is the
node P if Lj is the literal (NOT P).
The set J of nm-justifications constructed in this way is
Doyle-Satisfiable iff the original set C is propositionally satisfiable.
To verify this last claim, note that if L is a labeling which satisfies J,
then exactly one of P and nP is "in": if P is "out" then nP is justified
and hence must be "in", and if P is "in" then nP is not justified and so
must be "out".
Next note that if L is a labeling satisfying J then F must be
"out": if F were "in" then F would have to be justified, but every
justification for F has F among its out-dependencies, so no justification
for F can be active while F is "in", a contradiction.
Finally note that a labeling L satisfies J just in case none of the
justifications for F are active, i.e. just in case the corresponding
truth assignment to the proposition symbols in C satisfies every clause.
**************
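The reduction can be checked mechanically with a brute-force satisfiability
tester (necessarily exponential in the number of nodes, consistent with the
theorem). The following Python sketch, including the P/nP gadget from the
proof, uses a representation of my own choosing, not anything from the
original note:

```python
from itertools import product

def doyle_satisfiable(nodes, justifications):
    """Brute-force search over all labelings.  Each justification is a
    triple (in_deps, out_deps, node); it is active when all in_deps are
    "in" and all out_deps are "out".  A labeling satisfies J iff every
    node is "in" exactly when some justification for it is active."""
    for bits in product((True, False), repeat=len(nodes)):
        label = dict(zip(nodes, bits))               # True means "in"
        justified = {n for (ins, outs, n) in justifications
                     if all(label[i] for i in ins)
                     and all(not label[o] for o in outs)}
        if all(label[n] == (n in justified) for n in nodes):
            return True
    return False

# The P/nP gadget from the proof: exactly one of P, nP ends up "in".
gadget = [((), ('nP',), 'P'),    # if nP is "out" then P is justified
          ((), ('P',), 'nP')]    # if P is "out" then nP is justified
```

A single node F justified only by ({}, {F}) => F is the canonical
unsatisfiable case: labeling F "in" leaves it unjustified, and labeling it
"out" makes it justified.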
David McAllester
------------------------------
Date: 16 Oct 86 07:21:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Book alert
This week's New Republic has a review of Thomas Nagel's (of
What-is-it-like-to-be-a-bat fame) new book, "The View from
Nowhere". For those interested in the philosophical issues
associated with the objective/subjective distinction, it
sounds like it's worth reading.
John Cugini <Cugini@nbs-vms>
------------------------------
Date: 15 Oct 86 23:17:57 GMT
From: mnetor!utzoo!utcsri!utai!me@seismo.css.gov (Daniel Simon)
Subject: Re: Searle, Turing, Symbols, Categories
In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>In response to my article <160@mind.UUCP>, Daniel R. Simon asks:
>
>> 1) To what extent is our discernment of intelligent behaviour
>> context-dependent?...Might not the robot version [of the
>> turing test] lead to the...problem of testers being
>> insufficiently skeptical of a machine with human appearance?
>> ...Is it ever possible to trust the results of any
>> instance of the test...?
>
>My reply to these questions is quite explicit in the papers in
>question:
>The turing test has two components, (i) a formal, empirical one,
>and (ii) an informal, intuitive one. The formal empirical component (i)
>is the requirement that the system being tested be able to generate human
>performance (be it robotic or linguistic). That's the nontrivial
>burden that will occupy theorists for at least decades to come, as we
>converge on (what I've called) the "total" turing test -- a model that
>exhibits all of our robotic and linguistic capacities.
By "nontrivial burden", do you mean the task of defining objective criteria
by which to characterize "human performance"? If so, you are after the same
thing as I am, but I fail to see what this has to do with the Turing test as
originally conceived, which involved measuring up AI systems against observers'
impressions, rather than against objective standards. Apparently, you're not
really defending the Turing test at all, but rather something quite different.
Moreover, you haven't said anything concrete about what this test might look
like. On what foundation could such a set of defining characteristics for
"human performance" be based? Would it define those attributes common to all
human beings? Most human beings? At least one human being? How would we
decide by what criteria to include observable attributes in our set of "human"
ones? How could such attributes be described? Is such a set of descriptions
even feasible? If not, doesn't it call into question the validity of seeking
to model what cannot be objectively characterized? And if such a set of
describable attributes is feasible, isn't it an indispensable prerequisite for
the building of a working Turing-test-passing model?
Please forgive my impertinent questions, but I haven't read your articles, and
I'm not exactly clear about what this "total" Turing test entails.
>The informal,
>intuitive component (ii) is that the system in question must perform in a
>way that is indistinguishable from the performance of a person, as
>judged by a person.
>
>Now the only reply I have for the sceptic about (ii) is
>that he should remember that he has nothing MORE than that to go on in
>the case of any other mind than his own. In other words, there is no
>rational reason for being more sceptical about robots' minds (if we
>can't tell their performance apart from that of people) than about
>(other) peoples' minds. The turing test is ALREADY the informal way we
>contend with the "other-minds" problem [i.e., how can you be sure
>anyone else but you has a mind, rather than merely acting AS IF it had
>a mind?], so why should we demand more in the case of robots? ...
>
I'm afraid I must disagree. I believe that people in general dodge the "other
minds" problem simply by accepting as a convention that human beings are by
definition intelligent. For example, we use terms such as "autistic",
"catatonic", and even "sleeping" to describe people whose behaviour would in
most cases almost certainly be described as unintelligent if exhibited by a
robot. Such people are never described as "unintelligent" in the sense of the
word that we would use to describe a robot who showed the exact same behaviour
patterns. Rather, we imply by using these terms that the people being
described are human, and therefore *would* be behaving intelligently, but for
(insert neurophysiological/psychological explanation here). This implicit
axiomatic attribution of intelligence to humans helps us to avoid not only
the "other minds" problem, but also the problem of assessing intelligence
despite the effect of what I previously referred to loosely as the "context" of
our observations. In short, we do not really use the Turing test on each
other, because we are all well acquainted with how easily we can be fooled by
contextual traps. Instead, we automatically associate intelligence with human
beings, thereby making our intuitive judgment even less useful to the AI
researcher working with computers or robots.
>As to "context," as I argue in the paper, the only one that is
>ultimately defensible is the "total" turing test, since there is no
>evidence at all that either capacities or contexts are modular. The
>degrees of freedom of a successful total-turing model are then reduced
>to the usual underdetermination of scientific theory by data. (It's always
>possible to carp at a physicist that his theoretic model of the
>universe "is turing-indistinguishable from the real one, but how can
>you be sure it's `really true' of the world?")
>
Wait a minute--You're back to component (i). What you seem to be saying is
that the informal component (component (ii)) has no validity at all apart from
the "context" of having passed component (i). The obvious conclusion is that
component (ii) is superfluous; any system that passes the "total Turing test"
exhibits "human behaviour", and hence must by definition be indistinguishable
from a human to another human.
>> 2) Assuming that some "neutral" context can be found...
>> what does passing (or failing) the Turing test really mean?
>
>It means you've successfully modelled the objective observables under
>investigation. No empirical science can offer more. And the only
>"neutral" context is the total turing test (which, like all inductive
>contexts, always has an open end, namely, the everpresent possibility
>that things could turn out differently tomorrow -- philosophers call
>this "inductive risk," and all empirical inquiry is vulnerable to it).
>
Again, you have all but admitted that the "total" Turing test you have
described has nothing to do with the Turing test at all--it is a set of
"objective observables" which can be verified through scientific examination.
The thoughtful examiner and "comparison human" have been replaced with
controlled scientific experiments and quantifiable results. What kinds of
experiments? What kinds of results? WHAT DOES THE "TOTAL TURING TEST"
LOOK LIKE?
>> 3) ...are there more appropriate means by which we
>> could evaluate the human-like or intelligent properties of an AI
>> system? ...is it possible to formulate the qualities that
>> constitute intelligence in a manner which is more intuitively
>> satisfying than the standard AI stuff about reasoning, but still
>> more rigorous than the Turing test?
>
>I don't think there's anything more rigorous than the total turing
>test since, when formulated in the suitably generalized way I
>describe, it can be seen to be identical to the empirical criterion for
>all of the objective sciences...
>
>Stevan Harnad
>princeton!mind!harnad
One question you haven't addressed is the relationship between intelligence and
"human performance". Are the two synonymous? If so, why bother to make
artificial humans when making natural ones is so much easier (not to mention
more fun)? And if not, how does your "total Turing test" relate to the
discernment of intelligence, as opposed to human-like behaviour?
I know, I know. I ask a lot of questions. Call me nosy.
Daniel R. Simon
"We gotta install database systems
Custom software delivery
We gotta move them accounting programs
We gotta port them all to PC's...."
------------------------------
Date: 14 Oct 86 16:01:44 GMT
From: ssc-vax!bcsaic!michaelm@beaver.cs.washington.edu
Subject: Re: Searle, Turing, Symbols, Categories
In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>...since there is no
>evidence at all that either capacities or contexts are modular.
Maybe I'm reading this out of context (not having read your books or papers),
but could you explain this statement? I know of lots of evidence for the
modularity of various aspects of linguistic behavior. In fact, we have a
parser + grammar of English here that captures a large portion of English
syntax, but has absolutely no semantics (yet). That is, it could parse
Jabberwocky or your article (well, I can't quite claim that it would parse
*all* of either one!) without having the least idea that your article is
meaningful whereas Jabberwocky isn't (apart from an explanation by Humpty
Dumpty). On the other hand, it wouldn't parse something like "book the table
on see I", despite the fact that we might make sense of the latter (because
of our world knowledge). Likewise, human aphasics often show similar deficits
in one or another area of their speech or language understanding. If this
isn't modular, what is? But as I say, maybe I don't understand what you
mean by modular...
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 16 Oct 86 06:17:51 GMT
From: rutgers!princeton!mind!harnad@spam.ISTC.SRI.COM (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In reply to a prior iteration D. Simon writes:
> I fail to see what [your "Total Turing Test"] has to do with
> the Turing test as originally conceived, which involved measuring
> up AI systems against observers' impressions, rather than against
> objective standards... Moreover, you haven't said anything concrete
> about what this test might look like.
How about this for a first approximation: We already know, roughly
speaking, what human beings are able to "do" -- their total cognitive
performance capacity: They can recognize, manipulate, sort, identify and
describe the objects in their environment and they can respond and reply
appropriately to descriptions. Get a robot to do that. When you think
he can do everything you know people can do formally, see whether
people can tell him apart from people informally.
> I believe that people in general dodge the "other minds" problem
> simply by accepting as a convention that human beings are by
> definition intelligent.
That's an artful dodge indeed. And do you think animals also accept such
conventions about one another? Philosophers, at least, seem to
have noticed that there's a bit of a problem there. Looking human
certainly gives us the prima facie benefit of the doubt in many cases,
but so far nature has spared us having to contend with any really
artful imposters. Wait till the robots begin giving our lax informal
turing-testing a run for its money.
> What you seem to be saying is that [what you call]
> the informal component [(ii) of the turing test --
> i.e., indistinguishability from a person, as judged by a
> person] has no validity at all apart from the "context" of
> having passed [your] component (i) [i.e., the generation of
> our total cognitive performance capacity]. The obvious
> conclusion is that component (ii) is superfluous.
It's no more superfluous than, say, the equivalent component in the
design of an artificial music composer. First you get it to perform in
accordance with what you believe to be the formal rules of (diatonic)
composition. Then, when it successfully performs according to the
rules, see whether people like its stuff. People's judgments, after
all, were not only the source of those rules in the first place, but
without the informal aesthetic sense that guided them, the rules would
amount to just that -- meaningless acoustic syntax.
Perhaps another way of putting it is that I doubt that what guides our
informal judgments (and underlies our capacities) can be completely
formalized in advance. The road to Total-Turing Utopia will probably
be a long series of feedback cycles between the formal and informal
components of the test before we ever achieve our final passing grade.
> One question you haven't addressed is the relationship between
> intelligence and "human performance". Are the two synonymous?
> If so, why bother to make artificial humans... And if not, how
> does your "total Turing test" relate to the discernment of
> intelligence, as opposed to human-like behaviour?
Intelligence is what generates human performance. We make artificial
humans to implement and test our theories about the substrate of human
performance capacity. And there's no objective difference between
human and (turing-indistinguishably) human-like.
> WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?... Please
> forgive my impertinent questions, but I haven't read your
> articles, and I'm not exactly clear about what this "total"
> Turing test entails.
Try reading the articles.
******
I will close with an afterthought on "blind" vs. "nonblind" turing
testing that I had after the last iteration:
In the informal component of the total turing test it may be
arguable that a sceptic would give a robot a better run for its money
if he were pre-alerted to the possibility that it was a robot (i.e., if the
test were conducted "nonblind" rather than "blind"). That way the robot
wouldn't be inheriting so much of the a priori benefit of the doubt that
had accrued from our lifetime of successful turing-testing of biological
persons of similar appearance (in our everyday informal solutions to
the "other-minds" problem). The blind/nonblind issue does not seem critical
though, since obviously the turing test is an open-ended one (and
probably also, like all other empirical conjectures, confirmable only
as a matter of degree); so we probably wouldn't want to make up our minds
too hastily in any case. I would say that several years of having lived
amongst us, as in the sci-fi movies, without arousing any suspicions -- and
eliciting only shocked incredulity from its close friends once the truth about
its roots was revealed -- would count as a pretty good outcome on a "blind"
total turing test.
Stevan Harnad
princeton!mind!harnad
------------------------------
End of AIList Digest
********************
∂19-Oct-86 0434 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #227
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Oct 86 04:34:05 PDT
Date: Sat 18 Oct 1986 21:10-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #227
To: AIList@SRI-STRIPE
AIList Digest Sunday, 19 Oct 1986 Volume 4 : Issue 227
Today's Topics:
Philosophy - Searle, Turing
----------------------------------------------------------------------
Date: 16 Oct 86 09:10:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: yet more wrangling on Searle, Turing, ...
> Date: 10 Oct 86 13:47:46 GMT
> From: rutgers!princeton!mind!harnad@think.com (Stevan Harnad)
> Subject: Re: Searle, Turing, Symbols, Categories
>
> It is not always clear which of the two components a sceptic is
> worrying about. It's usually (ii), because who can quarrel with the
> principle that a veridical model should have all of our performance
> capacities? Now the only reply I have for the sceptic about (ii) is
> that he should remember that he has nothing MORE than that to go on in
> the case of any other mind than his own. In other words, there is no
> rational reason for being more sceptical about robots' minds (if we
> can't tell their performance apart from that of people) than about
(other) people's minds.
This just ain't so... if we know, as we surely do, that the internals
of the robot (electronics, metal) are quite different from those
of other passersby (who presumably have regular ole brains), we might
well be more skeptical that robots' "consciousness" is the same as
ours. Briefly, I know:
1. that I have a brain
2. that I am conscious, and what my consciousness feels like
3. that I am capable of certain impressive types of performance,
like holding up my end of an English conversation.
It seems very reasonable to suppose that 3 depends on 2 depends
on 1. But 1 and 3 are objectively ascertainable for others as
well. So if a person has 1 and 3, and a robot has 3 but NOT 1,
I certainly have more reason to believe that the person has 2, than
that the robot does. One (rationally) believes other people are
conscious BOTH because of their performance and because their
internal stuff is a lot like one's own.
I am assuming here that "mind" implies consciousness, ie that you are
not simply defining "mind" as a set of external capabilities. If you
are, then of course, by (poor) definition, only external performance
is relevant. I would assert (and I think you would agree) that to
state "X has a mind" is to imply that X is conscious.
> ....So, since we have absolutely no intuitive idea about the functional
> (symbolic, nonsymbolic, physical, causal) basis of the mind, our only
> nonarbitrary basis for discriminating robots from people remains their
> performance.
Again, we DO have some idea about the functional basis for mind, namely
that it depends on the brain (at least more than on the pancreas, say).
This is not to contend that there might not be other bases, but for
now ALL the minds we know of are brain-based, and it's just not
dazzlingly clear whether this is an incidental fact or somewhat
more deeply entrenched.
> I don't think there's anything more rigorous than the total turing
> test ... Residual doubts about it come from
> four sources, ... (d) misplaced hold-outs for consciousness.
>
> Finally, my reply to (d) [mind bias] is that holding out for
> consciousness is a red herring. Either our functional attempts to
> model performance will indeed "capture" consciousness at some point, or
> they won't. If we do capture it, the only ones that will ever know for
> sure that we've succeeded are our robots. If we don't capture it,
> then we're stuck with a second level of underdetermination -- call it
> "subjective" underdetermination -- to add to our familiar objective
> underdetermination (b)...[i.e.,]
> there may be a further unresolvable uncertainty about whether or not
> they capture the unobservable basis of everything (or anything) that is
> subjectively observable.
>
> AI, robotics and cognitive modeling would do better to learn to live
> with this uncertainty and put it in context, rather than holding out
> for the un-do-able, while there's plenty of the do-able to be done.
>
> Stevan Harnad
> princeton!mind!harnad
I don't quite understand your reply. Why is consciousness a red herring
just because it adds a level of uncertainty?
1. If we suppose, as you do, that consciousness is so slippery that we
will never know more about its basis in humans than we do now, one
might still want to register the fact that our basis for belief in
the consciousness of competent robots is more shaky than for that
in humans. This reservation does not preclude the writing of further
Lisp programs.
2. But it's not obvious to me that we will never know more than we do
now about the relation of brain to consciousness. Even though any
correlations will ultimately be grounded on one side by introspection
reports, it does not follow that we will never know, with reasonable
assurance, which aspects of the brain are necessary for consciousness
and which are incidental. A priori, no one knows whether, eg,
being-composed-of-protein is incidental or not. I believe this is
Searle's point when he says that the brain may be as necessary for
consciousness as mammary glands are for lactation. Now at some level
of difficulty and abstraction, you can always engineer anything with
anything, ie make a computer out of play-doh. But the "multi-
realizability" argument has force only if it's obvious (which it
ain't) that the structure of the brain at a fairly high level (eg
neuron networks, rather than molecules), high enough to be duplicated
by electronics, is what's important for consciousness.
John Cugini <Cugini@NBS-VMS>
------------------------------
Date: 16 Oct 86 07:14:26 PDT (Thursday)
From: "charles_kalish.EdServices"@Xerox.COM
Subject: Turing Test(s?)
Maybe we should start a new mail group where we try to convince each
other that we understand the Turing test; if everybody fails, we go back
to the drawing board and design a new test.
And as the first entry:
In response to Daniel Simon's questioning of the appropriateness of this
test, I think the answer is that the Turing test is acceptable because
that's how we recognize each other as intelligent beings. Usually we
don't do it in a rigorous way because everybody always passes it. But
if I ask you to "please pass the Cheez-whiz" and you respond "Anita
Eckbart is marinating her poodle" then I would get a little suspicious
and ask more questions designed to figure out whether you're joking,
sick, hard of hearing, etc. Depending on your answers I may decide to
downgrade your status to less than full personhood.
About Stevan Harnad's two kinds of Turing tests: I can't really see
what difference the I/O methods of your system makes. It seems that the
relevant issue is what kind of representation of the world it has.
While I agree that to really understand, the system would need some
non-purely-conventional representation (not "semantic", if "semantic" means
"not operable on in a formal way"; since, as I believe [given that the brain
is a physical system], all mental processes are formal, "semantic" just
means "governed by a process we don't understand yet"), giving and getting
through certain kinds of I/O doesn't make much difference. Two for-instances:
SHRDLU operated on a simulated blocks world. The
modifications to make it operate on real blocks would have been
peripheral and would not have affected the understanding of the system. Also,
all systems take analog input and give analog output. Most receive
finger pressure on keys and return directed streams of ink or electrons.
It may be that a robot would need more "immediate" (as opposed to
conventional) representations, but it's neither necessary nor sufficient
to be a robot to have those representations.
P.s. don't ask me to be the moderator for this new group. The turing
test always assumes the moderator has some claim to expertise in the
matter.
------------------------------
Date: 16 Oct 86 17:11:04 GMT
From: eugene@AMES-AURORA.ARPA (Eugene miya)
Subject: Re: Turing, Symbols, Categories
References: <2495@utai.UUCP> <2552@utai.UUCP>
In article <2552@utai.UUCP>, me@utai.UUCP (Daniel Simon) writes:
> In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> >
> >The turing test has two components, (i) a formal, empirical one,
> >and (ii) an informal, intuitive one. The formal empirical component (i)
> >is the requirement that the system being tested be able to generate human
> >performance (be it robotic or linguistic). That's the nontrivial
> >burden that will occupy theorists for at least decades to come, as we
> >converge on (what I've called) the "total" turing test -- a model that
> exhibits all of our robotic and linguistic capacities.
>
> Moreover, you haven't said anything concrete about what this test might look
> like. On what foundation could such a set of defining characteristics for
> "human performance" be based? Would it define those attributes common to all
> human beings? Most human beings? At least one human being? How would we
> decide by what criteria to include observable attributes in our set of "human"
> ones? How could such attributes be described? Is such a set of descriptions
> even feasible? If not, doesn't it call into question the validity of seeking
> to model what cannot be objectively characterized? And if such a set of
> describable attributes is feasible, isn't it an indispensable prerequisite for
> the building of a working Turing-test-passing model?
>
> Again, you have all but admitted that the "total" Turing test you have
> described has nothing to do with the Turing test at all--it is a set of
> "objective observables" which can be verified through scientific examination.
> The thoughtful examiner and "comparison human" have been replaced with
> controlled scientific experiments and quantifiable results. What kinds of
> experiments? What kinds of results? WHAT DOES THE "TOTAL TURING TEST"
> LOOK LIKE?
>
> I know, I know. I ask a lot of questions. Call me nosy.
>
> Daniel R. Simon
Keep asking questions.
1) I deleted your final comment about databases: note EXPERT SYSTEMS
(so-called KNOWLEDGE-BASED SYSTEMS) ARE NOT AI.
2) I've been giving thought to what a `true' Turing test would be
like. I found Turing's original paper in Mind. This is what I have
concluded after about 8 months of light thinking:
a) No single question can answer the question of intelligence; how many,
then? I hope a finite, preferably small, or at least countable, number.
b) The Turing test is what psychologists call a test of `Discrimination.'
These tests should be carefully thought out for pre-test and post-test
experimental conditions (e.g., answers to the current question may or may not
be based on answers to an earlier [not necessarily the immediately
preceding] question).
c) Some of the questions will be confusing, sort of like the more sophisticated
eye tests I just had. Note that we introduce the possibility of calling
some humans "machines."
d) Early questions in the test, in particular those involving quantitative
reasoning, should be timed as well as checked for accuracy. Turing would
want this; it was in his original paper.
e) The test must be prepared for ignorance on the part of humans and machines.
It should not simply take "I don't know," or "Not my taste" for
answers. It should be able to circle in on one's ignorance to
define the boundaries or character of the respondent's ignorance.
f) Turing would want a degree of humor. The humor would be of a more
sophisticated type, like punning or double entendres. Turing would
certainly consider gaming problems.
Turing mentions all these in his paper. Note that some of the original
qualities make AI uneconomical in the short term. Who wants a computer
which makes adding errors? Especially if it's dealing with my paycheck.
I add that
a) We should check for `personal values' and `compassion,' which might be
traits or artifacts of the person or team responsible for the programming.
The test should exploit those areas as possible lines of weakness or strength.
b) The test should have a degree of dynamic problem solving.
c) The test might have characteristics like that test in the film Blade
Runner. Note: memories != intelligence, but the question might be
posed to the respondent in such a way: "Your wife and your daughter
have fallen into the water. You can only save one. Who do you save?
and why?"
d) Consider looking at the WAIS, WISC, the Stanford-Binet, the MMPI
(currently being updated), the Peabody, and numerous other tests of
intelligence and personality, etc. Note there are tests which
distinguish split brain people. They are simple tests. Consider
the color-blindness tests: simple if you are not color blind,
confusing if you are. There is a whole body of psychometric
literature which Turing did not consult.
As you can guess, such a test cannot be easily placed as a sequence on
paper, but as a program in a dumb machine, it is certainly possible.
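[Point (d) above -- timing quantitative answers as well as checking them --
can be sketched in a few lines. The question list and the "respondent" are
my own illustrations; only the first question is from Turing's 1950 paper,
where his imagined machine famously answered it wrongly after a 30-second
pause.]

```python
import time

QUESTIONS = [
    ("What is 34957 + 70764?", "105721"),   # Turing's arithmetic example
    ("What is 17 * 23?", "391"),
]

def administer(ask, questions=QUESTIONS):
    """ask(prompt) -> answer string; return (correct?, seconds) per question."""
    results = []
    for prompt, expected in questions:
        start = time.monotonic()
        answer = ask(prompt)
        elapsed = time.monotonic() - start
        results.append((answer.strip() == expected, elapsed))
    return results

# A suspiciously fast and accurate respondent -- exactly the profile a
# discrimination test should notice, since humans pause and err:
def instant_adder(prompt):
    return str(eval(prompt.removeprefix("What is ").rstrip("?")))

for correct, secs in administer(instant_adder):
    print(correct, f"{secs:.3f}s")
```

A machine trying to pass would presumably have to fake both the pause and the occasional arithmetic slip.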
As a last thought: the paper in Mind was published in 1950. Turing
made a comment about "computers with the capacity of a billion [a
billion of what, he did not say]," and the "turn of the Century." I suggested
to Doug Hofstadter (visiting here one day) that we hold a 50th-anniversary
celebration of the publication of Turing's paper in the year 2000, and he agreed.
From the Rock of Ages Home for Retired Hackers:
--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
{hplabs,hao,nike,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene
I need a turing machine to route my mail.
------------------------------
End of AIList Digest
********************
∂19-Oct-86 0624 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #228
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Oct 86 06:23:57 PDT
Date: Sat 18 Oct 1986 21:21-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #228
To: AIList@SRI-STRIPE
AIList Digest Sunday, 19 Oct 1986 Volume 4 : Issue 228
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 16 Oct 86 17:25:42 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In reply to the following by me in <167@mind.UUCP>:
> there is no evidence at all that
> either capacities or contexts are modular.
michaelm@bcsaic.UUCP (michael maxwell) writes:
>> Maybe I'm reading this out of context (not having read your books or papers),
>> but could you explain this statement? I know of lots of evidence for the
>> modularity of various aspects of linguistic behavior. In fact, we have a
>> parser + grammar of English here that captures a large portion of English
>> syntax, but has absolutely no semantics (yet).
I'm afraid this extract is indeed a bit out of context. The original
context concerned what I've dubbed the "Total Turing Test," one
in which ALL of our performance capacities -- robotic and linguistic --
are "captured." In the papers under discussion I described several
arguments in favor of the Total Turing Test over any partial
turing test, such as "toy" models that only simulate a small
chunk of our cognitive performance capacity, or even the (subtotal)
linguistic ("teletype") version of the Total Turing Test. These
arguments included:
(3) The "Convergence Argument" that `toy' problems are arbitrary,
that they have too many degrees of freedom, that the d.f. shrink as the
capacities of the toy grow to life-size, and that the only version that
reduces the underdetermination to the normal proportions of a
scientific theory is the `Total' one.
(5) The "Nonmodularity Argument" that no subtotal model constitutes a
natural module (insofar as the turing test is concerned); the only
natural autonomous modules are other organisms, with their complete
robotic capacities (more of this below).
(7) The "Robotic Functionalist Argument" that the entire symbolic
functional level is no macromodule either, and needs to be grounded
in robotic function.
I happen to have views on the "autonomy of syntax" (which is of
course the grand-daddy of the current modulo-mania), but they're not
really pertinent to the total vs modular turing-test issue. Perhaps
the only point about an autonomous parser that is relevant here is
that it is in the nature of the informal, intuitive component of the
turing test that lifeless fragments of mimicry (such as Searle's isolated
`thirst' module) are not viable; they simply fail to convince us of
anything. And rightly so, I should think; otherwise the turing test
would be a pretty flimsy one.
Let me add, though, that even "convincing" autonomous parsing performance
(in the non-turing sense of convincing) seems to me to be rather weak
evidence for the psychological reality of a syntactic module -- let
alone that it has a mind. (On my theory, semantic performance has to be
grounded in robotic performance and syntactic performance must in turn
be grounded in semantic performance.)
Stevan Harnad
(princeton!mind!harnad)
------------------------------
Date: Thu 16 Oct 86 17:55:00-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: symbols
Stevan Harnad has answered Drew Lawson nicely, but I can't help adding this
thought: if he saw a symbol of a car coming and DIDN'T get out of the way, would
the resulting change of his state be a purely symbolic one?
Pat Hayes
------------------------------
Date: 17 Oct 1986 1329-EDT
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: symbols: syntax vs semantics
i think that the main thing i disagree with about Searle's work
and recent points in this discussion is the claim that symbols,
and in general any entity that a computer will process, can only
be dealt with in terms of syntax. i disagree. for example, when
i add two integers, the bits that the integers are encoded in are
interpreted semantically to combine to form an integer. the same
could be said about a symbol that i pass to a routine in an
object-oriented system such as CLU, where what is done with
the symbol depends on its type (which i claim is its semantics).
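[a concrete (if anachronistic) illustration of this point -- my example,
not krulwich's: one and the same bit pattern yields different results
depending on the type under which a routine reads it, which is the sense
in which the type supplies a semantics.]

```python
import struct

bits = struct.pack(">I", 0x42280000)     # one 32-bit pattern

as_int = struct.unpack(">i", bits)[0]    # read as a signed integer
as_float = struct.unpack(">f", bits)[0]  # read as an IEEE-754 float

print(as_int)     # 1109917696
print(as_float)   # 42.0

# the syntax (the bits) is identical; what is done with them depends
# on the declared type.
```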
i think that the reason that computers are so far behind the
human brain in semantic interpretation and in general "thinking"
is that the brain contains a hell of a lot more information
than most computer systems, and also the brain makes associations
much faster, so an object (ie, a thought) is associated with
its semantics almost instantly.
bruce krulwich
arpa: krulwich@c.cs.cmu.edu
bitnet: bk0a%tc.cc.cmu.edu@cmuccvma
uucp: (??) ... uw-beaver!krulwich@c.cs.cmu.edu
or ... ucbvax!krulwich@c.cs.cmu.edu
"Life's too short to ponder garbage"
------------------------------
Date: Fri 17 Oct 86 10:04:51-PDT
From: Pat Hayes <PHayes@SRI-KL.ARPA>
Subject: turing test
Daniel R. Simon has worries about the Turing test. A good place to find
intelligent discussion of these issues is Turings original article in MIND,
October 1950, v.59, pages 433 to 460.
Pat Hayes
PHAYES@SRI-KL
------------------------------
Date: 14 Oct 86 21:20:53 GMT
From: adelie!axiom!linus!philabs!pwa-b!mmintl!franka@ll-xn.arpa (Frank Adams)
Subject: Re: Searle, Turing, Symbols, Categories
In article <166@mind.UUCP> harnad@mind.UUCP writes:
>What I mean by a symbol is an
>arbitrary formal token, physically instantiated in some way (e.g., as
>a mark on a piece of paper or the state of a 0/1 circuit in a
>machine) and manipulated according to certain formal rules. The
>critical thing is that the rules are syntactic, that is, the symbol is
>manipulated on the basis of its shape only -- which is arbitrary,
>apart from the role it plays in the formal conventions of the syntax
>in question. The symbol is not manipulated in virtue of its "meaning."
>Its meaning is simply an interpretation we attach to the formal
>goings-on. Nor is it manipulated in virtue of a relation of
>resemblance to whatever "objects" it may stand for in the outside
>world, or in virtue of any causal connection with them. Those
>relations are likewise mediated only by our interpretations.
I see two problems with respect to this viewpoint. One is that relating
purely symbolic functions to external events is essentially a solved
problem. Digital audio recording, for example, works quite well. Robotic
operations generally fail, when they do, not because of any problems with
the digital control of an analog process, but because the purely symbolic
portion of the process is inadequate. In other words, there is every reason
to expect that a computer program able to pass the Turing test could be
extended to one able to pass the robotic version of the Turing test,
requiring additional development effort which is tiny by comparison (though
likely still measured in man-years).
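[A minimal sketch of the "solved problem" being appealed to here -- my own
illustration, with ordinary digital-audio-style parameters chosen for
convenience: sample and uniformly quantize an analog signal, reconstruct
it, and the round-trip error is bounded by half a quantization step.]

```python
import math

RATE = 8000                      # samples per second
BITS = 16                        # quantization depth
FULL_SCALE = 2 ** (BITS - 1)     # 32768

def sample(signal, seconds, rate=RATE):
    """A/D: sample a continuous signal in [-1, 1] and quantize to integers."""
    n = int(seconds * rate)
    return [round(signal(t / rate) * (FULL_SCALE - 1)) for t in range(n)]

def reconstruct(codes):
    """D/A: map integer codes back to amplitudes in [-1, 1]."""
    return [c / (FULL_SCALE - 1) for c in codes]

tone = lambda t: math.sin(2 * math.pi * 440 * t)   # a 440 Hz sine "signal"
codes = sample(tone, 0.01)
out = reconstruct(codes)

# Worst-case round-trip error is half a quantization step:
err = max(abs(tone(i / RATE) - v) for i, v in enumerate(out))
print(err < 1 / FULL_SCALE)   # True
```

Whether this engineering fact settles anything about *minds* is, of course, exactly what Harnad disputes below.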
Secondly, even in a purely formal environment, there turn out to be a lot of
real things to talk about. Primitive concepts of time (before and after)
are understandable. One can talk about nouns and verbs, sentences and
conversations, self and other. I don't see any fundamental difference
between the ability to deal with symbols as real objects, and the ability to
deal with other kinds of real objects.
Frank Adams ihnp4!philabs!pwa-b!mmintl!franka
Multimate International 52 Oakland Ave North E. Hartford, CT 06108
------------------------------
Date: 17 Oct 86 19:35:51 GMT
From: adobe!greid@glacier.stanford.edu
Subject: Re: Searle, Turing, Symbols, Categories
It seems to me that the idea of concocting a universal Turing test is sort
of useless.
Consider, for a moment, monsters. There have been countless monsters on TV
and film that have had varying degrees of human-ness, and as we watch the
plot progress, we are sort of administering the Turing test. Some of the
better aliens, like in "Blade Runner", are very difficult to detect as being
non-human. However, given enough time, we will eventually notice that they
don't sleep, or that they drink motor oil, or that they don't bleed when
they are cut (think of "Terminator" and surgery for a minute), and we start
to think of alternative explanations for the aberrances we have noticed. If
we are watching TV, we figure it is a monster. If we are walking down the
street and we see somebody get their arm cut off and they don't bleed, we
think *we* are crazy (or we suspect "special effects" and start looking for
the movie camera), because there is no other plausible explanation.
There are even human beings whom we question when one of our subconscious
"tests" fails--like language barriers, brain damage, etc. If you think
about it, there are lots of human beings who would not pass the Turing test.
Let's forget about it.
Glenn Reid
Adobe Systems
Adobe claims no knowledge of anything in this message.
------------------------------
Date: 18 Oct 86 15:16:14 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In response to some of the arguments in favor of the robotic over the
symbolic version of the turing test in (the summaries of) my articles
"Minds, Machines and Searle" and "Category Induction and Representation"
franka@mmintl.UUCP (Frank Adams) replies:
> [R]elating purely symbolic functions to external events is
> essentially a solved problem. Digital audio recording, for
> example, works quite well. Robotic operations generally fail,
> when they do, not because of any problems with the digital
> control of an analog process, but because the purely symbolic
> portion of the process is inadequate. In other words, there is
> every reason to expect that a computer program able to pass the
> [linguistic version of the] Turing test could be extended to one
> able to pass the robotic version...requiring additional development
> effort which is tiny by comparison (though likely still measured
> in man-years).
This argument has become quite familiar to me from delivering the oral
version of the papers under discussion. It is the "Triviality of
Transduction [A/D conversion, D/A conversion, Effectors] Argument" (TT
for short).
Among my replies to TT the central one is the principled
Antimodularity Argument: There are reasons to believe that the neat
partitioning of function into autonomous symbolic and nonsymbolic modules
may break down in the special case of mind modeling. These reasons
include my "Groundedness" Argument: that unless cognitive symbols are
grounded (psychophysically, bottom-up) in nonsymbolic processes they remain
meaningless. (This amounts to saying that we must be intrinsically
"dedicated" devices and that our A/D and our "decryption/encryptions"
are nontrivial; in passing, this is also a reply to Searle's worries
about "intrinsic" versus "derived" intentionality. It may also be the
real reason why "the purely symbolic portion of the process is inadequate"!)
This problem of grounding symbolic processes in nonsymbolic ones in the
special case of cognition is also the motivation for the material on category
representation.
Apart from nonmodularity and groundedness, other reasons include:
(1) Searle's argument itself, and the fact that only the transduction
argument can block it; that's some prima facie ground for believing
that the TT may be false in the special case of mind-modeling.
(2) The triviality of ordinary (nonbiological) transduction and its
capabilities, compared to what organisms with senses (and minds) can
do. (Compare the I/O capacities of "audio" devices with those of
"auditory" ones; the nonmodular road to the capacity to pass the total
turing test suggests that we are talking here about qualitative
differences, not quantitative ones.)
(3) Induction (both ontogenetic and phylogenetic) and inductive capacity
play an intrinsic and nontrivial role in bio-transduction that they do
not play in ordinary engineering peripherals, or the kinds of I/O
problems these have been designed for.
(4) Related to the Simulation/Implementation Argument: There are always
more real-world contingencies than can be anticipated in a symbolic
description or simulation. That's why category representations are
approximate and the turing test is open-ended.
For all these reasons, I believe that Object/Symbol conversion in
cognition is a considerably more profound problem than ordinary A/D;
orders of magnitude more profound, in fact, and hence that TT is
false.
> [E]ven in a purely formal environment, there turn out to be a
> lot of real things to talk about. Primitive concepts of time
> (before and after) are understandable. One can talk about nouns
> and verbs, sentences and conversations, self and other. I don't
> see any fundamental difference between the ability to deal with
> symbols as real objects, and the ability to deal with other kinds
> of real objects.
I don't completely understand the assumptions being made here. (What
is a "purely formal environment"? Does anyone you know live in one?)
Filling in with some educated guesses here, I would say that again the
Object/Symbol conversion problem in the special case of organisms'
mental capacities is being vastly underestimated. Object-manipulation
(including discrimination, categorization, identification and
description) is not a mere special case of symbol-manipulation or
vice-versa. One must be grounded in the other in a principled way, and
the principles are not yet known.
On another interpretation, perhaps you are talking about "deixis" --
the necessity, even in the linguistic (symbolic) version of the turing
test, to be able to refer to real objects in the here-and-now. I agree that
this is a deep problem, and conjecture that its solution in the
symbolic version will have to draw on anterior nonsymbolic (i.e.,
robotic) capacities.
Stevan Harnad
princeton!mind!harnad
------------------------------
End of AIList Digest
********************
∂23-Oct-86 0121 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #229
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 23 Oct 86 01:21:45 PDT
Date: Wed 22 Oct 1986 22:33-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #229
To: AIList@SRI-STRIPE
AIList Digest Thursday, 23 Oct 1986 Volume 4 : Issue 229
Today's Topics:
Seminars - Toward a Learning Robot (CMU) &
Implementing Scheme on a Personal Computer (SMU) &
Functional Representations in Knowledge Programming (UTexas) &
The Hypotheses Underlying Connectionism (UCB) &
Advances in Computational Robotics (CMU) &
More Agents are Better Than One (SU) &
Automatic Schematics Drafting (CMU) &
Learning by Failing to Explain (MIT)
----------------------------------------------------------------------
Date: 15 October 1986 1505-EDT
From: Elaine Atkinson@A.CS.CMU.EDU
Subject: Seminar - Toward a Learning Robot (CMU)
SPEAKER: Tom Mitchell, CMU, CS Department
TITLE: "Toward a Learning Robot"
DATE: Thursday, October 16
TIME: 4:00 p.m.
PLACE: Adamson Wing, Baker Hall
ABSTRACT: Consider the problem of constructing a learning robot; that
is, a system that interfaces to some environment via a set of sensors
and effectors, and which builds up a theory of its environment in order
to control the environment in accordance with its goals. One
instantiation of this problem is to construct a hand-eye system that
can learn to manipulate a collection of blocks and to build simple
structures from these blocks.
We are starting a new research project to develop such a learning
robot, and this talk will present some preliminary ideas about how
to proceed. The talk will consider a number of questions, such as:
What general cognitive architecture seems reasonable? What kinds
of knowledge must such a robot learn? How should this knowledge
be represented? How will it learn? How can the robot solve
problems with only an incomplete understanding of its world? Can
it use sensory feedback to make up for ambiguity in its world
theory? There will probably be more questions than answers,
so please bring your own.
------------------------------
Date: WED, 20 apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Implementing Scheme on a Personal Computer (SMU)
Implementing Scheme on a Personal Computer
Speaker: David Bartley Location: 315 SIC
Texas Instruments Time: 2:00 PM
PC Scheme is an implementation of the Scheme language, a lexically scoped,
applicative order, and properly tail-recursive dialect of LISP. PC Scheme was
implemented for IBM and TI personal computers within the Symbolic Computing
Laboratory at Texas Instruments. The presentation will examine some of the
pragmatic aspects of developing a production-quality LISP system for small
machines. These include: compilation vs. interpretation, using a byte-threaded
virtual machine for compact code, the architecture of the virtual machine,
runtime representation issues, compiler design, debugging issues, and
performance. Some significant differences between LISP and conventional
language implementations will be highlighted.
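[The byte-threaded virtual machine mentioned in the abstract can be
illustrated with a toy dispatch loop. This is a minimal sketch in
modern Python, not PC Scheme's actual instruction set: the opcodes,
their encoding, and the handler logic are invented for illustration.

```python
# Toy byte-coded stack machine: each instruction is one byte that
# selects a handler, so compiled code is very compact (the "compact
# code" trade-off mentioned in the abstract). Opcodes are invented.
PUSH, ADD, MUL, HALT = 0, 1, 2, 3

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]
        pc += 1
        if op == PUSH:                  # literal operand byte follows
            stack.append(code[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# The Scheme expression (* (+ 2 3) 4) might compile to nine bytes:
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
```

run(program) evaluates the compiled expression; a real byte-threaded
VM would jump through a table of machine-code handlers rather than an
if/elif chain. -- Ed.]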
------------------------------
Date: Fri 17 Oct 86 12:56:46-CDT
From: Ellie Huck <AI.ELLIE@MCC.COM>
Subject: Seminar - Functional Representations in Knowledge Programming (UTexas)
Please join the AI Group for the following talk October 22 at 11:00am
in the Balcones 4th Floor Conference Room 4.302:
KNOWLEDGE PROGRAMMING USING FUNCTIONAL REPRESENTATIONS
Peter E. Hart
Syntelligence
SYNTEL is a novel knowledge representation language that provides
traditional features of expert system shells within a pure functional
programming paradigm. However, it differs sharply from existing
functional languages in many ways, ranging from its ability to deal
with uncertainty to its evaluation procedures. A very flexible
user-interface facility, tightly integrated with the SYNTEL
interpreter, gives the knowledge engineer full control over both form
and content of the end-user system. SYNTEL executes in both LISP
machine and IBM mainframe/workstation environments, and has been used
to develop large knowledge bases dealing with the assessment of
financial risks. This talk will present an overview of its
architecture, as well as describe the real-world problems that
motivated its development.
October 22, 1986
11:00am
Balcones Room 4.302
------------------------------
Date: Mon, 20 Oct 86 10:30:58 PDT
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science Program)
Subject: Seminar - The Hypotheses Underlying Connectionism (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237A
Tuesday, October 28, 11:00 - 12:30*
2515 Tolman Hall
Discussion: 12:30 - 1:30
2515 Tolman Hall
``The Hypotheses Underlying Connectionism''
Paul Smolensky
Department of Computer Science & Institute of Cognitive Science
University of Colorado at Boulder
Cognitive models using massively parallel, nonsymbolic computation
have now been developed for a considerable variety of cognitive
processes. What are the essential hypotheses underlying these
connectionist models? A satisfactory formulation of these hy-
potheses must handle a number of attacks:
-Nothing really new can be offered since Turing machines are universal
-Connectionism just offers implementation details
-Conscious, rule-guided behavior is ignored
-The wrong kind of explanations are given for behavior
-The models are too neurally unfaithful
-Logic, rationality, and the structure of mental states are ignored
-Useful AI concepts like frames and productions are ignored.
First, an introduction to connectionist models, describing the kind
of computation they use, will be presented; then a general
connectionist approach that faces the challenges listed above will
be introduced.
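[As a concrete illustration of the kind of nonsymbolic, parallel
computation at issue, here is a small Hopfield-style pattern completer.
It is a sketch in Python, not drawn from the talk; the network size,
the single stored pattern, and the update schedule are arbitrary
choices for illustration.

```python
# A tiny connectionist computation: store one bipolar (+1/-1) pattern
# in symmetric Hebbian weights, then recover it from a corrupted cue
# by repeated local unit updates. No symbols or rules are involved;
# the "knowledge" lives entirely in the weights.

def train(patterns, n):
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:              # no self-connections
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, sweeps=5):
    state = list(cue)
    n = len(state)
    for _ in range(sweeps):
        for i in range(n):              # each unit updates from its neighbors
            field = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
weights = train([pattern], len(pattern))
noisy = list(pattern)
noisy[0] = -noisy[0]                    # corrupt one unit
restored = recall(weights, noisy)       # settles back to the stored pattern
```

Every unit computes only a weighted sum of its neighbors, yet the
network as a whole completes the pattern. -- Ed.]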
------------------------------
Date: 20 October 1986 1309-EDT
From: Richard Wallstein@A.CS.CMU.EDU
Subject: Seminar - Advances in Computational Robotics (CMU)
Robotics Seminar, FRIDAY Oct. 24, 2 PM, 4623 WeH
John H. Reif
Computer Science Department
Duke University
ADVANCES IN THE THEORY OF COMPUTATIONAL ROBOTICS
This talk surveys work on the computational complexity of various movement
planning problems relevant to robotics. The generalized mover's problem is to
plan a sequence of movements of linked polyhedra through 3-dimensional
Euclidean space, avoiding contact with a fixed set of polyhedra obstacles.
We discuss algorithms for solving restricted mover's problems and our proof
that generalized mover's problems are polynomial-space hard.
We also discuss our results on the computational complexity (both algorithms
and lower bounds) of three other quite different types of movement problems:
1. movement planning in the presence of friction;
2. minimal movement planning;
3. dynamic movement planning with moving obstacles.
------------------------------
Date: 20 Oct 86 1141 PDT
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - More Agents are Better Than One (SU)
MORE AGENTS ARE BETTER THAN ONE
Michael Georgeff
Artificial Intelligence Center
SRI International
Thursday, October 23, 4pm
MJH 252
A recent paper by Steve Hanks and Drew McDermott shows how some
previous "solutions" to the frame problem turn out to be inadequate,
despite appearances otherwise. They use a simple example -- which has
come to be called the "Yale Shooting Problem" -- for which it is impossible to
derive some expected results -- in this case, that the target of a
shooting event ceases living. Such difficulties, they suggest, call
into question the utility of nonmonotonic logics for solving the frame
problem.
In this talk, we describe a theory of action suited to multiagent
domains, and show how this formulation avoids the problems raised by
Hanks and McDermott. In particular, we show how the Yale Shooting
Problem can be solved using a generalized form of the situation
calculus for multiagent domains, together with notions of causality
and independence. The solution does not rely on complex
generalizations of nonmonotonic logics or circumscription, but instead
uses traditional circumscription. We will also argue that most
problems traditionally viewed as involving a single agent are better
formulated as multiagent problems, and that the frame problem, as
usually posed, is not what we should be attempting to solve.
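[For readers who have not seen the Hanks-McDermott example, the anomaly
can be reproduced by brute force. The sketch below is a simplification,
not from the talk: minimizing fluent changes stands in for
circumscription, and the encoding of the load/wait/shoot scenario is
invented for illustration.

```python
from itertools import product

# Yale Shooting scenario: load at t=0, wait at t=1, shoot at t=2.
# A "model" assigns truth values to the fluents loaded/alive at times
# 0..3, subject to the initial state and the effect axioms; its "ab"
# set records which fluents change between successive times.
def models():
    for loaded in product([False, True], repeat=4):
        for alive in product([False, True], repeat=4):
            if not alive[0] or loaded[0]:
                continue                      # initially alive, unloaded
            if not loaded[1]:
                continue                      # effect of load
            if loaded[2] and alive[3]:
                continue                      # effect of shoot
            ab = {("loaded", t) for t in range(3) if loaded[t] != loaded[t + 1]}
            ab |= {("alive", t) for t in range(3) if alive[t] != alive[t + 1]}
            yield loaded, alive, ab

def minimal_models():
    # Keep models whose change set is inclusion-minimal -- a crude
    # stand-in for circumscribing abnormality.
    ms = list(models())
    return [m for m in ms if not any(ab2 < m[2] for (_, _, ab2) in ms)]

mins = minimal_models()
# The intended model (the target dies) survives minimization...
intended = [m for m in mins if not m[1][3]]
# ...but so does the anomalous one, in which the gun "unloads" during
# the wait, so the expected conclusion cannot be derived.
anomalous = [m for m in mins if m[1][3] and not m[0][2]]
```

Both lists come out nonempty: minimization alone cannot settle whether
the victim is alive at t=3, which is exactly the difficulty Hanks and
McDermott raise. -- Ed.]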
------------------------------
Date: 20 Oct 86 15:23:35 EDT
From: Steven.Minton@k.cs.cmu.edu
Subject: Seminar - Automatic Schematics Drafting (CMU)
This week's seminar is being led by Raul Valdes-Perez. Friday, 3:15
in 7220. Be there. Here's the abstract:
Title: "Automatic Schematics Drafting: Aesthetic Configuration
as a Design Task".
To draft a schematic means to depict (say on paper) the
electrical connections and function of a circuit.
Aspects of this work are the following:
1. A design task that uses other than a production-system architecture.
2. An approach to "space planning" that is modern in the sense of exploiting
dependency-directed backtracking and constraint-posting.
3. The idea of contradicton-fixing rules that exploit the richness of
information when an inconsistency occurs.
4. Study of a linear-inequality-based representation of partial task
solutions, and the properties of this representation.
5. A backtracking scheme suited to the search regimen used.
------------------------------
Date: Tue, 21 Oct 1986 12:05 EDT
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Learning by Failing to Explain (MIT)
LEARNING BY FAILING TO EXPLAIN
Robert Joseph Hall
MIT Artificial Intelligence Laboratory
Explanation-based Generalization depends on having an explanation on
which to base generalization. Thus, a system with an incomplete or
intractable explanatory mechanism will not be able to generalize some
examples. It is not necessary, in those cases, to give up and resort
to purely empirical generalization methods, because the system may
already know almost everything it needs to explain the precedent.
Learning by Failing to Explain is a method which exploits current
knowledge to prune complex precedents and rules, isolating their
mysterious parts. This paper describes two techniques for Learning by
Failing to Explain: Precedent Analysis, partial analysis of a
precedent or rule to isolate the mysterious new technique(s) it
embodies; and Rule Re-analysis, re-analyzing old rules in terms of new
rules to obtain a more general set.
Thursday, October 23, 4pm
NE-43, 8th floor playroom
------------------------------
End of AIList Digest
********************
∂23-Oct-86 0423 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #230
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 23 Oct 86 04:22:53 PDT
Date: Wed 22 Oct 1986 22:39-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #230
To: AIList@SRI-STRIPE
AIList Digest Thursday, 23 Oct 1986 Volume 4 : Issue 230
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories &
Reflexes as a Test of Self
----------------------------------------------------------------------
Date: 19 Oct 86 02:30:24 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
greid@adobe.UUCP (Glenn Reid) writes:
> [C]oncocting a universal Turing test is sort of useless... There
> have been countless monsters on TV...[with] varying degrees of
> human-ness...Some...very difficult to detect as being non-human.
> However, given enough time, we will eventually notice that they
> don't sleep, or that they drink motor oil...
The objective of the turing test is to judge whether the candidate
has a mind, not whether it is human or drinks motor oil. We must
accordingly consult our intuitions as to what differences are and are
not relevant to such a judgment. [Higher animals, for example, have no
trouble at all passing (the animal version of) the turing test as far
as I'm concerned. Why should aliens, monsters or robots, if they have what
it takes in the relevant respects? As I have argued before, turing-testing
for relevant likeness is really our only way of contending with the
"other-minds" problem.]
> [T]here are lots of human beings who would not pass the Turing
> test [because of brain damage, etc.].
And some of them may not have minds. But we give them the benefit of
the doubt for humanitarian reasons anyway.
Stevan Harnad
(princeton!mind!harnad)
------------------------------
Date: 19 Oct 86 14:59:49 GMT
From: clyde!watmath!watnot!watdragon!rggoebel@caip.rutgers.edu
(Randy Goebel LPAIG)
Subject: Re: Searle, Turing, Symbols, Categories
Stevan Harnad writes:
> ...The objective of the turing test is to judge whether the candidate
> has a mind, not whether it is human or drinks motor oil.
This stuff is getting silly. I doubt that it is possible to test whether
something has a mind, unless you provide a definition of what you believe
a mind is. Turing's test wasn't a test for whether or not some artificial
or natural entity had a mind. It was his prescription for an evaluation of
intelligence.
------------------------------
Date: 20 Oct 86 14:59:30 GMT
From: rutgers!princeton!mind!harnad@Zarathustra.Think.COM (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies:
> I doubt that it is possible to test whether something has a mind,
> unless you provide a definition of what you believe a mind is.
> Turing's test wasn't a test for whether or not some artificial
> or natural entity had a mind. It was his prescription for an
> evaluation of intelligence.
And what do you think "having intelligence" is? Turing's criterion
effectively made it: having performance capacity that is indistinguishable
from human performance capacity. And that's all "having a mind"
amounts to (by this objective criterion). There's no "definition" in
any of this, by the way. We'll have definitions AFTER we have the
functional answers about what sorts of devices can and cannot do what
sorts of things, and how and why. For the time being all you have is a
positive phenomenon -- having a mind, having intelligence -- and
an objective and intuitive criterion for inferring its presence in any
other case than one's own. (In your own case you presumably know what
it's like to have-a-mind/have-intelligence on subjective grounds.)
Stevan Harnad
princeton!mind!harnad
------------------------------
Date: 21 Oct 86 20:53:49 GMT
From: uwslh!lishka@rsch.wisc.edu (a)
Subject: Re: Searle, Turing, Symbols, Categories
In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>rggoebel@watdragon.UUCP (Randy Goebel LPAIG) replies:
>
>> I doubt that it is possible to test whether something has a mind,
>> unless you provide a definition of what you believe a mind is.
>> Turing's test wasn't a test for whether or not some artificial
>> or natural entity had a mind. It was his prescription for an
>> evaluation of intelligence.
>
>And what do you think "having intelligence" is? Turing's criterion
>effectively made it: having performance capacity that is indistinguishable
>from human performance capacity. And that's all "having a mind"
>amounts to (by this objective criterion). There's no "definition" in
>any of this, by the way. We'll have definitions AFTER we have the
>functional answers about what sorts of devices can and cannot do what
>sorts of things, and how and why. For the time being all you have is a
>positive phenomenon -- having a mind, having intelligence -- and
>an objective and intuitive criterion for inferring its presence in any
>other case than one's own. (In your own case you presumably know what
>it's like to have-a-mind/have-intelligence on subjective grounds.)
>
>Stevan Harnad
How does one go about testing for something when one does not know
what that something is? My basic problem with all this is the two
keywords 'mind' and 'intelligence'. I don't think that what S. Harnad
is talking about when referring to 'mind' and 'intelligence' is what
I believe 'mind' and 'intelligence' to be, and I presume others are having
this problem (see first article above).
I think a fair example is trying to 'test' for UFO's. How does one
do this if (a) we don't know what they are and (b) we don't really know if
they exist (is it the same thing with magnetic monopoles?). What are we really
testing for in the case of UFO's? I think this answer is a little more
clear than for 'mind', because people generally seem to have an idea of
what a UFO is (an Unidentified Flying Object). Therefore, the minute we
come across something really strange that falls from the sky and can in
no way be identified, we label it a UFO (and then try to explain it somehow).
However, until this happens (and whether this has already happened depends
on what you believe) we can't test specifically for UFO's [at least from
how I look at it].
How then does one test for 'mind' or 'intelligence'? These
definitions are even less clear. Ask a particular scientist what he thinks
is 'mind' and 'intelligence', and then ask another. Chances are that their
definitions will be different. Now ask a Christian and a Buddhist. These
answers will be even more different. However, I don't think any one will
be more valid than the other. Now, if one is to define 'mind' before
testing for it, then everyone will have a pretty good idea of what he was
testing for. But if one refuses to define it, there are going to be a
h*ll of a lot of arguments (as it seems there already have been in this
discussion). The same works for intelligence.
I honestly don't see how one can apply the Total Turing Test,
because the minute one finds a fault, the test has failed. In fact, even
if the person who created the 'robot' realizes somehow that his creation
is different, then for me the test fails. But this has all been discussed
before. However, trying to use 'intelligence' or having a 'mind' as one
of the criteria for this test when one expects to arrive at a useful
definition "along the way" seems to be sort of silly (from my point of
view).
I speak only for myself. I do think, though, that the above reasons
have contributed to what has become more a fight of basic beliefs than
anything else. I will also add my vote that this discussion move away from
'the Total Turing Test' and continue on to something a little less "talked
into the dirt".
Chris Lishka
Wisconsin State Lab of Hygiene
[qualifier: nothing above reflects the views of my employers,
although my pets may be in agreement with these views]
------------------------------
Date: 22 Oct 86 04:29:21 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
lishka@uwslh.UUCP (Chris Lishka) asks:
> How does one go about testing for something when one does not know
> what that something is? My basic problem with all this
> [discussion about the Total Turing Test] is the two
> keywords 'mind' and 'intelligence'. I don't think that what S. Harnad
> is talking about when referring to 'mind' and 'intelligence' is what
> I believe 'mind' and 'intelligence' to be, and I presume others are
> having this problem...
You bet others are having this problem. It's called the "other minds"
problem: How can you know whether anyone/anything else but you has a mind?
> Now, if one is to define 'mind' before testing for it, then
> everyone will have a pretty good idea of what he was testing for.
What makes people think that the other-minds problem will be solved or
simplified by definitions? Do you need a definition to know whether
YOU have a mind or intelligence? Well then take the (undefined)
phenomenon that you know is true of you to be what you're trying to
ascertain about robots (and other people). What's at issue here is not the
"definition" of what that phenomenon is, but whether the Total Turing
Test is the appropriate criterion for inferring its presence in entities
other than yourself.
[I don't believe, by the way, that empirical science or even
mathematics proceeds "definition-first." First you test for the
presence and boundary conditions of a phenomenon (or, in mathematics,
you test whether a conjecture is true), then you construct and test
a causal explanation (or, in mathematics, you do a formal proof), THEN
you provide a definition, which usually depends heavily on the nature
of the explanatory theory (or proof) you've come up with.]
Stevan Harnad
princeton!mind!harnad
------------------------------
Date: 20 Oct 86 18:00:11 GMT
From: ubc-vision!ubc-cs!andrews@BEAVER.CS.WASHINGTON.EDU
Subject: Re: A pure conjecture on the nature of the self
In article <11786@glacier.ARPA> jbn@glacier.ARPA (John B. Nagle) writes:
>... The reflexes behind tickling
>seem to be connected to something that has a good way of deciding
>what is self and what isn't.
I would suspect it has more to do with "predictability" -- you
can predict, in some sense, where you feel tickling, therefore you
don't feel it in the same way. It's similar to the blinking "reflex"
to a looming object: if the looming object is someone else's hand
you blink; if it's your own hand you don't.
The predictability may come from a sense of self, but I think
it's more likely to come from the fact that you're fully aware of
what is going to happen next when it's your own movements giving
the stimulus.
--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"Now it's dark"
------------------------------
End of AIList Digest
********************
∂23-Oct-86 0713 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #231
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 23 Oct 86 07:12:56 PDT
Date: Wed 22 Oct 1986 22:45-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #231
To: AIList@SRI-STRIPE
AIList Digest Thursday, 23 Oct 1986 Volume 4 : Issue 231
Today's Topics:
Queries - Clinical Neuropsychological Assessment &
Robot Snooker-Player & HITECH Chess Machine & OOP in AI &
PROLOG on IBM MVS & Computing in Publishing &
Analog/Digital Distinction & Turing on Stage &
Criteria for Expert System Applications
----------------------------------------------------------------------
Date: 19 Oct 86 22:40:12 GMT
From: gknight@ngp.utexas.edu
Subject: Clinical neuropsychological assessment
I'm renewing an inquiry I made several weeks ago. I appreciate all
the responses I received -- and those of you who did reply don't have to do so
again, obviously.
But if there is anyone out there who didn't see or didn't respond to my
earlier posting . . .
I'm working on (1) a literature review of computer
aided or automated neuropsychological assessment
systems, and (2) development of an expert system for clinical
neuropsychological assessment. I would like to
hear from anyone who can give me references,
descriptions of work in progress, etc., concerning
either subject.
Many thanks,
--
Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480).
Biopsychology Program, Univ. of Texas at Austin. "There is nothing better
in life than to have a goal and be working toward it." -- Goethe.
------------------------------
Date: 20 Oct 86 09:13:41 EDT (Monday)
From: MJackson.Wbst@Xerox.COM
Subject: Robot Snooker-player
Over the weekend I caught part of a brief report on this on Cable News
Headlines. They showed a large robot arm making a number of impressive
shots, and indicated that the software did shot selection as well.
Apparently this work was done somewhere in Great Britain. Can someone
provide more detail?
Mark
------------------------------
Date: Mon 20 Oct 86 14:27:03-CDT
From: Larry Van Sickle <cs.vansickle@r20.utexas.edu.#Internet>
Reply-to: CS.VANSICKLE@R20.UTEXAS.EDU
Subject: Need reference on HITECH chess machine
Can anyone give me a reference that describes CMU's HITECH
chess machine/program in some detail? A search of
standard AI journals has failed to find one. Thanks,
Larry Van Sickle
cs.vansickle@r20.utexas.edu
Computer Sciences Department, U of Texas at Austin
------------------------------
Date: 20 Oct 86 11:23 PDT
From: Stern.pasa@Xerox.COM
Subject: Is there OOP in AI?
I just looked at the OOPSLA 86 (Object Oriented Programming Systems and
LAnguages) proceedings and found no mention of objects as used for AI.
Much surprised, I have since been told that the referees explicitly
excluded AI references, saying there are AI conferences for that sort of
thing. Going back to the AAAI 86 proceedings, there were no papers on
the use of OOP in AI.
Since then, I have found some references in F. Bancilhon's paper in
SIGMOD record 9/86 to some Japanese papers I need to lay hands on. Am I
missing any large body of current work here in the states on OOP and AI?
Josh
------------------------------
Date: Mon, 20 Oct 86 15:08:49 PLT
From: George Cross <FACCROSS%WSUVM1.BITNET@WISCVM.WISC.EDU>
Subject: PROLOG on IBM MVS
Hi,
I would appreciate knowing of any Prolog implementations on IBM mainframes
that run under MVS (*not* VM). Thanks.
---- George
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
George R. Cross cross@wsu.CSNET
Computer Science Department cross%wsu@csnet-relay.ARPA
Washington State University faccross@wsuvm1.BITNET
Pullman, WA 99164-1210 (509)-335-6319/6636
------------------------------
Date: 18 Oct 86 09:10:45 GMT
From: mcvax!ukc!its63b!epistemi!rda@seismo.css.gov (Robert Dale)
Subject: Info on Computing in Publishing Wanted
I'd be grateful for any leads on computing in publishing -- references to
the literature or products, primarily. I'm not, in the first instance,
interested in desktop publishing -- rather, I'm looking for stuff in book,
journal, magazine and newspaper publishing -- although pointers to any
up-to-date summary articles of what's going on in desktop publishing would
be useful. In particular, I'd be interested to hear of any AI-related
happenings in the publishing area.
I'll summarise any responses I get and repost. Thanks in advance.
--
Robert Dale University of Edinburgh, Centre for Cognitive Science,
2 Buccleuch Place, Edinburgh, EH8 9LW, Scotland.
UUCP: ...!ukc!cstvax!epistemi!rda
JANET: rda@uk.ac.ed.epistemi
------------------------------
Date: 21 Oct 86 13:33:35 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: The Analog/Digital Distinction: Soliciting Definitions
I'd like to test whether there is a coherent formulation of the
analog/digital distinction out there. I suspect that the results will
be surprising.
Engineers and computer scientists seem to feel that they have a
suitable working definition of the distinction, whereas philosophers
have argued that the distinction may not be tenable at all.
Cognitive scientists are especially interested because they are
concerned with analog vs. nonanalog representations. And
neuroscientists are interested in analog and nonanalog processes in
the nervous system.
I have some ideas, but I'll save them until I sample some of what the
Net nets. The ground-rules are these: Try to propose a clear and
objective definition of the analog/digital distinction that is not
arbitrary, relative, or a matter of degree, and that does not lose in
the limit the intuitive distinction it was intended to capture.
One prima facie non-starter: "continuous" vs. "discrete" physical
processes.
Stevan Harnad (princeton!mind!harnad)
------------------------------
Date: 22 Oct 86 12:47:26 PDT (Wednesday)
From: Hoffman.es@Xerox.COM
Subject: Turing on stage
Opening this week in one of London's West End theatres is the play,
"Breaking The Code" by Hugh Whitemore, starring Derek Jacobi as Alan
Turing. The play is based on Andrew Hodges' biography, 'Alan Turing:
The Enigma'. I don't know how much the play covers after the World War
II years. I'd be interested in any reviews. Send to me directly. If
there is interest, I'll summarize for AIList.
-- Rodney Hoffman <Hoffman.es@Xerox.com>
------------------------------
Date: Fri, 17 Oct 86 15:02 CDT
From: PADIN%FNALCDF.BITNET@WISCVM.WISC.EDU
Subject: AT FERMILAB--ES OR NOT, THAT IS THE QUESTION.
My interest in AI was piqued by a blurb on EXPERT SYSTEMS which I
read in the DEC PROFESSIONAL. I immediately saw the possible use of EXPERT
SYSTEMS in my work here at FERMILAB. However, in reading more about the
development of an ES, it appears to be a very long process and useful only
under certain circumstances as outlined by Waterman in his book 'A Guide to
Expert Systems'. He states
"Consider expert systems only if expert system development
is possible, justified, and appropriate."
By 'possible' he means
if [ (task does not require common sense) &
(task requires only cognitive skills) &
(experts can articulate their methods) &
(genuine experts exist) &
(experts agree on solutions) &
(task is not too difficult) &
(task is not poorly understood) ]
then
[ expert system development is POSSIBLE ]
By 'justified' he means
if [ (task solution has a high payoff) or
(human expertise being lost) or
(human expertise scarce) or
(expertise needed in many locations) or
(expertise needed in hostile environment) ]
then
[ expert system development is JUSTIFIED ]
By 'appropriate' he means
if [ (task requires symbol manipulation) &
(task requires heuristic solutions) &
(task is not too easy) &
(task has practical value) &
(task is of manageable size) ]
then
[ expert system approach is APPROPRIATE ]
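[Waterman's three tests combine by conjunction, so they are easy to
mechanize as a quick screening aid. A minimal sketch in Python; the
dictionary keys are paraphrases of his conditions, not his wording,
and the example task profile is hypothetical.

```python
# Waterman's decision criteria as boolean checks over a task profile.
def possible(t):
    return all([not t["requires_common_sense"],
                t["only_cognitive_skills"],
                t["experts_can_articulate"],
                t["genuine_experts_exist"],
                t["experts_agree"],
                not t["too_difficult"],
                not t["poorly_understood"]])

def justified(t):
    return any([t["high_payoff"],
                t["expertise_being_lost"],
                t["expertise_scarce"],
                t["needed_many_locations"],
                t["needed_hostile_environment"]])

def appropriate(t):
    return all([t["symbol_manipulation"],
                t["heuristic_solutions"],
                not t["too_easy"],
                t["practical_value"],
                t["manageable_size"]])

def consider_expert_system(t):
    # "Consider expert systems only if ... possible, justified, and appropriate."
    return possible(t) and justified(t) and appropriate(t)

# A hypothetical task profile (values are illustrative only):
task = {
    "requires_common_sense": False, "only_cognitive_skills": True,
    "experts_can_articulate": True, "genuine_experts_exist": True,
    "experts_agree": True, "too_difficult": False,
    "poorly_understood": False, "high_payoff": True,
    "expertise_being_lost": False, "expertise_scarce": False,
    "needed_many_locations": False, "needed_hostile_environment": False,
    "symbol_manipulation": True, "heuristic_solutions": True,
    "too_easy": False, "practical_value": True, "manageable_size": True,
}
```

Note the asymmetry: any single failed "possible" or "appropriate"
condition vetoes the project, while "justified" needs only one reason
to hold. -- Ed.]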
As OPERATORS at FERMILAB we take the Protons that are extracted from
our MAIN RING and maneuver them to experimental targets. There are several
areas in which I see possible applications of ES in our work.
1) troubleshooting help -- we are responsible for maintaining
a multitude of systems: water, cryogenic, computer, CAMAC,
electrical, safety interlock, and more. quick solutions
to problems save money, time, and maximize data flux to
experiments.
2) operator training -- we have both rapid turnover and a long
training season, i.e., it takes at least a year for an
operator to be trained. thus, we need a large knowledge
base and a sophisticated simulator/tutorial.
3) data acquisition -- we monitor large amounts of status data
and must have out-of-bounds alarms for many devices.
our alarm displays need to be centralized and smart so
that they display actual problems.
4) control system -- we control the path which the Protons
take by controlling the magnetic field strengths of
magnets though which the Protons travel. 'TUNING' a
BEAM LINE (targeting protons onto experimental
apparatus) is an art and as such is subject to the
frailty of human judgement. proper tuning is mandatory
because it increases data flux to experiments,
minimizes radiation intensities, and reduces equipment
damage.
? Are Waterman's criteria reasonable ones on which to make a decision
about pursuing an ES application?
? I've read that the creation of an ES would take about 5 man-years;
does that sound right?
? If an ES is recommended, what would be the next step? Do I simply
call a representative of some AI company and invite them to make
a more informed assessment?
First I must convince myself that an ES is something that is really necessary
and useful. Next I must be able to convince my superiors. And finally, DOE
would need to be convinced!
thanks for any info
Clem <Padin@fnal.bitnet>
------------------------------
End of AIList Digest
********************
∂24-Oct-86 0205 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #232
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Oct 86 02:03:50 PDT
Date: Thu 23 Oct 1986 21:37-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #232
To: AIList@SRI-STRIPE
AIList Digest Friday, 24 Oct 1986 Volume 4 : Issue 232
Today's Topics:
Queries - Neuron Chip & Neural Nets,
Learning - Neural Network Simulations & Cellular Automata,
Psychology - Self-Awareness,
Logic Programming - Bratko Review & Declarative Languages Bibliography
----------------------------------------------------------------------
Date: Tue, 21 Oct 86 20:11:17 pdt
From: Robert Bryant - Cross 8/87 <rbryant%wsu.csnet@CSNET-RELAY.ARPA>
Subject: Neuron Chip
INSIGHT magazine, Oct 13, 1986, page 62, had a brief article about a neuron
chip being tested by AT&T Bell Labs: "...registers, the electronic equivalent
of nerve cell synapses..." If anyone has more detailed information on
this, please respond.
Rob Bryant
rbryant@wsu.csnet
[I believe Bell Labs was among the places putting one of Hopfield's
relaxation nets on a chip. They have also recently announced the
construction of an expert system on a chip (10,000 times as fast ...),
which I assume is a different project. -- KIL]
------------------------------
Date: Thu, 23 Oct 86 15:42:05 -0100
From: "Michael K. Jackman" <mkj%vax-d.rutherford.ac.uk@Cs.Ucl.AC.UK>
Subject: Knowledge representation and Sowa's conceptual graphs
A number of us at Rutherford Appleton Laboratory (IKBS section)
have become interested in Sowa's approach to knowledge representation,
which is based on conceptual graphs (see Clancey's review in AI 27,
1985; Fox, Nature 310, 1984). We believe it to be a particularly powerful
and useful approach to KR, and we are currently implementing
some of his ideas.
We would like to contact other workers in this field
and exchange ideas on Sowa's approach. Anyone interested should
contact me at Rutherford.
Michael K. Jackman
IKBS section - Rutherford Appleton Laboratory (0235-446619)
------------------------------
Date: 20 Oct 86 18:25:50 GMT
From: jam@bu-cs.bu.edu (Jonathan Marshall)
Subject: Re: simulating a neural network
In article <223@eneevax.UUCP> iarocci@eneevax.UUCP (Bill Dorsey) writes:
>
> Having recently read several interesting articles on the functioning of
>neurons within the brain, I thought it might be educational to write a program
>to simulate their functioning. Being somewhat of a newcomer to the field of
>artificial intelligence, my approach may be all wrong, but if it is, I'd
>certainly like to know how and why.
> The program simulates a network of 1000 neurons. Any more than 1000 slows
>the machine down excessively. Each neuron is connected to about 10 other
>neurons.
> .
> .
> .
> The initial results have been interesting, but indicate that more work
>needs to be done. The neuron network indeed shows continuous activity, with
>neurons changing state regularly (but not periodically). The robot (!) moves
>around the screen generally winding up in a corner somewhere where it
>occasionally wanders a short distance away before returning.
> I'm curious if anyone can think of a way for me to produce positive and
>negative feedback instead of just feedback. An analogy would be pleasure
>versus pain in humans. What I'd like to do is provide negative feedback
>when the robot hits a wall, and positive feedback when it doesn't. I'm
>hoping that the robot will eventually 'learn' to roam around the maze with-
>out hitting any of the walls (i.e. learn to use its senses).
> I'm sure there are more conventional ai programs which can accomplish this
>same task, but my purpose here is to try to successfully simulate a network
>of neurons and see if it can be applied to solve simple problems involving
>learning/intelligence. If anyone has any other ideas for which I may test
>it, I'd be happy to hear from you.
Here is a reposting of some references from several months ago.
* For beginners, I especially recommend the articles marked with an asterisk.
Stephen Grossberg has been publishing on neural networks for 20 years.
He pays special attention to designing adaptive neural networks that
are self-organizing and mathematically stable. Some good recent
references are:
(Category Learning):----------
* G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for
a Self-Organizing Neural Pattern Recognition Machine." Computer
Vision, Graphics, and Image Processing. In Press.
G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning
and Recognition: Structural Invariants, Reinforcement, and Evoked
Potentials." In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds),
Pattern Recognition in Animals, People, and Machines. Hillsdale, NJ:
Erlbaum, 1986.
(Learning):-------------------
* S. Grossberg, "How Does a Brain Build a Cognitive Code?" Psychological
Review, 1980 (87), p.1-51.
* S. Grossberg, "Processing of Expected and Unexpected Events During
Conditioning and Attention." Psychological Review, 1982 (89), p.529-572.
S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning,
Perception, Development, Cognition, and Motor Control. Boston:
Reidel Press, 1982.
S. Grossberg, "Adaptive Pattern Classification and Universal Recoding:
I. Parallel Development and Coding of Neural Feature Detectors."
Biological Cybernetics, 1976 (23), p.121-134.
S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation,
and Rhythm. Amsterdam: North Holland, 1986.
* M.A. Cohen and S. Grossberg, "Masking Fields: A Massively Parallel Neural
Architecture for Learning, Recognizing, and Predicting Multiple
Groupings of Patterned Data." Applied Optics, In press, 1986.
(Vision):---------------------
S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor
Control. Amsterdam: North Holland, 1986.
S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping:
Textures, Boundaries, and Emergent Segmentations." Perception &
Psychophysics, 1985 (38), p.141-171.
S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception:
Boundary Completion, Illusory Figures, and Neon Color Spreading."
Psychological Review, 1985 (92), 173-211.
(Motor Control):---------------
S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-
Motor Control: Ballistic Eye Movements. Amsterdam: North-Holland, 1985.
If anyone's interested, I can supply more references.
--Jonathan Marshall
harvard!bu-cs!jam
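One common answer to Dorsey's positive/negative-feedback question is reward-modulated Hebbian-style learning: strengthen the connections that were recently active when feedback is positive (no collision) and weaken them when it is negative (the robot hit a wall). A minimal sketch, with an illustrative learning rate and layout that are assumptions, not part of Dorsey's program:

```python
def update_weights(weights, pre_active, post_active, reward, rate=0.1):
    """Reward-modulated Hebbian sketch: weights[i][j] is the connection
    from neuron i to neuron j; reward is +1 ("pleasure") or -1 ("pain").
    Connections between recently co-active neurons move with the reward."""
    for i in pre_active:
        for j in post_active:
            weights[i][j] += rate * reward
    return weights
```

After a wall collision one would call this with reward=-1 on the neurons active just before the collision, making the behavior that led there less likely to recur.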
------------------------------
Date: 21 Oct 86 17:22:54 GMT
From: arizona!megaron!wendt@ucbvax.Berkeley.EDU
Subject: Re: simulating a neural network
Anyone interested in neural modelling should know about the Parallel
Distributed Processing pair of books from MIT Press. They're
expensive (around $60 for the pair) but very good and quite recent.
A quote:
Relaxation is the dominant mode of computation. Although there
is no specific piece of neuroscience which compels the view that
brain-style computation involves relaxation, all of the features
we have just discussed have led us to believe that the primary
mode of computation in the brain is best understood as a kind of
relaxation system in which the computation proceeds by iteratively
seeking to satisfy a large number of weak constraints. Thus,
rather than playing the role of wires in an electric circuit, we
see the connections as representing constraints on the co-occurrence
of pairs of units. The system should be thought of more as "settling
into a solution" than "calculating a solution". Again, this is an
important perspective change which comes out of an interaction of
our understanding of how the brain must work and what kinds of processes
seem to be required to account for desired behavior.
(Rumelhart & McClelland, Chapter 4)
Alan Wendt
U of Arizona
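The "settling into a solution" the quoted passage describes can be illustrated with a tiny Hopfield-style relaxation net that flips binary units until the weak constraints encoded in its weights are as satisfied as possible. The network below is an illustrative sketch, not an example from the book:

```python
# A minimal relaxation sketch: binary (+1/-1) units update asynchronously,
# each aligning with the weighted "vote" of its neighbors, until the
# network settles into a stable state that satisfies the constraints.
def settle(units, weights, steps=100):
    for _ in range(steps):
        changed = False
        for i in range(len(units)):
            net = sum(weights[i][j] * units[j]
                      for j in range(len(units)) if j != i)
            new = 1 if net >= 0 else -1
            if new != units[i]:
                units[i] = new
                changed = True
        if not changed:  # stable: the net has "settled"
            break
    return units
```

A positive weight between two units is a weak constraint that they agree; starting them in disagreement, the net settles into one of the agreeing states rather than "calculating" an answer.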
------------------------------
Date: 22 Oct 86 13:58:12 GMT
From: uwmcsd1!uwmeecs!litow@unix.macc.wisc.edu (Dr. B. Litow)
Subject: cellular automata
Ed. Stephen Wolfram. Contains many papers by Wolfram.
Available from Taylor & Francis, Intl. Publications Service, 242 Cherry St.,
Philadelphia 19106-1906.
------------------------------
Date: 19 Oct 86 23:10:13 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Subject: A pure conjecture on the nature of the self
Conjecture: the "sense of identity" comes from the same
mechanism that makes tickling yourself ineffective.
This is not a frivolous comment. The reflexes behind tickling
seem to be connected to something that has a good way of deciding
what is self and what isn't. There are repeatable phenomena here that
can be experimented with. This may be a point of entry for work on some
fundamental questions.
John Nagle
[I apologize for having sent out a reply to this message before
putting this one in the digest. -- KIL]
------------------------------
Date: 21 Oct 86 18:19:52 GMT
From: cybvax0!frog!tdh@eddie.mit.edu (T. Dave Hudson)
Subject: Re: A pure conjecture on the nature of the self
> Conjecture: the "sense of identity" comes from the same
> mechanism that makes tickling yourself ineffective.
Suppose that tickling yourself may be ineffective because of your
mental focus. Are you primarily focusing on the sensations in the
hand that is doing the tickling, not focusing, focusing on the idea
that it will of course be ineffective, or focusing on the sensations
created at the tickled site?
One of my major impediments to learning athletics was that I had no
understanding of what it meant when those rare competent teachers told
me to feel the prescribed motion. It requires an act of focusing on
the sensations in the different parts of your body as you move. Until
you become aware of the sensations, you can't do anything with them.
(Once you're aware of them, you have to learn how to deal with a
multitude of them, but that's a different issue.)
Try two experiments.
1) Walk forward, and concentrate on how your back feels. Stop, then
place your hand so that the palm and fingertips cover your lower
back at the near side of the spine. Now walk forward again.
Notice anything new?
2) Run one hand's index fingertip very lightly over the back of the
other hand, so lightly that you can barely feel anything on the back
of the other hand, so lightly that maybe you're just touching the
hairs on that hand and not the skin. Close your eyes and try to
sense where on the back of that hand the fingertip is as it moves.
Now do you feel a tickling sensation?
David Hudson
------------------------------
Date: 16 Oct 86 07:48:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Reviews
[Forwarded from the Prolog Digest by Laws@SRI-STRIPE.]
I'm in the middle of reading the Bratko book, and I would give
it a very high rating. The concepts are explained very clearly,
there are lots of good examples, and the applications covered
are of high interest. Part I (chapters 1-8) is about Prolog
per se. Part II (chapters 9-16) shows how to implement many
standard AI techniques:
chap. 9 - Operations on Data Structures
chap. 10 - Advanced Tree Representations
chap. 11 - Basic Problem-solving Strategies
chap. 12 - Best-first: a heuristic search principle
chap. 13 - Problem reduction and AND/OR graphs
chap. 14 - Expert Systems
chap. 15 - Game Playing
chap. 16 - Pattern-directed Programming
Part I has 188 pages, part II has 214.
You didn't mention Programming in Prolog by Clocksin & Mellish -
this is also very good, and covers some things that Bratko
doesn't (it's more concerned with non-AI applications), but all
in all, I slightly prefer Bratko's book.
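The best-first principle of Bratko's chapter 12 is easy to sketch (in Python here rather than Prolog, purely for brevity): always expand the frontier node with the lowest heuristic estimate. The graph and heuristic are supplied by the caller; this sketch is not Bratko's code:

```python
import heapq

def best_first(start, goal, neighbors, h):
    """Expand the most promising (lowest-h) frontier node first.
    neighbors(n) yields successors; h(n) is the heuristic estimate."""
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # goal unreachable
```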
-- John Cugini
------------------------------
Date: Mon, 6 Oct 86 15:47:15 MDT
From: Lauren Smith <ls%lambda@LANL.ARPA>
Subject: Bibliography on its way
[Forwarded from the Prolog Digest by Laws@SRI-STRIPE.]
I have just sent out the latest update of the Declarative
Languages bibliography. Please notify the appropriate
people at your site - especially if there were several
requests from your site, and you became the de facto
distributor. Again, the bibliography is 24 files.
This is the index for the files, so you can verify that you
received everything.
ABDA76a-AZAR85a BACK74a-BYTE85a CAMP84a-CURR72a DA83a-DYBJ83b
EGAN79a-EXET86a FAGE83a-FUTO85a GABB84a-GUZM81a HALI84a-HWAN84a
ICOT84a-IYEN84a JACOB86a-JULI82a KAHN77a-KUSA84b LAHT80a-LPG86a
MACQ84a-MYCR84a NAGAI84a-NUTE85a OHSU85a-OZKA85a PAPAD86a-PYKA85a
QUI60 RADE84a-RYDE85a SAIN84a-SZER82b TAGU84a-TURN85b
UCHI82a-UNGA84 VALI85-VUIL74a WADA86a-WORL85a YAGH83a-YU84a
There has been a lot of interest regarding the formatting of
the bibliography for various types of word processing systems.
The biblio is maintained (in the UK) in a raw format, hence that
is the way that I am distributing it. Since everyone uses
different systems, it seems easiest to collect a group of macros
that convert RAW FORMAT ===> FAVORITE BIBLIO FORMAT and distribute
them. So, if you have a macro that does the conversion please
advertise it on the net or better yet, let me know so I can let
everyone else know about it.
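The first half of such a conversion macro is just parsing the raw entries. A hedged sketch, assuming the raw format uses refer-style "%-field" lines (as the ai.bib listings elsewhere in this digest do: %A author, %T title, %I publisher, %D date); continuation lines are not handled here:

```python
def parse_refer(text):
    """Split refer-style '%-field' text into one dict per entry.
    Entries are separated by blank lines; repeated fields (e.g. %A
    for multiple authors) accumulate into a list."""
    entries, current = [], {}
    for line in text.splitlines():
        if line.startswith("%"):
            key, _, value = line.partition(" ")
            current.setdefault(key, []).append(value.strip())
        elif not line.strip() and current:
            entries.append(current)
            current = {}
    if current:
        entries.append(current)
    return entries
```

From the resulting dictionaries, an emitter for any favorite bibliography format is straightforward to write.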
If you have any additions to make, please send them to:
-- Andy Cheese at
abc%computer-science.nottingham.ac.uk@cs.ucl.ac.uk
or Lauren Smith at ls@lanl.arpa
Thank you for your interest.
-- Lauren Smith
[ Starting with the next issue, I will be including one file per issue
of the Digest until all twenty-four files are distributed. -ed ]
[AIList will not be carrying this bibliography. -- KIL]
------------------------------
End of AIList Digest
********************
∂24-Oct-86 0652 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #233
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Oct 86 06:52:43 PDT
Date: Thu 23 Oct 1986 22:05-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #233
To: AIList@SRI-STRIPE
AIList Digest Friday, 24 Oct 1986 Volume 4 : Issue 233
Today's Topics:
Bibliography - ai.bib41C
----------------------------------------------------------------------
Date: Wed, 20 Apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: defs for ai.bib41C
D MAG88 Robotersysteme\
%V 2\
%N 3\
%D 1986
D MAG89 Journal of Robotic Systems\
%V 3\
%N 3\
%D Autumn 1986
D MAG90 Pattern Recognition\
%V 19\
%N 5\
%D 1986
D MAG91 International Journal of Production Research\
%V 24\
%N 5\
%D SEP-OCT 1986
D MAG92 Fuzzy Sets and Systems\
%V 18\
%N 3\
%D APR 1986
D BOOK56 Advances in Automation and Robotics\
%V 1\
%I JAI Press\
%D 1985\
%C Greenwich, Connecticut
D MAG93 COMPINT 85\
%D 1985
D MAG94 The Second Conference on Artificial Intelligence Applications\
%D 1985
------------------------------
Date: Wed, 20 Apr 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: ai.bib41C
%A D. Partridge
%T Artificial Intelligence Applications in the Future of Software Engineering
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AA08 AI01
%X ISBN 0-20315-3 $34.95 241 pages
%A Richard Forsyth
%A Roy Rada
%T Machine Learning Applications in Expert Systems and Information Retrieval
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AA15 AI01 AI04
%X ISBN 0-20309-9 Cloth $49.95 , ISBN 0-20318-18 $24.95 paper 277 pages
%A W. John Hutchins
%T Machine Translation Past, Present and Future
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AI02
%X 380 pages 0-2031307 1986 $49.95
%A Karamjit S. Gill
%T Artificial Intelligence for Society
%I John Wiley and Sons
%C New York
%D 1986
%K O05 AT15
%X 280 pages 1-90930-0 1986 $34.95
%A Donald Michie
%T On Machine Intelligence
%I John Wiley and Sons
%C New York
%D 1986
%K AA17 AI07 AI08 AI01 AT15
%X 260 pages 0-20335-8 1986 $29.95
%A Chris Naylor
%T Building Your Own Expert System
%I John Wiley and Sons
%C New York
%D 1985
%K AI01 AT15
%X 249 pages 0-20172-X 1985 $15.95 paper
%A Peter Bishop
%T Fifth Generation Computers
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 GA01 GA02 GA03
%X 166 pages 0-20269-6 1986 $29.95
%A Jerry M. Rosenberg
%T Dictionary of Artificial Intelligence and Robotics
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AI16 AI07
%X 225 pages 1-08982-0 $24.95 cloth; 1-84981-2 $14.95 paper
%A Peter S. Sell
%T Expert Systems
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AI01
%X 99 pages 0-20200-9 $14.95 paper
%A G. L. Simons
%T Expert Systems and Micros
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 H01 AI01
%X 247 pages 0-20277-7 $19.95 paper
%A G. L. Simons
%T Is Man a Robot?
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AI16 AI08
%X 200 pages 1-91106-2 $18.95 paper
%A G. L. Simons
%T Introducing Artificial Intelligence
%I John Wiley and Sons
%C New York
%D 1985
%K AT08 AT15 AI16
%X 281 pages 0-20166-5 $19.95 paper "completely non-technical"
%A Yoshiaki Shirai
%A Jun-Ichi Tsujii
%T Artificial Intelligence Concepts, Techniques and Applications
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 GA01 AI16
%X 177 pages 1-90581-X $19.95 "Drawn from the Fifth Generation Computer
Program"
%A Luc Steels
%A John A. Campbell
%T Progress in Artificial Intelligence
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AI16 GA03
%X "Drawn from the European Conference on AI"
%A Tohru Moto-Oka
%A Masaru Kitsuregawa
%T The Fifth Generation Computer: The Japanese Challenge
%I John Wiley and Sons
%C New York
%D 1985
%K GA01 AT15
%X 122 pages 1-90739-1 1985 $17.95 paper
%A Leonard Uhr
%T Parallel Multicomputers and Artificial Intelligence
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 H03
%X 150 pages 1-84979-0 $32.95
%A J. E. Hayes
%A Donald Michie
%T Intelligent Systems The Unprecedented Opportunity
%I John Wiley and Sons
%C New York
%D 1984
%K AT15 AI07 AA10 AA07
%X 206 pages 0-20139-8 1984 $19.95 paper
%A M. Yazdani
%A N. Narayanan
%T Artificial Intelligence: Human Effects
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 O05 AA07 AA01
%X 318 pages 0-20239-4 1985 $27.95
%A Richard Ennals
%T Artificial Intelligence: Approaches to Logical Reasoning and Historical
Research
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AA11 AA25 AA07 T02
%X 172 pages 0-20181-9 1985 $29.95
%A S. Torrance
%T The Mind and The Machine
%I John Wiley and Sons
%C New York
%D 1984
%K AT15 AI16 AA08
%X 213 pages 0-20104-5 1984 $31.95
%A Stuart C. Shapiro
%T Encyclopedia of Artificial Intelligence
%I John Wiley and Sons
%C New York
%D 1987
%K AT15 AI16
%X 1500 pages 8.5" by 11" in two volumes 1-80748-7 due out May 1, 1987
$149.95 until Sep 1, 1987 and $175.00 thereafter
%A Stephen H. Kaisler
%T Interlisp The Language and its Usage
%I John Wiley and Sons
%C New York
%D 1986
%K T01 AT15
%X 1,144 pages 1-81644-2 1986 $49.95
%A Christian Queinnec
%T Lisp
%I John Wiley and Sons
%C New York
%D 1985
%K T01 AT15
%X 156 pages 0-20226-2 1985 $15.95 paper (translated by Tracy Ann Lewis)
%A J. A. Campbell
%T Implementations of Prolog
%I John Wiley and Sons
%C New York
%D 1984
%K T02 T01 AT15
%X 391 pages 0-20045-6 1984 $32.95 paper
%A W. D. Burnham
%A A. R. Hall
%T Prolog Programming and Applications
%I John Wiley and Sons
%C New York
%D 1985
%K T02 AT15
%X 114 pages 0-20263-7 1985 $16.95 paper
%A Deyi Li
%T A Prolog Database System
%I John Wiley and Sons
%C New York
%D 1984
%K T02 AA09 AT15
%X 207 pages 1-90429-5 1984
%A Rosalind Barrett
%A Allan Ramsay
%A Aaron Sloman
%T Pop-11 A Practical Language for Artificial Intelligence
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AI01 AI05 AI06
%X 232 pages 0-20237-8 1985 $19.95
%A Hugh de Saram
%T Programming in Micro-Prolog
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 T02
%X 166 pages 0-20218-1 1985 $21.95 paper
%A Brian Sawyer
%A Dennis Foster
%T Programming Expert Systems in Pascal
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AI01 H01
%X 200 pages 1-84267-2 1986 $19.95 paper
%A Brian Sawyer
%A Dennis Foster
%T Programming Expert Systems in Modula-2
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 AI01
%X 224 pages 1-85036-5 1986 $24.95 paper
%A K. Sparck-Jones
%A Y. Wilks
%T Automatic Natural Language Parsing
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AI02
%X 208 pages 0-20165-7 1985 $24.95 paper
%A C. S. Mellish
%T Computer Interpretation of Natural Language Descriptions
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AI02
%X 182 pages 0-20219-x 1985 $24.95
%A M. Wallace
%T Communicating with Databases in Natural Language
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AA09 AI02
%X 170 pages 0-20105-3 1984 $31.95
%A Mike James
%T Classification Algorithms
%I John Wiley and Sons
%C New York
%D 1986
%K AT15 O06
%X 209 pages 1-84799-2 1986 $34.95
%A Satosi Watanabe
%T Pattern Recognition: Human and Mechanical
%I John Wiley and Sons
%C New York
%D 1985
%K AT15 AI06 AI08
%X 352 pages 1-80815-6 1985 $44.95
.br
"Shows that all the known pattern recognition algorithms can be derived
from the principle of minimum entropy."
%A Donald A. Norman
%A Stephen W. Draper
%T User Centered System Design
%I Lawrence Erlbaum Associates Inc.
%C Hillsdale, NJ
%K AI02
%D 1986
%X 1986 544 pages 0-89859-872-9 paper prepaid $19.95
%A Robert J. Baron
%T The Cerebral Computer
%I Lawrence Erlbaum Associates Inc.
%C Hillsdale, NJ
%K AT15 AI08
%T Portrait: DFG Special Research Topic "Artificial Intelligence"
%J Die Umschau
%V 86
%N 9
%D SEP 1986
%K AI16 AT08
%X German, Abstract in English and German
%A P. Freyberger
%A P. Kampmann
%A G. Schmidt
%T A Knowledged [sic] Based Navigation Method for Autonomous
Mobile Robots (german)
%J MAG88
%P 149-162
%K AI07 AA19
%X German
%A P. M. Frank
%A N. Becker
%T Robot Activation with a Directed, Fixing and Object-Extracting Camera
for Data Reduction
%J MAG88
%P 188
%K AI07 AI06
%X German
%A William J. Palm
%A Ramiro Liscano
%T Integrated Design of an End Effector for a Visual Servoing Algorithm
%J MAG89
%P 221-236
%K AI07 AI06
%A K. Cheng
%A M. Idesawa
%T A Simplified Interpolation and Conversion Method of Contour Surface Model to
Mesh Model
%J MAG89
%P 249-258
%K AI07 AI06
%A Genichiro Kinoshita
%A Masanori Idesawa
%A Shigeo Naomi
%T Robotic Range Sensor with Projection of Bright Ring Pattern
%J MAG89
%P 249-258
%K AI07 AI06
%A M. G. Thomason
%A E. Granum
%A R. E. Blake
%T Experiments in Dynamic Programming Inference of Markov Networks with Strings
Representing Speech Data
%J MAG90
%P 343-352
%K AI05
%A M. Juhola
%T A Syntactic Method for Analysis of Saccadic Eye Movements
%J MAG90
%P 353-360
%K AA10
%A H. D. Cheng
%A K. S. Fu
%T Algorithm Partition and Parallel Recognition of General Context-Free
Languages using Fixed-size VLSI Architecture
%J MAG90
%P 361-372
%K AI06 H03 O06
%A E. S. Baugher
%A A. Rosenfeld
%T Boundary Localization in an Image Pyramid
%J MAG90
%P 373-396
%K AI06 H03
%A E. A. Parrish
%A W. E. McDonald, Jr
%T An Adaptive Pattern Analysis System for Isolating EMI
%J MAG90
%P 397-406
%K AA04 AI06
%A E. Tanaka
%A T. Toyama
%A S. Kawai
%T High Speed Error Correction of Phoneme Sequences
%J MAG90
%P 407-412
%K AI05
%A K. Jajuga
%T Bayes Classification Rule for the General Discrete Case
%J MAG90
%P 413-416
%K O04
%A N. N. Abdelmalek
%T Noise Filtering in Digital Images and Approximation Theory
%J MAG90
%P 417
%K AI06
%A S. R. T. Kumara
%A S. Hoshi
%A R. L. Kashyap
%A C. L. Moodie
%A T. C. Chang
%T Expert Systems in [sic]
%J MAG91
%P 1107-1126
%K AI01
%A H. Lipkin
%A L. E. Torfason
%A J. Duffy
%T Efficient Motion Planning for a Planar Manipulator Based on Dexterity
and Workspace Geometry
%J MAG91
%P 1235
%K AI06 AI09
%A R. R. Yager
%T A Characterization of the Extension Principle
%J MAG92
%P 205-218
%K O04
%A J. F. Baldwin
%T Automated Fuzzy and Probabilistic Inference
%J MAG92
%P 219-236
%K O04 AI01
%A A. F. Blishun
%T Fuzzy Adaptive Learning Model of Decision-Making Process
%J MAG92
%P 273-282
%K O04 AI13 AI04
%A A. O. Esogbue
%T Optimal Clustering of Fuzzy Data via Fuzzy Dynamic
Programming
%J MAG92
%P 283-298
%K O04 O06
%A J. Kacprzyk
%T Towards 'Human-Consistent' Multistage Decision Making
and Control Models Using Fuzzy Sets and Fuzzy Logic
%J MAG92
%P 299-314
%K O04 AI08 AI13
%A B. R. Gaines
%A M. L. G. Shaw
%T Induction of Inference Rules for Expert Systems
%J MAG92
%P 315-328
%K AI04 AI01 O04
%A M. Sugeno
%A G. T. Kang
%T Fuzzy Modeling and Control of Multilayer Incinerator
%J MAG92
%P 329-346
%K O04 AA20
%A Peizhuang Wang
%A Xihu Liu
%A E. Sanchez
%T Set-valued Statistics and its Application to Earthquake Engineering
%J MAG92
%P 347
%K O04 AA05
%A J. L. Mundy
%T Robotic Vision
%B BOOK56
%P 141-208
%K AI06 AI07
%A R. Bajcsy
%T Shape from Touch
%B BOOK56
%P 209-258
%K AI06 AI07
%A T. M. Husband
%T Education and Training in Robotics
%I IFS Publications Ltd
%C Bedford
%D 1986
%K AI06 AT15 AT18
%X multiple articles, 315 pages $54.00 ISBN 0-948507-04-7
%A N. Y. Foo
%T Dewey Indexing of Prolog Traces
%J The Computer Journal
%V 29
%N 1
%D FEB 1986
%P 17-19
%K T02
%A M. E. Daube-Witherspoon
%A G. Muehllehner
%T An Iterative Image Space Reconstruction Algorithm Suitable for Volume ECT
%J IEEE Trans on Med. Imaging
%V 5
%N 2
%D JUN 1986
%P 61-66
%K AA01 AI06
%A B. Zavidovique
%A V. Serfaty-Dutron
%T Programming Facilities in Image Processing
%J MAG93
%P 804-806
%K AI06
%A J. R. Ward
%A B. Blesser
%T Methods for Using Interactive Hand-print Character Recognition for
Computer Input
%J MAG93
%P 798-803
%K AI06
%A Y. Tian-Shun
%A T. Yong-Lin
%T The Conceptual Model for Chinese Language Understanding and its Man-Machine
Paraphrase
%J MAG93
%P 795-797
%K AI02
%A G. Sabah
%A A. Vilnat
%T A Question Answering System which Tries to Respect Conversational Rules
%J MAG93
%P 781-785
%K AI02
%A J. Rouat
%A J. P. Adoul
%T Impact of Vector Quantization for Connected Speech Recognition Systems
%J MAG93
%P 778-780
%K AI05
%A G. G. Pieroni
%A O. G. Johnson
%T A Methodology for Visual Recognition of Waves in a Wave Field
%J MAG93
%P 774-77
%K AI06
%A G. J. McMillan
%T Vimad: A Voice Interactive Maintenance Aiding Device
%J MAG93
%P 768-771
%K AI05 AA21
%A D. Laurendeau
%A D. Poussart
%T A Segmentation Algorithm for Extracting 3D Edges from Range Data
%J MAG93
%P 765-767
%K AI06
%A F. Kimura
%A T. Sata
%A K. Kikai
%T A Fast Visual Recognition System of Mechanical Parts by Use of Three
Dimensional Model
%J MAG93
%P 755-759
%K AI06 AA05 AA26
%A M. L. G. Shaw
%A B. R. Gaines
%T The Infrastructure of Fifth Generation Computing
%J MAG93
%P 747-751
%K GA01 AT19
%A W. Doster
%A R. Oed
%T On-line Script Recognition - A Userfriendly Man Machine Interface
%J MAG93
%P 741-743
%K AI06 AA15
%A R. Descout
%T Applications of Speech Technology A Review of the French Experience
%J MAG93
%P 735-740
%K AI05 GA03
%A Y. Ariki
%A K. Wakimoto
%A H. Shieh
%A T. Sakai
%T Automatic Transformation of Drawing Images Based on Geometrical Structures
%J MAG93
%P 719-723
%K AI06 AA05
%A Z. X. Yang
%T On Intelligent Tutoring System for Natural Language
%J MAG93
%P 715-718
%K AI02 AA07
%A L. Xu
%A J. Chen
%T Autobase: A System which Automatically Establishes the Geometry Knowledge
Base
%J MAG93
%P 708-714
%K AI01 AA13
%A G. Pask
%T Applications of Machine Intelligence to Education, Part I Conversation
System
%J MAG93
%P 682
%K AI02 AA07
%A Y. H. Jea
%A W. H. Wang
%T A Unified Knowledge Representation Approach in Designing an Intelligent
tutor
%J MAG93
%P 655-657
%K AA07 AI16
%A I. M. Begg
%T An Intelligent Authoring System
%J MAG93
%P 611-613
%K AA07
%A J. C. Perez
%A R. Castanet
%T Intelligent Robot Simulation System: The Vision Guided Robot Concept
%J MAG93
%P 489-492
%K AI06 AI07
%A B. Mack
%A M. M. Bayoumi
%T An Ultrasonic Obstacle Avoidance System for a Unimate Puma 550 Robot
%J MAG93
%P 481-483
%K AI06 AI07
%A R. A. Browse
%A S. J. Lederman
%T Feature-Based Robotic Tactile Perception
%J MAG93
%P 455-458
%K AI06 AI07
%A R. S. Wall
%T Constrained Example Generation for VLSI Design
%J MAG93
%P 451-454
%K AA04
%A L. P. Demers
%A C. Roy
%A E. Cerney
%A J. Gecsei
%T Integration of VLSI Symbolic Design Tools
%J MAG93
%P 308-312
%K AA04
%A R. Wilson
%T From Signals to Symbols - The Inference Structure of Perception
%J MAG93
%P 221-225
%K AI08 AI06
%A C. Hernandez
%A A. Alonso
%A J. E. Arias
%T Computerized Monitoring as an Aid to Obstetrical Decision Making
%J MAG93
%P 203-206
%K AA01
%A M. M. Gupta
%T Approximate Reasoning in the Evolution of Next Generation of Expert
Systems
%J MAG93
%P 201-202
%K O04 AI01
%A W. Wei-Tsong
%A P. Wei-Min
%T An Effective Searching Approach to Processing Broken Lines in an
Image
%J MAG93
%P 198-200
%K AI06
%A J. F. Sowa
%T Doing Logic on Graphs
%J MAG93
%P 188
%K AI16
%A P. T. Cox
%A T. Pietrzykowski
%T Lograph: A Graphical Logic Programming Language
%J MAG93
%P 145-151
%K AI10
%A D. A. Thomas
%A W. R. Lalonde
%T ACTRA: The Design of an Industrial Fifth Generation Smalltalk
%J MAG93
%P 138-140
%A Y. Wada
%A Y. Kobayashi
%A T. Mitsuta
%A T. Kiguchi
%T A Knowledge Based Approach to Automated Pipe-Route Planning in Three-
Dimensional Plant Layout Design
%J MAG93
%P 96-102
%A N. P. Suh
%A S. H. Kim
%T On an Expert System for Design and Manufacturing
%J MAG93
%P 89-95
%K AA26 AA05
%A C. Y. Suen
%A A. Panoutsopoulos
%T Towards a Multi-lingual Character Generator
%J MAG93
%P 86-88
%K AI02
%A K. Shirai
%A Y. Nagai
%A T. Takezawa
%T An Expert System to Design Digital Signal Processors
%J MAG93
%P 83-85
%K AI01 AA04
%A D. Sriram
%A R. Banares-Alcantara
%A V. Venkatasubramanian
%A A. Westerberg
%A M. Rychener
%T Knowledge-Based Expert Systems for Chemical Engineering
%J MAG93
%P 79-82
%K AI01 AA05
%A P. Savard
%A G. Bonneau
%A G. Tremblay
%A R. Cardinal
%A A. R. Leblanc
%A P. Page
%A R. A. Nadeau
%T Interactive Electrophysiologic Mapping System for On-Line Analysis of
Cardiac Activation Sequences
%J MAG93
%P 76-78
%K AA01
%A R. Bisiani
%T VLSI Custom Architectures for Artificial Intelligence
%J MAG93
%P 27-31
%A L. H. Bouchard
%A L. Emirkanian
%T A Formal System for the Relative Clauses in French and its Uses in
CAL
%J MAG93
%P 32-34
%K AI02 AA07
%A G. Bruno
%A A. Elia
%A P. Laface
%T A Rule-Based System for Production Scheduling
%J MAG93
%P 35-39
%K AA05 AI01
%A J. F. Cloarec
%A J. P. Cudelou
%A J. Collet
%T Modeling Switching System Specifications as a Knowledge Base
%J MAG93
%P 40-44
%K AA04
%A B. R. Gaines
%A M. L. G. Shaw
%T Knowledge Engineering for Expert Systems
%J MAG93
%P 45-49
%K AI01
%A B. Hardy
%A P. Bosc
%A A. Chauffaut
%T A Design Environment for Dialogue Oriented Applications
%J MAG93
%P 53-55
%A P. Haren
%A M. Montalban
%T Prototypical Objects for CAD Expert Systems
%J MAG93
%P 53-55
%K AA05 AI01 AI16
%A S. J. Mrchev
%T A Unit Imitating the Functions on the Human Operative Memory
%J MAG93
%P 56-67
%K AI08
%A B. Phillips
%A S. L. Messick
%A M. J. Freiling
%A J. H. Alexander
%T INKA: The INGLISH Knowledge Acquisition Interface for Electronic
Instrument Troubleshooting Systems
%J MAG94
%P 676-682
%K AA04 AI02 AA21
%A D. V. Zelinski
%A R. N. Cronk
%T The ES/AG Environment-Its Development and Use in Expert System Applications
%J MAG94
%P 671-675
%K AI01 T03
%A K. H. Wong
%A F. Fallside
%T Dynamic Programming in the Recognition of Connected Handwritten Script
%J MAG94
%P 666-670
%K AI06
%A V. R. Waldron
%T Process Tracing as a Method for Initial Knowledge Acquisition
%J MAG94
%P 661-665
%K AI01 AI16
%A H. Van Dyke Parunak
%A B. W. Irish
%A J. Kindrick
%A P. W. Lozo
%T Fractal Actors for Distributed Manufacturing Control
%J MAG94
%P 653-660
%K H03 AA26
%A W. K. Utt
%T Directed Search with Feedback
%J MAG94
%P 647-652
%K AI03
%A J. T. Tou
%A C. L. Huang
%T Recognition of 3-D Objects Via Spatial Understanding of 2-D Images
%J MAG94
%P 641-646
%K AI06
%A P. Snow
%T Tatting Inference Nets with Bayes Theorem
%J MAG94
%P 635-640
%K AI16 O04
%A Y. Shoham
%T Reasoning About Causation in Knowledge-Based Systems
%J MAG94
%P 629-634
%K AI16
%A H. C. Shen
%A G. F. P. Signarowski
%T A Knowledge Representation for Roving Robots
%J MAG94
%P 629-634
%K AI07 AI16 AA19
%A D. Schwartz
%T One Cornerstone in the Mathematical Foundations for a System of Fuzzy-
Logic Programming
%J MAG94
%P 618-620
%K AI10 O04
%A P. R. Schaefer
%A I. H. Bozma
%A R. D. Beer
%T Extended Production Rules for Validity Maintenance
%J MAG94
%P 613-617
%K AI01 AI15
%A M. C. Rowe
%A R. Keener
%A A. Veitch
%A R. B. Lantz
%T E. T. Expert Technician/Experience Trapper
%J MAG94
%P 607-612
%K AA04 AA21
%A C. E. Riese
%A S. M. Zubrick
%T Using Rule Induction to Combine Declarative and Procedural Knowledge
Representations
%J MAG94
%P 603-606
%K AI16
%A D. S. Prerau
%A A. S. Gunderson
%A R. E. Reinke
%A S. K. Goyal
%T The COMPASS Expert System: Verification, Technology Transfer, and
Expansion
%J MAG94
%P 597-602
%K AI01
%A B. Pinkowski
%T A Lisp-Based System for Generating Diagnostic Keys
%J MAG94
%P 592-596
%K T01 AA21
%A S. R. Mukherjee
%A M. Sloan
%T Positional Representation of English Words
%J MAG94
%P 587-591
%K AI02
%A J. H. Martin
%T Knowledge Acquisition Through Natural Language Dialogue
%J MAG94
%P 582-586
%K AI01 AI02
%A D. M. Mark
%T Finding Simple Routes; "Ease of Description" as an Objective Function
in Automated Route Selection
%J MAG94
%P 577-581
%A S. Mahalingam
%A D. D. Sharma
%T WELDEX-An Expert System for Nondestructive Testing of Welds
%J MAG94
%P 572-576
%K AI01 AA05 AA21
%A J. Liebowitz
%T Evaluation of Expert Systems: An Approach and Case Study
%J MAG94
%P 564-571
%K AI01
%A S. J. Laskowski
%A H. J. Antonisse
%A R. P. Bonasso
%T Analyst II: A Knowledge-Based Intelligence Support System
%J MAG94
%P 558-563
%K AA18
%A Ronald Baecker
%A William Buxton
%T Readings in Human-Computer Interaction: A Multidisciplinary Approach
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%X 650 pages ISBN 0-934613-24-9 paperbound $26.95
%T Proceedings: Graphics Interface '86/Vision Interface '86
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%K AI06
%X 402 pages paperbound ISSN 0713-5424 $35.00
%A Peter Politakis
%T Empirical Analysis for Expert Systems
%I Morgan Kaufmann
%C Los Altos, California
%D 1985
%K AI01 AA01 rheumatology
%X 187 pages paperbound ISBN 0-273-08663-4 $22.95
.br
Describes SEEK which was used to develop an expert system for
rheumatology
%A David Brown
%A B. Chandrasekaran
%T Design Problem Solving: Knowledge Structures and Control Strategies
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%K AA05
%X 200 pages paperbound ISBN 0-934613-07-9 $22.95
%A W. Lewis Johnson
%T Intention-Based Diagnosis of Errors in Novice Programs
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%K AA07 AA08 Proust
%X 1986, 333 pages, ISBN 0-934613-19-2
%A Etienne Wenger
%T Artificial Intelligence and Tutoring Systems: Computational Approaches
to the Communication of Knowledge
%I Morgan Kaufmann
%C Los Altos, California
%D Winter 1986-1987
%K AA07 AI02
%X 350 pages, hardbound, ISBN 0-934613-26-5
%A John Kender
%T Shape From Texture
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%K AI06
%X paperbound, ISBN 0-934613-05-2 $22.95
%A David Touretzky
%T The Mathematics of Inheritance Systems
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%K AI16
%X paperbound, 220 pages, ISBN 0-934613-06-0 $22.95
%A Ernest Davis
%T Representing and Acquiring Geographic Knowledge
%I Morgan Kaufmann
%C Los Altos, California
%D 1986
%K AI16
%X paperbound, 240 pages, ISBN 0-934613-22-2 $22.95
------------------------------
End of AIList Digest
********************
∂24-Oct-86 1125 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #234
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Oct 86 11:25:49 PDT
Date: Thu 23 Oct 1986 22:15-PDT
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #234
To: AIList@SRI-STRIPE
AIList Digest Friday, 24 Oct 1986 Volume 4 : Issue 234
Today's Topics:
Philosophy - Intelligence, Understanding
----------------------------------------------------------------------
Date: Wed, 22 Oct 86 09:49 CDT
From: From the desk of Daniel Paul
<"NGSTL1::DANNY%ti-eg.csnet"@CSNET-RELAY.ARPA>
Subject: AI vs. RI
In the last AI digest (V4 #226), Daniel Simon writes:
>One question you haven't addressed is the relationship between intelligence and
>"human performance". Are the two synonymous? If so, why bother to make
>artificial humans when making natural ones is so much easier (not to mention
>more fun)?
This is a question that has been bothering me for a while. When it is so much
cheaper (and possible now, while true machine intelligence may be just a dream),
why are we wasting time training machines when we could be training humans
instead? The only reason that I can see is that intelligent systems can be made
small enough and light enough to sit on bombs. Are there any other reasons?
Daniel Paul
danny%ngstl1%ti-eg@csnet-relay
------------------------------
Date: 21 Oct 86 14:43:22 GMT
From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col.
G. L. Sicherman)
Subject: Re: extended Turing test
> It is not always clear which of the two components a sceptic is
> worrying about. It's usually (ii), because who can quarrel with the
> principle that a veridical model should have all of our performance
> capacities?
Did somebody call me? Anyway, it's misleading to propose that a
veridical model of _our_ behavior ought to have our "performance
capacities." Function and performance are relative to the user;
in a human context they have no meaning, except to the extent that
we can be said to "use" one another. This context is political
rather than philosophical.
I do not (yet) quarrel with the principle that the model ought to
have our abilities. But to speak of "performance capacities" is
to subtly distort the fundamental problem. We are not performers!
POZZO: He used to dance the farandole, the fling, the brawl, the jig,
the fandango and even the hornpipe. He capered. For joy. Now
that's the best he can do. Do you know what he calls it?
ESTRAGON: The Scapegoat's Agony.
VLADIMIR: The Hard Stool.
POZZO: The Net. He thinks he's entangled in a net.
--S. Beckett, _Waiting for Godot_
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@sunyabvc
------------------------------
Date: 21 Oct 86 14:57:12 GMT
From: ritcv!rocksvax!rocksanne!sunybcs!colonel@rochester.arpa (Col.
G. L. Sicherman)
Subject: Re: Searle & ducks
> I. What is "understanding", or "ducking" the issue...
>
> If it looks like a duck, swims like a duck, and
> quacks like a duck, then it is *called* a duck. If you cut it open and
> find that the organs are something other than a duck's, *then*
> maybe it shouldn't be called a duck. What it should be called becomes
> open to discussion (maybe dinner).
>
> The same principle applies to "understanding".
No, this principle applies only to "facts"--things that anybody can
observe, in more or less the same way. If you say, "Look! A duck!"
and everybody else says "I don't see anything," what are you to believe?
If it feels like a bellyache, don't conclude that it's a bellyache.
There may be an inner meaning to deal with! Appendicitis, gallstones,
trichinosis, you've been poisoned, Cthulhu is due any minute ...
This kind of argument always arises when technology develops new
capabilities. Bell: "Listen! My machine can talk!" Epiktistes: "No,
it can only reproduce the speech of somebody else." It's something
new--we must argue over what to call it. Any name we give it will
be metaphorical, invoking an analogy with human behavior, or something
else. The bottom line is that the thing is not a man; no amount of
simulation and dissimulation will change that.
When people talk of Ghosts I don't mention the Apparition by which I
am haunted, the Phantom that shadows me about the streets, the image
or spectre, so familiar, so like myself, which lurks in the plate-
glass of shop-windows, or leaps out of mirrors to waylay me.
--L. P. Smith
--
Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: colonel@sunybcs, csdsiche@sunyabvc
------------------------------
Date: 21 Oct 86 16:47:53 GMT
From: ssc-vax!bcsaic!michaelm@BEAVER.CS.WASHINGTON.EDU
Subject: Re: Searle, Turing, Symbols, Categories
>Stevan Harnad writes:
> ...The objective of the turing test is to judge whether the candidate
> has a mind, not whether it is human or drinks motor oil.
In a related vein, if I recall my history correctly, the Turing test has been
applied several times in history. One occasion was the encounter between the
New World and the Old. I believe there was considerable speculation on the
part of certain European groups (fueled, one imagines, by economic motives) as
to whether the American Indians had souls. The (Catholic) church ruled that
they did, effectively putting an end to the controversy. The question of
whether they had souls was the historical equivalent to the question of
whether they had mind and/or intelligence, I suppose.
I believe the Turing test was also applied to orangutans, although I don't
recall the details (except that the orangutans flunked).
As an interesting thought experiment, suppose a Turing test were done with a
robot made to look like a human, and a human being who didn't speak English--
both over a CCTV, say, so you couldn't touch them to see which one was soft,
etc. What would the robot have to do in order to pass itself off as human?
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 21 Oct 86 13:29:09 GMT
From: mcvax!ukc!its63b!hwcs!aimmi!gilbert@seismo.css.gov (Gilbert
Cockton)
Subject: Re: Searle, AI, NLP, understanding, ducks
In article <1919@well.UUCP> jjacobs@well.UUCP (Jeffrey Jacobs) writes:
>
>Most so-called "understanding" is the result of training and
>education. We are taught "procedures" to follow to
>arrive at a desired result/conclusion. Education is primarily a
>matter of teaching "procedures", whether it be mathematics, chemistry
>or creative writing. The *better* understood the field, the more "formal"
>the procedures. Mathematics is very well understood, and
>consists almost entirely of "formal procedures".
This is contentious and smacks of modelling all learning procedures
in terms of a single subject, i.e. mathematics. I can't think of a
more horrible subject to model human understanding on, given the
inhumanity of most mathematics!
Someone with as little as a week of curriculum studies could flatten
this assertion instantly. NO respectable curriculum theory holds that
there is a single form of knowledge to which all bodies of human
experience conform with decreasing measures of formal success. In the
UK, it is official curriculum policy to initiate children into
several `forms' of knowledge (mathematics, physical science,
technology, humanities, aesthetics, religion and the other one).
The degree to which "understanding" is accepted as procedural rote
learning varies from discipline to discipline. Your unsupported
equivalence between understanding and formality ("The *better* understood the
field, the more "formal" the procedures") would not last long in the
hands of social and religious studies, history, literature, craft/design
and technology or art teachers. Despite advances in LISP and
connection machines, no-one has yet formally modelled any of these areas to
the satisfaction of their skilled practitioners. I find it strange
that AI workers who would struggle to write a history/literature/design
essay to the satisfaction of a recognised authority are naive enough to believe
that they could program a machine to write one.
Many educational psychologists and experienced teachers would completely
reject your assertions on the ground that unpersonalised cookbook-style
passively-internalised formalisms, far from being a sign of understanding,
actually constitute the exact opposite of understanding. For me, the term
`understanding' cannot be applied to anything that someone has learnt until
they can act on this knowledge within the REAL world (no text book
problems or ineffective design rituals), justify their action in terms of this
knowledge and finally demonstrate integration of the new knowledge with their
existing views of the world (put it in their own words).
Finally, your passive view of understanding cannot explain creative
thought. Granted, you say `Most so-called "understanding"', but I
would challenge any view that creative thought is exceptional -
the mark of great and noble scientists who cannot yet be modelled by
LISP programs. On the contrary, much of our daily lives has to be
highly creative because our poor understanding of the world forces us to
creatively fill in the gaps left by our inadequate formal education.
Show me one engineer who has ever designed something from start to
finish 100% according to the book. Even where design codes exist, as
in bridge-building, much is left to the imagination. No formal prescription
of behaviour will ever fully constrain the way a human will act.
In situations where it is meant to, such as the military, folk spend a
lot of time pretending either to have done exactly what they were told
or to have said exactly what they wanted to be done. Nearer to home, find me
one computer programmer whose understanding is based 100% on formal procedures.
Even the most formal programmers will be lucky to be in program-proving mode
more than 60% of the time. So I take it that they don't `understand' what
they're doing the other 40% of the time? Maybe, but if this is the case, then
all we've revealed are differences in our dictionaries. Who gave you the
formal procedure for ascribing meaning to the word "understanding"?
>This leads to the obvious conclusion that humans do not
>*understand* natural language very well.
>The lack of understanding of natural languages is also empirically
>demonstrable. Confusion about the meaning
>of a person's words, intentions etc can be seen in every interaction
... over the net!
Words MEAN something, and what they do mean is relative to the speakers and
the situation. The lack of formal procedures has NOTHING to do with
breakdowns in inter-subjective understanding. It is wholly due to
inabilities to view and describe the world in terms other than one's own.
--
Gilbert Cockton, Scottish HCI Centre, Ben Line Building, Edinburgh, EH1 1TN
JANET: gilbert@uk.ac.hw.aimmi ARPA: gilbert%aimmi.hw.ac.uk@cs.ucl.ac.uk
UUCP: ..!{backbone}!aimmi.hw.ac.uk!gilbert
------------------------------
End of AIList Digest
********************
∂26-Oct-86 2349 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #235
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 26 Oct 86 23:49:16 PST
Date: Sun 26 Oct 1986 22:10-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #235
To: AIList@SRI-STRIPE
AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 235
Today's Topics:
Queries - GURU & Knowledge-Based Management Tools &
IF/PROLOG Memory Expansion,
Binding - Integrated Inference Machines,
Philosophy - The Analog/Digital Distinction,
Bibliography - AI Lab Technical Reports
----------------------------------------------------------------------
Date: 23 Oct 86 02:21:50 GMT
From: v6m%psuvm.bitnet@ucbvax.Berkeley.EDU
Subject: OPINIONS REQUESTED ON GURU
I'd appreciate any comments the group has on the AI-based package <GURU>.
Vincent Marchionni
V6M at PSUVM via BITNET
or
ACIG
1 Valley Forge Plaza
Valley Forge, PA 19487
Thanks, Vince
------------------------------
Date: 24 Oct 1986 21:35-EDT
From: cross@afit-ab
Subject: Knowledge-based management tools query
Are there any PC-based shells that integrate simple rule bases, database
systems (a la dBASE III), and spreadsheets? Does anyone have any
experience with GURU? Any information would be appreciated. Will
be starting some work here towards the design of an intelligent
assistant for a program manager. Any pointers to papers or other
references would also be appreciated. Thanks in advance.
Steve Cross
------------------------------
Date: 24 Oct 86 16:09:05 GMT
From: dual!islenet!humu!uhmanoa!aloha1!shee@ucbvax.Berkeley.EDU (shee)
Subject: ifprolog.
We have IF/PROLOG version 3.0 running under Unix on an HP-9000 machine. We
are looking for ways to increase the memory capacity of IF/PROLOG so that there
is no stack overflow for our knowledge-based AI programs.
------------------------------
Date: Sun, 26 Oct 86 00:56:18 edt
From: gatech!ldi@rayssd.ray.com (Louis P. DiPalma)
Subject: Re: Address???
Address for Integrated Inference Machines is as follows:
Integrated Inference Machines
1468 E. Katella Avenue
Anaheim, California 92805
Phone: (714) 978-6776
------------------------------
Date: 23 Oct 86 17:20:00 GMT
From: hp-pcd!orstcs!tgd@hplabs.hp.com (tgd)
Subject: Re: The Analog/Digital Distinction: Soli
Here is a rough try at defining the analog vs. digital distinction.
In any representation, certain properties of the representational medium are
exploited to carry information. Digital representations tend to exploit
fewer properties of the medium. For example, in digital electronics, a 0
could be defined as anything below 0.2 volts and a 1 as anything above 4 volts.
This is a simple distinction. An analog representation of a signal (e.g.,
in an audio amplifier) requires a much finer grain of distinctions--it
exploits the continuity of voltage to represent, for example, the loudness
of a sound.
A related notion of digital and analog can be obtained by considering what
kinds of transformations can be applied without losing information. Digital
signals can generally be transformed in more ways--precisely because they do
not exploit as many properties of the representational medium. Hence, if we
add 0.1 volts to a digital 0 as defined above, the result will either still be
0 or else be undefined (and hence detectable). A digital 1 remains
unchanged under the addition of 0.1 volts. However, the analog signal would be
changed under ANY addition of voltage.
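[Dietterich's threshold example can be sketched in a few lines. The 0.2 V and
4 V thresholds come from his message; the function name and the sample voltages
are illustrative assumptions of mine, not part of his post. -- Ed.]

```python
def digital_read(volts):
    """Decode a voltage using the example thresholds: below 0.2 V is 0,
    above 4 V is 1, anything in between is undefined (detectably corrupt)."""
    if volts < 0.2:
        return 0
    if volts > 4.0:
        return 1
    return None  # undefined region -- the corruption is detectable

# A digital 0 survives a 0.1 V perturbation, and so does a digital 1,
# because the representation exploits only two coarse voltage bands.
assert digital_read(0.05 + 0.1) == 0
assert digital_read(4.5 + 0.1) == 1

# An analog signal exploits the continuity of voltage, so ANY added
# voltage changes the represented value.
analog = 2.70
assert analog + 0.1 != analog
```

The point of the sketch is the asymmetry: the same perturbation that is
invisible to the digital reading always alters the analog one.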
--Tom Dietterich
------------------------------
Date: Wed 22 Oct 86 09:38:53-CDT
From: AI.CHRISSIE@R20.UTEXAS.EDU
Subject: AI Lab Technical Reports
[Forwarded from the UTexas-20 bboard by Laws@SRI-STRIPE.]
Following is a listing of the reports available from the AI Lab.
Reports are available from Chrissie in Taylor Hall 4.130D. An annotated
list is also available upon request either on-line or hardcopy.
TECHNICAL REPORT LISTING
Artificial Intelligence Laboratory
University of Texas at Austin
Taylor Hall 2.124
Austin, Texas 78712
(512) 471-9562
September 1986
All reports furnished free of charge
AI84-01 Artificial Intelligence Project at The University of Texas at Austin,
Gordon S. Novak and Robert L. Causey, et al., 1984.
AI84-02 Computing Discourse Conceptual Coherence: A Means to Contextual
Reference Resolution, Ezat Karimi, August 1984.
AI84-03 Translating Horn Clauses From English, Yeong-Ho Yu, August 1984.
AI84-04 From Menus to Intentions in Man-Machine Dialogue, Robert F. Simmons,
November 1984.
AI84-05 A Text Knowledge Base for the AI Handbook, Robert F. Simmons,
December 1983.
AI85-02 Knowledge Based Contextual Reference Resolution for Text
Understanding, Michael Kavanaugh Smith, January 1985.
AI85-03 Learning Problem Solving: A Proposal for Continued Research, Bruce
W. Porter, March 1985.
AI85-04 Using and Revising Learned Concept Models: A Research Proposal, Bruce
W. Porter, May 1985.
AI85-05 A Self Organizing Retrieval System for Graphs, Robert A. Levinson,
May 1985.
AI85-06 Lisp Programming Lecture Notes, Gordon S. Novak, Jr., July 1985.
AI85-07 Heuristic and Formal Methods in Automatic Program Debugging, William
R. Murray, June 1985. (To appear in IJCAI85 Proceedings.)
AI85-08 A General Heuristic Bottom-up Procedure for Searching AND/OR Graphs,
Vipin Kumar, August 1985.
AI85-09 A General Paradigm for AND/OR Graph and Game Tree Search. Vipin
Kumar, August 1985.
AI85-10 Parallel Processing for Artificial Intelligence, Vipin Kumar, 1985.
AI85-11 Branch-AND-Bound Search, Vipin Kumar, 1985.
AI85-12 Computational Treatment of Metaphor in Text Understanding: A First
Approach, Olivier Winghart, August 1985.
AI85-13 Computer Science and Medical Information Retrieval, Robert Simmons,
1985.
AI85-14 Technologies for Machine Translation, Robert Simmons, August 1985.
AI85-15 The Knower's Paradox and the Logics of Attitudes, Nicholas Asher and
Hans Kamp, August 1985.
AI85-16 Negotiated Interfaces for Software Reusability, Rick Hill, December
1985.
AI85-17 The Map-Learning Critter, Benjamin J. Kuipers, December 1985.
AI85-18 Menu-Based Creation of Procedures for Display of Data, Man-Lee Wan,
December 1985.
AI85-19 Explanation of Mechanical Systems Through Qualitative Simulation,
Stuart Laughton, December 1985.
AI86-20 Experimental Goal Regression: A Method for Learning Problem Solving
Heuristics, Bruce W. Porter and Dennis Kibler, January 1986.
AI86-21 GT: A Conjecture Generator for Graph Theory, Wing-Kwong Wong,
January 1986.
AI86-22 An Intelligent Backtracking Algorithm for Parallel Execution of Logic
Programs, Yow-Jian Lin, Vipin Kumar and Clement Leung, March 1986.
AI86-23 A Parallel Execution Scheme for Exploiting AND-parallelism of Logic
Programs, Yow-Jian Lin and Vipin Kumar, March 1986.
AI86-24 Qualitative Simulation as Causal Explanation, Benjamin J. Kuipers,
April 1986.
AI86-25 Fault Diagnosis Using Qualitative Simulation, Ray Bareiss and Adam
Farquhar, April 1986.
AI86-26 Symmetric Rules for Translation of English and Chinese, Wanying Jin
and Robert F. Simmons, May 1986.
AI86-27 Automatic Program Debugging for Intelligent Tutoring Systems, William
R. Murray, June, 1986. (PhD dissertation)
AI86-28 The Role of Inversion, Clefting and PP-Fronting in Relating Discourse
Elements, Mark V. Lapolla, July 1986.
AI86-29 A Theory of Argument Coherence, Wing-Kwong C. Wong, July 1986.
AI86-30 Metaphorical Shift and The Induction of Similarities, Phillipe
M. Alcouffe, July 1986. (Master's thesis)
AI86-31 A Rule Language for the GLISP Programming System, Christopher
A. Rath, August 1986. (Master's thesis)
AI86-32 Talus: Automatic Program Debugging for Intelligent Tutoring Systems,
William R. Murray, August 1986.
AI86-33 New Algorithms for Dependency-Directed Backtracking, Charles
J. Petrie, September, 1986. (Master's thesis)
AI86-34 An Execution Model for Exploiting AND-Parallelism in Logic Programs,
Yow-Jian Lin and Vipin Kumar, September 1986.
AI86-35 PROTOS: An Experiment in Knowledge Acquisition for Heuristic
Classification Tasks, Bruce W. Porter and E. Ray Bareiss, August
1986.
------------------------------
End of AIList Digest
********************
∂27-Oct-86 0145 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #236
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 27 Oct 86 01:44:12 PST
Date: Sun 26 Oct 1986 22:18-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #236
To: AIList@SRI-STRIPE
AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 236
Today's Topics:
Administrivia - Mod.ai Followup Problem,
Philosophy - Replies from Stevan Harnad to Mozes, Cugini, and Kalish
----------------------------------------------------------------------
Date: Sun 26 Oct 86 17:11:45-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Mod.ai Followup Problem
The following five messages are replies by Stevan Harnad to some of the
items that have appeared in AIList. These five had not made it to the
AIList@SRI-STRIPE mailbox and so were never forwarded to the digest or
to mod.ai. Our current hypothesis is that the Usenet readnews command
does not correctly deliver followup ("f") messages when used to reply
to mod.ai items. Readers with this problem can send replies to net.ai
or to sri-stripe!ailist.
-- Kenneth Laws
------------------------------
Date: Mon, 27 Oct 86 00:15:36 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: for posting on mod.ai (reply to E. Mozes, reconstructed)
On mod.ai, in Message-ID: <8610160605.AA09268@ucbvax.Berkeley.EDU>
on 16 Oct 86 06:05:38 GMT, eyal@wisdom.BITNET (Eyal mozes) writes:
> I don't see your point at all about "categorical
> perception". You say that "differences between reds and differences
> between yellows look much smaller than equal-sized differences that
> cross the red/yellow boundary". But if they look much smaller, this
> means they're NOT "equal-sized"; the differences in wave-length may be
> the same, but the differences in COLOR are much smaller.
There seems to be a problem here, and I'm afraid it might be the
mind/body problem. I'm not completely sure what you mean. If all
you mean is that sometimes equal-sized differences in inputs can be
made unequal by internal differences in how they are encoded, embodied
or represented -- i.e., that internal physical differences of some
sort may mediate the perceived inequalities -- then I of course agree.
There are indeed innate color-detecting structures. Moreover, it is
the hypothesis of the paper under discussion that such internal
categorical representations can also arise as a consequence of
learning.
If what you mean, however, is that there exist qualitative differences among
equal-sized input differences with no internal physical counterpart, and
that these are in fact mediated by the intrinsic nature of phenomenological
COLOR -- that discontinuous qualitative inequalities can occur when
everything physical involved, external and internal, is continuous and
equal -- then I am afraid I cannot follow you.
My own position on color quality -- i.e., "what it's like" to
experience red, etc. -- is that it is best ignored, methodologically.
Psychophysical modeling is better off restricting itself to what we CAN
hope to handle, namely, relative and absolute judgments: What differences
can we tell apart in pairwise comparison (relative discrimination) and
what stimuli or objects can we label or identify (absolute
discrimination)? We have our hands full modeling this. Further
concerns about trying to capture the qualitative nature of perception,
over and above its performance consequences [the Total Turing Test]
are, I believe, futile.
This position can be dubbed "methodological epiphenomenalism." It amounts
to saying that the best empirical theory of mind that we can hope to come
up with will always be JUST AS TRUE of devices that actually have qualitative
experiences (i.e., are conscious) as of devices that behave EXACTLY AS IF
they had qualitative experiences (i.e., turing-indistinguishably), but do
not (if such insentient look-alikes are possible). The position is argued
in detail in the papers under discussion.
> Your whole theory is based on the assumption that perceptual qualities
> are something physical in the outside world (e.g., that colors ARE
> wave-lengths). But this is wrong. Perceptual qualities represent the
> form in which we perceive external objects, and they're determined both
> by external physical conditions and by the physical structure of our
> sensory apparatus; thus, colors are determined both by wave-lengths and
> by the physical structure of our visual system. So there's no apriori
> reason to expect that equal-sized differences in wave-length will lead
> to equal-sized differences in color, or to assume that deviations from
> this rule must be caused by internal representations of categories. And
> this seems to completely cut the grounds from under your theory.
Again, there is nothing for me to disagree with if you're saying that
perceived discontinuities are mediated by either external or internal
physical discontinuities. In modeling the induction and representation
of categories, I am modeling the physical sources of such
discontinuities. But there's still an ambiguity in what you seem to be
saying, and I don't think I'm mistaken if I think I detect a note of
dualism in it. It all hinges on what you mean by "outside world." If
you only mean what's physically outside the device in question, then of
course perceptual qualities cannot be equated with that. It's internal
physical differences that matter.
But that doesn't seem to be all you mean by "outside world." You seem
to mean that the whole of the physical world is somehow "outside" conscious
perception. What else can you mean by the statement that "perceptual
qualities represent the form [?] in which we perceive external
objects" or that "there's no...reason to expect that...[perceptual]
deviations from [physical equality]...must be caused by internal
representations of categories."
Perhaps I have misunderstood, but either this is just a reminder that
there are internal physical differences one must take into account too
in modeling the induction and representation of categories (but then
they are indeed taken into account in the papers under discussion, and
I can't imagine why you would think they would "completely cut the
ground from under" my theory) or else you are saying something metaphysical
with which I cannot agree.
One last possibility may have to do with what you mean by
"representation." I use the word eclectically, especially because the
papers are arguing for a hybrid representation, with the symbolic
component grounded in the nonsymbolic. So I can even agree with you
that I doubt that mere symbolic differences are likely to be the sole
cause of psychophysical discontinuities, although, being physically
embodied, they are in principle sufficient. I hypothesize, though,
that nonsymbolic differences are also involved in psychophysical
discontinuities.
> My second criticism is that, even if "categorical perception" really
> provided a base for a theory of categorization, it would be very
> limited; it would apply only to categories of perceptual qualities. I
> can't see how you'd apply your approach to a category such as "table",
> let alone "justice".
How abstract categories can be grounded "bottom-up" in concrete psychophysical
categories is the central theme of the papers under discussion. Your remarks
were based only on the summaries and abstracts of those papers. By now I
hope the preprints have reached you, as you requested, and that your
question has been satisfactorily answered. To summarize "grounding"
briefly: According to the model, (learned) concrete psychophysical categories
are formed from sampling positive and negative instances of a category
and then encoding the invariant information that will reliably identify
further instances. This might be how one learned the concrete
categories "horse" and "striped" for example. The (concrete) category
"zebra" could then be learned without need for direct perceptual
ACQUAINTANCE with the positive and negative instances by simply being
told that a zebra is a striped horse. That is, the category can
be learned by symbolic DESCRIPTION by merely recombining the labels of
the already-grounded perceptual categories.
All categorization involves some abstraction and generalization (even
"horse," and certainly "striped" did), so abstract categories such as
"goodness," "truth" and "justice" could be learned and represented by
recursion on already grounded categories, their labels and their
underlying representations. (I have no idea why you think I'd have a
problem with "table.")
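[A toy sketch of the grounding story above. The feature names and the
invariant-extraction rule are illustrative assumptions of mine, a crude
stand-in for Harnad's model, not his actual mechanism. -- Ed.]

```python
def learn_invariants(positives, negatives):
    """Keep the features shared by every positive instance and absent
    from every negative instance -- a toy version of 'encoding the
    invariant information' from sampled instances."""
    common = set.intersection(*(set(p) for p in positives))
    excluded = set().union(*(set(n) for n in negatives))
    return common - excluded

# A concrete category learned by ACQUAINTANCE with labeled instances:
horse = learn_invariants(
    positives=[{"four-legged", "hoofed", "maned"},
               {"four-legged", "hoofed", "maned", "brown"}],
    negatives=[{"four-legged", "hoofed", "horned"}])  # e.g. a cow

striped = {"striped"}  # assume this was grounded the same way

# A category learned by DESCRIPTION: "a zebra is a striped horse".
# The new category recombines already-grounded labels; no new
# perceptual instances are needed.
zebra = horse | striped
```

The design point being illustrated: the symbolic composition in the last
line only works because each label it combines is already backed by a
nonsymbolic, instance-derived representation.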
> Actually, there already exists a theory of categorization that is along
> similar lines to your approach, but integrated with a detailed theory
> of perception and not subject to the two criticisms above; that is the
> Objectivist theory of concepts. It was presented by Ayn Rand... and by
> David Kelley...
Thanks for the reference, but I'd be amazed to see an implementable,
testable model of categorization performance issue from that source...
Stevan Harnad
{allegra, bellcore, seismo, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: Sun, 26 Oct 86 11:05:47 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: Please post on mod.ai -- first of 4 (cugini)
In Message-ID: <8610190504.AA08059@ucbvax.Berkeley.EDU> on mod.ai
CUGINI, JOHN <cugini@nbs-vms.ARPA> replies to my claim that
>> there is no rational reason for being more sceptical about robots'
>> minds (if we can't tell their performance apart from that of people)
>> than about (other) peoples' minds.
with the following:
> One (rationally) believes other people are conscious BOTH because
> of their performance and because their internal stuff is a lot like
> one's own.
This is a very important point and a subtle one, so I want to make
sure that my position is explicit and clear: I am not denying that
there exist some objective data that correlate with having a mind
(consciousness) over and above performance data. In particular,
there's (1) the way we look and (2) the fact that we have brains. What
I am denying is that this is relevant to our intuitions about who has a
mind and why. I claim that our intuitive sense of who has a mind is
COMPLETELY based on performance, and our reason can do no better. These
other correlates are only inessential afterthoughts, and it's irrational
to take them as criteria.
My supporting argument is very simple: We have absolutely no intuitive
FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
since spun an implementable brain theory from our introspective
armchairs.) Consequently, our belief that brains are evidence of minds and
that the absence of a brain is evidence of the absence of a mind is based
on a superficial black-box correlation. It is no more rational than
being biased by any other aspect of appearance, such as the color of
the skin, the shape of the eyes or even the presence or absence of a tail.
To put it in the starkest terms possible: We wouldn't know what device
was and was not relevantly brain-like if it were staring us in the face
-- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
the Total Turing Test). That's the only thing our intuitions have to
go on, and our reason has nothing more to offer either.
To take one last pass at setting the relevant intuitions: We know what
it's like to DO (and be able to do) certain things. Similar
performance capacity is our basis for inferring that what it's like
for me is what it's like for you (or it). We do not know anything
about HOW we do any of those things, or about what would count as the
right way and the wrong way (functionally speaking). Inferring that
another entity has a mind is an intuitive judgment based on performance.
It's called the (total) turing test. Inferring HOW other entities
accomplish their performance is ordinary scientific inference. We're in
no rational position to prejudge this profound and substantive issue on
the basis of the appearance of a lump of grey jelly to our untutored but
superstitious minds.
> [W]e DO have some idea about the functional basis for mind, namely
> that it depends on the brain (at least more than on the pancreas, say).
> This is not to contend that there might not be other bases, but for
> now ALL the minds we know of are brain-based, and it's just not
> dazzlingly clear whether this is an incidental fact or somewhat
> more deeply entrenched.
The question isn't whether the fact is incidental, but what its
relevant functional basis is. In other words, what is it about the
brain that's relevant and what is incidental? We need the causal basis
for the correlation, and that calls for a hefty piece of creative
scientific inference (probably in theoretical bio-engineering). The
pancreas is no problem, because it can't generate the brain's
performance capacities. But it is simply begging the question to say
that brain-likeness is an EXTRA relevant source of information in
turing-testing robots, when we have no idea what's relevantly brain-like.
People were sure (as sure as they'll ever be) that other people had
minds long before they ever discovered they had brains. I myself believed
the brain was just a figure of speech for the first dozen or so years of
my life. Perhaps there are people who don't learn or believe the news
throughout their entire lifetimes. Do you think these people KNOW any
less than we do about what does or doesn't have a mind? Besides, how
many people do you think could really pick out a brain from a pancreas
anyway? And even those who can have absolutely no idea what it is
about the brain that makes it conscious; and whether a cow's brain or
a horse-shoe crab's has it; or whether any other device, artificial or
natural, has it or lacks it, or why. In the end everyone must revert to
the fact that a brain is as a brain does.
> Why is consciousness a red herring just because it adds a level
> of uncertainty?
Perhaps I should have said indeterminacy. If my arguments for
performance-indiscernibility (the turing test) as our only objective
basis for inferring mind are correct, then there is a level of
underdetermination here that is in no way comparable to that of, say,
the unobservable theoretical entities of physics (say, quarks, or, to
be more trendy, perhaps strings). Ordinary underdetermination goes
like this: How do I know that your theory's right about the existence
and presence of strings? Because WITH them the theory succeeds in
accounting for all the objective data (let's pretend), and without
them it does not. Strings are not "forced" by the data, and other
rival theories may be possible that work without them. But until these
rivals are put forward, normal science says strings are "real" (modulo
ordinary underdetermination).
Now try to run that through for consciousness: How do I know that your
theory's right about the existence and presence of consciousness (i.e.,
that your model has a mind)? "Because its performance is
turing-indistinguishable from that of creatures that have minds." Is
your theory dualistic? Does it give consciousness an independent,
nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
the objective data just as well (indeed, turing-indistinguishably)
without consciousness? "Well..."
That's indeterminacy, or radical underdetermination, or what have you.
And that's why consciousness is a methodological red herring.
> Even though any correlations will ultimately be grounded on one side
> by introspection reports, it does not follow that we will never know,
> with reasonable assurance, which aspects of the brain are necessary for
> consciousness and which are incidental...Now at some level of difficulty
> and abstraction, you can always engineer anything with anything... But
> the "multi-realizability" argument has force only if its obvious
> (which it ain't) that the structure of the brain at a fairly high
> level (eg neuron networks, rather than molecules), high enough to be
> duplicated by electronics, is what's important for consciousness.
We'll certainly learn more about the correlation between brain
function and consciousness, and even about the causal (functional)
basis of the correlation. But the correlation will really be between
function and performance capacity, and the rest will remain the intuitive
inference or leap of faith it always was. And since ascertaining what
is relevant about brain function and what is incidental cannot depend
simply on its BEING brain function, but must instead depend, as usual, on
the performance criterion, we're back where we started. (What do you
think is the basis for our confidence in introspective reports? And
what are you going to say about robots' introspective reports...?)
I don't know what you mean, by the way, about always being able to
"engineer anything with anything at some level of abstraction." Can
anyone engineer something to pass the robotic version of the Total
Turing Test right now? And what's that "level of abstraction" stuff?
Robots have to do their thing in the real world. And if my
groundedness arguments are valid, that ain't all done with symbols
(plus add-on peripheral modules).
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: Sun, 26 Oct 86 11:11:08 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: For posting on mod.ai - 2nd of 4 (reply to Kalish)
In mod.ai, Message-ID: <861016-071607-4573@Xerox>,
"charles←kalish.EdServices"@XEROX.COM writes:
> About Stevan Harnad's two kinds of Turing tests [linguistic
> vs. robotic]: I can't really see what difference the I/O methods
> of your system makes. It seems that the relevant issue is what
> kind of representation of the world it has.
I agree that what's at issue is what kind of representation of the
world the system has. But you are prejudging "representation" to mean
only symbolic representation, whereas the burden of the papers in
question is to show that symbolic representations are "ungrounded" and
must be grounded in nonsymbolic processes (nonmodularly -- i.e., NOT
by merely tacking on autonomous peripherals).
> While I agree that, to really understand, the system would need some
> non-purely conventional representation (not semantic if "semantic"
> means "not operable on in a formal way" as I believe [given the brain
> is a physical system] all mental processes are formal then "semantic"
> just means governed by a process we don't understand yet), giving and
> getting through certain kinds of I/O doesn't make much difference.
"Non-purely conventional representation"? Sounds mysterious. I've
tried to make a concrete proposal as to just what that hybrid
representation should be like.
"All mental processes are formal"? Sounds like prejudging the issue again.
It may help to be explicit about what one means by formal/symbolic:
Symbolic processing is the manipulation of (arbitrary) physical tokens
in virtue of their shape on the basis of formal rules. This is also
called syntactic processing. The formal goings-on are also
"semantically interpretable" -- they have meanings; they are connected
to objects in the outside world that they are about. The Searle
problem is that so far the only devices that do semantic
interpretations intrinsically are ourselves. My proposal is that
grounding the representations nonmodularly in the I/O connection may provide
the requisite intrinsic semantics. This may be the "process we don't
understand yet." But it means giving up the idea that "all mental
processes are formal" (which in any case does not follow, at least on
the present definition of "formal," from the fact that "the brain is a
physical system").
> Two for instances: SHRDLU operated on a simulated blocks world. The
> modifications to make it operate on real blocks would have been
> peripheral and not have affected the understanding of the system.
This is a variant of the "Triviality of Transduction (& A/D, & D/A,
and Effectors)" Argument (TT) that I've responded to in another
iteration. In brief, it's toy problems like SHRDLU that are trivial.
The complete translatability of internal symbolic descriptions into
the objects they stand for (and the consequent partitioning of
the substantive symbolic module and the trivial nonsymbolic
peripherals) may simply break down, as I predict, for life-size
problems approaching the power to pass the Total Turing Test.
To put it another way: There is a conjecture implicit in the solutions
to current toy/microworld problems, namely, that something along
essentially the same lines will suitably generalize to the
grown-up/macroworld problem. What I'm saying amounts to a denial of
that conjecture, with reasons. It is not a reply to me to simply
restate the conjecture.
> Also, all systems take analog input and give analog output. Most receive
> finger pressure on keys and return directed streams of ink or electrons.
> It may be that a robot would need more "immediate" (as opposed to
> conventional) representations, but it's neither necessary nor sufficient
> to be a robot to have those representations.
The problem isn't marrying symbolic systems to any old I/O. I claim
that minds are "dedicated" systems of a particular kind: The kind
capable of passing the Total Turing Test. That's the only necessity and
sufficiency in question.
And again, the mysterious word "immediate" doesn't help. I've tried to
make a specific proposal, and I've accepted the consequences, namely, that it's
just not going to be a "conventional" marriage at all, between a (substantive)
symbolic module and a (trivial) nonsymbolic module, but rather a case of
miscegenation (or a sex-change operation, or some other suitably mixed
metaphor). The resulting representational system will be grounded "bottom-up"
in nonsymbolic function (and will, I hope, display the characteristic
"hybrid vigor" that our current pure-bred symbolic and nonsymbolic processes
lack), as I've proposed (nonmetaphorically) in the papers under discussion.
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂27-Oct-86 0331 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #237
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 27 Oct 86 03:31:46 PST
Date: Sun 26 Oct 1986 22:26-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #237
To: AIList@SRI-STRIPE
AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 237
Today's Topics:
Philosophy - Harnad's Replies to Krulwich and Paul &
Turing Test & Symbolic Reasoning
----------------------------------------------------------------------
Date: Sun, 26 Oct 86 11:45:17 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: For posting on mod.ai 3rd of 4 (reply to Krulwich)
In mod.ai, Message-ID: <8610190504.AA08083@ucbvax.Berkeley.EDU>,
17 Oct 86 17:29:00 GMT, KRULWICH@C.CS.CMU.EDU (Bruce Krulwich) writes:
> i disagree...that symbols, and in general any entity that a computer
> will process, can only be dealt with in terms of syntax. for example,
> when i add two integers, the bits that the integers are encoded in are
> interpreted semantically to combine to form an integer. the same
> could be said about a symbol that i pass to a routine in an
> object-oriented system such as CLU, where what is done with
> the symbol depends on it's type (which i claim is it's semantics)
Syntax is ordinarily defined as formal rules for manipulating physical
symbol tokens in virtue of their (arbitrary) SHAPES. The syntactic goings-on
are semantically interpretable, that is, the symbols are also
manipulable in virtue of their MEANINGS, not just their shapes.
Meaning is a complex and ill-understood phenomenon, but it includes
(1) the relation of the symbols to the real objects they "stand for" and
(2) a subjective sense of understanding that relation (i.e., what
Searle has for English and lacks for Chinese, despite correctly
manipulating its symbols). So far the only ones who seem to
do (1) and (2) are ourselves. Redefining semantics as manipulating symbols
in virtue of their "type" doesn't seem to solve the problem...
> i think that the reason that computers are so far behind the
> human brain in semantic interpretation and in general "thinking"
> is that the brain contains a hell of a lot more information
> than most computer systems, and also the brain makes associations
> much faster, so an object (ie, a thought) is associated with
> its semantics almost instantly.
I'd say you're pinning a lot of hopes on "more" and "faster." The
problem just might be somewhat deeper than that...
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: Sun, 26 Oct 86 11:59:28 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: For posting on mod.ai, 4th of 4 (reply to Danny Paul)
Topic: Machines: Natural and Man-Made
On mod.ai, in Message-ID: <8610240550.AA15402@ucbvax.Berkeley.EDU>,
22 Oct 86 14:49:00 GMT, NGSTL1::DANNY%ti-eg.CSNET@RELAY.CS.NET (Daniel Paul)
cites Daniel Simon's earlier reply in AI digest (V4 #226):
>One question you haven't addressed is the relationship between intelligence and
>"human performance". Are the two synonymous? If so, why bother to make
>artificial humans when making natural ones is so much easier (not to mention
>more fun)?
Daniel Paul then adds:
> This is a question that has been bothering me for a while. When it
> is so much cheaper (and possible now, while true machine intelligence
> may be just a dream) why are we wasting time training machines when we
> could be training humans instead? The only reasons that I can see are
> that intelligent systems can be made small enough and light enough to
> sit on bombs. Are there any other reasons?
Apart from the two obvious ones -- (1) so machines can free people to do
things machines cannot yet do, if people prefer, and (2) so machines can do
things that people can only do less quickly and efficiently, if people
prefer -- there is the less obvious reply already made to Daniel
Simon: (3) because trying to get machines to display all our performance
capacity (the Total Turing Test) is our only way of arriving at a functional
understanding of what kinds of machines we are, and how we work.
[Before the cards and letters pour in to inform me that I've used
"machine" incoherently: A "machine," (writ large, Deus Ex Machina) is
just a physical, causal system. Present-generation artificial machines
are simply very primitive examples.]
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 23 Oct 86 15:39:08 GMT
From: husc6!rutgers!princeton!mind!harnad@eddie.mit.edu (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
michaelm@bcsaic.UUCP (michael maxwell) writes:
> I believe the Turing test was also applied to orangutans, although
> I don't recall the details (except that the orangutans flunked)...
> As an interesting thought experiment, suppose a Turing test were done
> with a robot made to look like a human, and a human being who didn't
> speak English-- both over a CCTV, say, so you couldn't touch them to
> see which one was soft, etc. What would the robot have to do in order
> to pass itself off as human?
They should all three in principle have a chance of passing. For the orang,
we would need to administer the ecologically valid version of the
test. (I think we have reasonably reliable cross-species intuitions
about mental states, although they're obviously not as sensitive as
our intraspecific ones, and they tend to be anthropocentric and
anthropomorphic -- perhaps necessarily so; experienced naturalists are
better at this, just as cross-cultural ethnographic judgments depend on
exposure and experience.) We certainly have no problem in principle with
foreign speakers (the remarkable linguist, polyglot and bible-translator
Kenneth Pike has a "magic show" in which, after less than an hour of "turing"
interactions with a speaker of any of the [shrinking] number of languages he
doesn't yet know, they are babbling mutually intelligibly before your very
eyes), although most of us would have some trouble with such a
feat, at least without practice.
Severe aphasics and mental retardates may be tougher cases, but there
perhaps the orang version would stand us in good stead (and I don't
mean that disrespectfully; I have an extremely high regard for the mental
states of our fellow creatures, whether human or nonhuman).
As to the robot: Well that's the issue here, isn't it? Can it or can it not
pass the appropriate total test that its appropriate non-robot counterpart
(be it human or ape) can pass? If so, it has a mind, by this criterion (the
Total Turing Test). I certainly wouldn't dream of flunking either a human or
a robot just because he/it didn't feel soft, if his/its total performance
was otherwise turing indistinguishable.
Stevan Harnad
princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
Date: 23 Oct 86 14:52:56 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa
Subject: Re: extended Turing test
colonel@sunybcs.UUCP (Col. G. L. Sicherman) writes:
> [I]t's misleading to propose that a veridical model of ←our← behavior
> ought to have our "performance capacities"...I do not (yet) quarrel
> with the principle that the model ought to have our abilities. But to
> speak of "performance capacities" is to subtly distort the fundamental
> problem. We are not performers!
"Behavioral ability"/"performance capacity" -- such fuss over
black-box synonyms, instead of facing the substantive problem of
modeling the functional substrate that will generate them.
------------------------------
Date: 24 Oct 86 19:02:42 GMT
From: spar!freeman@decwrl.dec.com
Subject: Re: Searle, Turing, Symbols, Categories
Possibly a more interesting test would be to give the computer
direct control of the video bit map and let it synthesize an
image of a human being.
------------------------------
Date: Fri, 24 Oct 86 22:54:58 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: turing test
PHayes@SRI-KL.ARPA (Pat Hayes) writes:
> Daniel R. Simon has worries about the Turing test. A good place to find
> intelligent discussion of these issues is Turings original article in MIND,
> October 1950, v.59, pages 433 to 460.
That article was in part a response to G. Jefferson's Lister Oration,
which appeared as "The mind of mechanical man" in the British Medical
Journal for 1949 (pp. 1105-1121). It's well worth reading in its own
right. Jefferson presents the humane issues at least as well as Turing
presents the scientific issues, and I think that Turing failed to
rebut, or perhaps to comprehend, all Jefferson's objections.
------------------------------
Date: Fri, 24 Oct 86 18:09 CDT
From: PADIN%FNALB.BITNET@WISCVM.WISC.EDU
Subject: THE PSEUDOMATH OF THE TURING TEST
Let's put the Turing test into pseudo-mathematical terms.
Define the set Q={question1,question2,...}. Note that
for each q in Q, there is an infinite number of responses (the responses
need not be relevant to the question; they just need to be responses).
In fact, we can define a set R={every possible response to any question}, i.e.,
R={r1,r2,r3,...}.
We can define the Turing test as a function T that maps questions
q in Q to a set RR of responses (i.e., RR is a subset of R).
We can then write
T(q) --> RR
which states that there exists a function T that maps a question q to a
set of responses RR. The existence of T for all questions q is evidence for
the presence of mind, since T chooses, out of an infinite number of responses,
those responses that are appropriate to an entity with a mind.
Note: T is the set
{(question1,{resp1-1,resp2-1,...,respn-1}),
(question2,{resp1-2,resp2-2,...,respk-2}),
...
(questionj,{resp1-j,resp2-j,...,respj-h}),
}
We use a set (RR) of responses because for most questions there is more
than one response. There are times, of course, when there is just one element in
RR, such as the response to the question, 'Is it raining outside?'.
Now a problem arises: who is to decide which subset of responses indicates
the existence of mind? Who will decide which set is appropriate to indicate
that an entity other than ourselves is out there responding?
For example, if we define the set RR as
RR={r(i) | r(i) is randomly chosen from R}
then to each question q in the set of questions used to determine the existence
of mind, we get a response that appears to be random; that is, we can make no
sense of the response with respect to the question asked. It would seem that
this would be sufficient to label the respondent a mindless entity. However,
it is the exact response one would expect of a schizophrenic. Now what do we
do? Do we choose to define schizophrenics as mindless people? That is not
morally palatable. Do we choose to allow the 'random set' to be used as
a criterion for assessing mindedness? That choice is not
acceptable either, because it simply results in what may be called Turing noise,
yielding no useful information.
If we are unwilling to accept another's decision as to the set of
acceptable responses, then we are compelled to make the determination ourselves.
And if we are to use our own judgment in determining the presence of another mind,
then we must accept the possibility of error inherent in the human decision-making
process. At best, then, the Turing test will be able to give us only a
hint of the presence of another mind: a level of probability.
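[The formalization above is small enough to sketch directly. This is an editor's illustration, not part of the original message; the question and response strings, and the function name `mindlike`, are invented.]

```python
# Sketch of the formalization above: T maps each question q to the set
# RR of responses deemed appropriate to an entity with a mind.

T = {
    "is it raining outside?": {"yes", "no"},
    "how are you?": {"fine, thanks", "not bad", "terrible"},
}

def mindlike(question, response):
    """A response counts as evidence of mind only if it lies in T(q)."""
    return response in T.get(question, set())

print(mindlike("how are you?", "not bad"))                   # True
print(mindlike("how are you?", "purple monkey dishwasher"))  # False
```

A random responder, drawing uniformly from all of R, would mostly fall outside RR -- the "Turing noise" the posting describes.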
------------------------------
Date: 26 Oct 86 20:56:29 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
freeman@spar.UUCP (Jay Freeman) replies:
> Possibly a more interesting test [than the robotic version of
> the Total Turing Test] would be to give the computer
> direct control of the video bit map and let it synthesize an
> image of a human being.
Manipulating digital "images" is still only symbol-manipulation. It is
(1) the causal connection of the transducers with the objects of the
outside world, including (2) any physical "resemblance" the energy
pattern on the transducers may have to the objects from which they
originate, that distinguishes robotic functionalism from symbolic
functionalism and that suggests a solution to the problem of grounding
the otherwise ungrounded symbols (i.e., the problem of "intrinsic vs.
derived intentionality"), as argued in the papers under discussion.
A third reason why internally manipulated bit-maps are not a new way
out of the problems with the symbolic version of the turing test is
that (3) a model that tries to explain the functional basis of our
total performance capacity already has its hands full with anticipating
and generating all of our response capacities in the face of any
potential input contingency (i.e., passing the Total Turing Test)
without having to anticipate and generate all the input contingencies
themselves. In other words, it's enough of a problem to model the mind
and how it interacts successfully with the world without having to
model the world too.
Stevan Harnad
{seismo, packard, allegra} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂27-Oct-86 0524 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #238
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 27 Oct 86 05:24:39 PST
Date: Sun 26 Oct 1986 22:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #238
To: AIList@SRI-STRIPE
AIList Digest Monday, 27 Oct 1986 Volume 4 : Issue 238
Today's Topics:
Seminars - Toward Meta-Level Problem Solving (CMU) &
Diagnosing Multiple Faults (SU) &
Using Scheme for Discrete Simulation (SMU) &
Ramification and Qualification in the Blocks World (SU) &
Knowledge Programming using Functional Representations (SRI),
Conference - AAAI Workshop on Uncertainty in AI, 1987
----------------------------------------------------------------------
Date: 22 October 1986 1027-EDT
From: Elaine Atkinson@A.CS.CMU.EDU
Subject: Seminar - Toward Meta-Level Problem Solving (CMU)
SPEAKER: Prof. Kurt VanLehn, Psychology Dept., CMU
TITLE: "Towards meta-level problem solving"
DATE: Thursday, October 23
TIME: 4:00 p.m.
PLACE: Adamson Wing, Baker Hall
ABSTRACT: This talk presents preliminary evidence for a new model
of procedure following. Following a mentally held procedure is
a common activity. It takes about 12 procedures to fill an order
at McDonalds. Perhaps 50,000 procedures are followed daily in
running an aircraft carrier. Despite its ubiquity and economic
importance, little is known about procedure following. The folk
model is that people have an interpreter, similar to the
interpreters of Lisp, OPS5 or ACT*. The most common interpreters
in cognitive science are hierarchical, in that they employ a
goal stack or a goal tree as part of their temporary state. A
new model of procedure following will be sketched based on the
idea that procedure following is meta-level problem solving.
The problem is to get a procedure to execute. The operators
do things like set goals, pop them, etc. The state descriptions
are things like "goal1 is more recent than goal2." Different
problem spaces correspond to different interpreters: the goal
stack, goal tree and goal agenda are three different meta-level
problem spaces. We present data based on protocols from 25
subjects executing procedures that show that (1) different
subjects have different interpreters (stack and agenda are the
most common) and (2) some subjects change interpretation
strategy in the midst of execution. Although these data
do not unequivocally refute the folk model of procedure following,
they receive a simpler, more elegant interpretation under the
meta-level problem solving model.
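[Two of the "meta-level problem spaces" mentioned in the abstract -- the goal stack and the goal agenda -- can be contrasted in a toy interpreter. This sketch is the editor's illustration, not from the talk; the procedure and its step names are invented.]

```python
from collections import deque

# A goal STACK interpreter expands subgoals depth-first (LIFO);
# a goal AGENDA interpreter queues them breadth-first (FIFO).
procedure = {
    "make-order": ["fill-bag", "take-payment"],
    "fill-bag": ["add-burger", "add-fries"],
}

def follow(root, strategy):
    goals, executed = deque([root]), []
    while goals:
        goal = goals.popleft()
        subgoals = procedure.get(goal)
        if subgoals is None:
            executed.append(goal)                 # primitive step
        elif strategy == "stack":
            goals.extendleft(reversed(subgoals))  # push on front, in order
        else:                                     # "agenda"
            goals.extend(subgoals)                # append at back
    return executed

print(follow("make-order", "stack"))   # → ['add-burger', 'add-fries', 'take-payment']
print(follow("make-order", "agenda"))  # → ['take-payment', 'add-burger', 'add-fries']
```

The two strategies execute the same primitives in different orders, which is the kind of behavioral difference the protocol study uses to distinguish subjects' interpreters.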
------------------------------
Date: Thu, 23 Oct 86 15:32:19 pdt
From: Premla Nangia <pam@su-whitney.ARPA>
Subject: Seminar - Diagnosing Multiple Faults (SU)
Speaker: Johan de Kleer
Intelligent Systems Laboratory
Xerox
Palo Alto
Title: Diagnosing Multiple Faults
Time: 4.15 p.m.
Place: Cedar Hall Conference Room
Diagnostic tasks require determining the differences between a
model of an artifact and the artifact itself. The differences between
the manifested behavior of the artifact and the predicted behavior of
the model guide the search for the differences between the artifact and
its model. The diagnostic procedure presented in this paper is
model-based, inferring the behavior of the composite device from
knowledge of the structure and function of the individual components
comprising the device. The system (GDE --- General Diagnostic Engine)
has been implemented and tested on examples in the domain of
troubleshooting digital circuits.
This research makes several novel contributions: First, the system
diagnoses failures due to multiple faults. Second, failure candidates
are represented and manipulated in terms of minimal sets of violated
assumptions, resulting in an efficient diagnostic procedure. Third, the
diagnostic procedure is incremental, exploiting the iterative nature of
diagnosis. Fourth, a clear separation is drawn between diagnosis and
behavior prediction, resulting in a domain (and inference procedure)
independent diagnostic procedure. Fifth, GDE combines model-based
prediction with sequential diagnosis to propose measurements to localize
the faults. The usually required conditional probabilities are computed
from the structure of the device and models of its components. This
capability results from a novel way of incorporating probabilities and
information theory with the context mechanism provided by
Assumption-Based Truth Maintenance.
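[The abstract's second contribution -- candidates represented as minimal sets of violated assumptions -- can be illustrated with a brute-force sketch. This is the editor's toy version, not GDE itself; the assumption names A1..A3 are invented, and real GDE uses the ATMS rather than enumeration.]

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """Given minimal conflict sets (assumption sets that cannot all hold),
    each diagnosis is a minimal hitting set of the conflicts."""
    universe = sorted(set().union(*conflicts))
    found = []
    for r in range(1, len(universe) + 1):
        for combo in combinations(universe, r):
            s = set(combo)
            # keep s if it intersects every conflict and contains no
            # smaller candidate already found (minimality)
            if all(s & c for c in conflicts) and not any(f < s for f in found):
                found.append(s)
    return found

# Two conflicts among assumptions A1..A3 yield two minimal diagnoses:
# {A2} alone, or {A1, A3} together.
print(minimal_hitting_sets([{"A1", "A2"}, {"A2", "A3"}]))
```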
------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Using Scheme for Discrete Simulation (SMU)
Using Scheme for Discrete Simulation
Edward E. Ferguson, Texas Instruments,
Location 315 Sic, Time 2PM
Scheme is a lexically-scoped dialect of LISP that gives the programmer
access to continuations, a fundamental capability upon which general
control structures can be built. This presentation will show how continuations
can be used to extend Scheme to have the basic features of a discrete
simulation language. Topics that will be covered include discrete
simulation techniques, addition of simulation capability to a general-purpose
language, why Scheme is a good base language for simulation, and the
complete Scheme text for a simulation control package.
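[The abstract describes building simulation control from Scheme continuations. As a rough analogue only -- in Python generators, not Scheme, and invented by the editor rather than taken from the talk -- a process can suspend by yielding a delay, handing control back to an event-list scheduler that resumes it later; the generator plays the role of a restricted continuation.]

```python
import heapq

def process(name, delays, trace):
    for d in delays:
        yield d             # suspend for d simulated time units
        trace.append(name)  # record each resumption

def simulate(makers):
    """Minimal event-list scheduler driving generator 'continuations'."""
    trace, events, order, clock = [], [], 0, 0
    for make in makers:
        events.append((0, order, make(trace)))
        order += 1
    heapq.heapify(events)
    while events:
        clock, _, p = heapq.heappop(events)
        try:
            order += 1
            heapq.heappush(events, (clock + next(p), order, p))
        except StopIteration:
            pass            # process finished
    return clock, trace

print(simulate([lambda t: process("a", [2, 2], t),
                lambda t: process("b", [3], t)]))  # → (4, ['a', 'b', 'a'])
```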
------------------------------
Date: 24 Oct 86 1704 PDT
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - Ramification and Qualification in the Blocks World
(SU)
RAMIFICATION AND QUALIFICATION IN THE BLOCKS WORLD
Matt Ginsberg
David Smith
Thursday, October 30, 4pm
MJH 252
In this talk, we discuss the need to infer properties of actions
from general domain information. Specifically, we discuss the
need to deduce the indirect consequences of actions (the
ramification problem), and the need to determine inferentially
under what circumstances a particular action will be blocked
because its successful execution would involve the violation of
a domain constraint (the qualification problem).
We present a formal description of action that addresses these
problems by considering a single model of the domain, and updating
it to reflect the successful execution of actions. The bulk of the
talk will involve the investigation of simple blocks world problems
that existing formalisms have difficulty dealing with, including
the Hanks-McDermott problem, and two new problems that we describe
as "the dumbbell and the pulley".
------------------------------
Date: Fri 24 Oct 86 08:31:01-PDT
From: Margaret Olender <OLENDER@SRI-WARBUCKS.ARPA>
Subject: Seminar - Knowledge Programming using Functional
Representations (SRI)
KNOWLEDGE PROGRAMMING USING FUNCTIONAL REPRESENTATIONS
Tore Risch
Syntelligence
10:00 AM, WEDNESDAY, October 29
SRI International, Building E, Room EJ228
SYNTEL is a novel knowledge representation language that provides
traditional features of expert system shells within a pure functional
programming paradigm. However, it differs sharply from existing
functional languages in many ways, ranging from its ability to deal
with uncertainty to its evaluation procedures. A very flexible
user-interface facility, tightly integrated with the SYNTEL
interpreter, gives the knowledge engineer full control over both form
and content of the end-user system. SYNTEL executes in both LISP
machine and IBM mainframe/workstation environments, and has been used
to develop large knowledge bases dealing with the assessment of
financial risks. This talk will present an overview of its
architecture, as well as describe the real-world problems that
motivated its development.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
P.S. Note change in day and time....
------------------------------
Date: Thu, 23 Oct 86 23:23:40 pdt
From: levitt@ads.ARPA (Tod Levitt)
Subject: AAAI Workshop on Uncertainty in AI, 1987
CALL FOR PARTICIPATION
Third Workshop on: "Uncertainty in Artificial Intelligence"
Seattle, Washington, July 10-12, 1987 (preceding AAAI conf.)
Sponsored by: AAAI
This is the third annual AAAI workshop on Uncertainty in AI. The
first two workshops have been successful and productive, involving many
of the top researchers in the field. The 1985 workshop proceedings have
just appeared as a book, "Uncertainty in Artificial Intelligence", in
the North-Holland Machine Intelligence and Pattern Recognition series.
The general subject is automated or interactive reasoning under
uncertainty.
This year's emphasis is on the representation and control of
uncertain knowledge. One effective way to make points, display
tradeoffs and clarify issues in representation and control is through
demonstration in applications, so these are especially encouraged,
although papers on theory are also welcome. The workshop provides an
opportunity for those interested in uncertainty in AI to present their
ideas and participate in discussions with leading researchers in the
field. Panel discussions will provide a lively cross-section of views.
Papers are invited on the following topics:
* Applications--including both results and implementation
difficulties; experimental comparison of alternatives
* Knowledge-based and procedural representations of uncertain information
* Uncertainty in model-based reasoning and automated planning
* Learning under uncertainty; theories of uncertain induction
* Heuristics and control in evidentially based systems
* Non-deterministic human-machine interaction
* Uncertain inference procedures
* Other uncertainty in AI issues.
Papers will be carefully reviewed. Space is limited, so
prospective attendees are urged to submit a paper with the intention of
active participation in the workshop. Preference will be given to papers
that have demonstrated their approach in real applications; however,
underlying certainty calculi and reasoning methodologies should be
supported by strong theoretical underpinnings in order to best encourage
discussion on a scientific basis. To allow more time for discussion,
most accepted papers will be included for publication and poster
sessions, but not for presentation.
Four copies of a paper or extended abstract should be sent to
the program chairman by February 10, 1987. Acceptances will be sent by
April 20, 1987. Final (camera ready) papers must be received by May 22,
1987. Proceedings will be available at the workshop.
General Chair:
Peter Cheeseman
NASA-Ames Research Center
Mail Stop 244-7
Moffett Field, CA 94035
(415)-694-6526
cheeseman@ames-pluto.arpa

Program Chair:
Tod Levitt
Advanced Decision Systems
201 San Antonio Circle, Suite 286
Mountain View, CA 94040
(415)-941-3912
levitt@ads.arpa

Arrangements Chair:
Joe Mead
KSC Inc.
228 Liberty Plaza
Rome, NY 13440
(315)-336-0500
Program Committee:
P. Bonissone, P. Cheeseman, J. Lemmer, T. Levitt, J. Pearl, R. Yager, L. Zadeh
------------------------------
End of AIList Digest
********************
∂30-Oct-86 0200 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #239
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Oct 86 02:00:45 PST
Date: Wed 29 Oct 1986 22:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #239
To: AIList@SRI-STRIPE
AIList Digest Thursday, 30 Oct 1986 Volume 4 : Issue 239
Today's Topics:
Natural Language - Nonsense Quiz,
Humor - Understanding Dogs and Dognition
----------------------------------------------------------------------
Date: 21 Oct 86 17:19:40 PDT (Tuesday)
From: Wedekind.ES@Xerox.COM
Subject: Nonsense quiz
A couple of years ago, on either this list or Human-Nets, there appeared
a short multiple-choice test which was written so that one could deduce
"best" answers based on just the form, not the content, of the questions
(in fact there wasn't much content, since almost every word over 3
letters long was a nonsense word).
If anyone has this test, I would very much like to see it (along with
any "official" answers you may have saved). If you want to see what I
receive (or, better yet, if you have any original questions to add to
the test), just let me know.
thanks,
Jerry
------------------------------
Date: Wed 29 Oct 86 22:48:43-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Nonsense Quiz
Here's a copy of the quiz taken from Human-Nets. Those interested
in such things should get a copy of
R.M. Balzer, Human Use of World Knowledge, ISI/RR-73-7,
Information Sciences Institute, March 1974, Arpa Order
No. 2223/1.
It contains fairly detailed analysis of text such as "Sooner or later
everyone runs across the problem of pottling something to a sprock
inside the lorch."
Date: 9 Sep 1981
From: research!alice!xchar [ Bell Labs, Murray Hill ]
Reply-to: "research!alice!xchar care of" <CSVAX.upstill at Berkeley>
Subject: test-taking skills
In HUMAN-NETS V4 #37, Greg Woods pointed out that high scores on
multiple-choice tests may (as in his case) reflect highly developed
test-taking skills rather than great intelligence. The test below
illustrates Greg's thesis that one can often make correct choices that
are "not based at all on...knowledge of the subject matter." I got
this test from Joseph Kruskal (Bell Labs), who got it from Clyde
Kruskal (NYU Courant Institute), who got it from Jerome Berkowitz
(Courant Institute). Unfortunately, Prof. Berkowitz is currently out
of town, so I cannot trace its origin any farther back.
I will supply the generally accepted answers, and perhaps some
explanations, later.
--Charlie Harris
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
The following is a hypothetical examination on which you could get
every item correct by knowing some of the pitfalls of test
construction. See how well you can do! (Circle the letter preceding
the correct response.)
1. The purpose of the cluss in furmpaling is to remove
a.cluss-prags c. cloughs
b. tremalis d. plumots
2. Trassig is true when
a. lusps trasses the vom
b. the viskal flans, if the viskal is donwil or zortil
c. the begul
d. dissles lisk easily
3. The sigia frequently overfesks the trelsum because
a. all sigia are mellious
b. sigias are always vortil
c. the reelsum is usually tarious
d. no trelsa are feskable
4. The fribbled breg will minter best with an
a. derst c. sortar
b. morst d. ignu
5. Among the reasons for tristal doss are
a. The sabs foped and the foths tinzed
b. the dredges roted with the orots
c. few racobs were accapted in sluth
d. most of the polats were thonced
6. Which of the following is/are always present
when trossels are being gruven?
a. rint and vost c. shum and vost
b. vost d. vost and plone
7. The mintering function of the ignu is most
effectively carried out in connection with
a. razma tol c. the fribbled breg
b. the grosing stantol d. a frally slush
8. a. c.
b. d.
Date: 15 Sep 1981 15:14:39-PDT
From: ihuxo!hobs at Berkeley (John Hobson)
Reply-to: "ihuxo!hobs in care of" <CSVAX.upstill at Berkeley>
Subject: test-taking skills
Charlie--
The hypothetical exam on test-taking skills that you submitted to
HUMAN-NETS Digest V4 #46 has been an object of much interest here at
Indian Hill. A number of us have taken the test and we would like to
see just how well we did. The answers and reasons for those answers
are as follows:
1. The purpose of the cluss in furmpaling is to remove
a. cluss-prags c. cloughs
b. tremalis d. plumots
1--a. The cluss is mentioned in the question and in the answer.
2. Trassig is true when
a. lusps trasses the vom
b. the viskal flans, if the viskal is donwil or zortil
c. the begul
d. dissles lisk easily
2--a. The word trassig in the question and the verb trasses in the
answer.
3. The sigia frequently overfesks the trelsum because
a. all sigia are mellious
b. sigias are always vortil
c. the reelsum is usually tarious
d. no trelsa are feskable
3--c. The key word here is "usually", along with "frequently" in the
question. Anyway, it is often best to give a non-absolute answer in
case there is an exception.
4. The fribbled breg will minter best with an
a. derst c. sortar
b. morst d. ignu
4--d. The giveaway here is the article "an", since "ignu" is the only
answer starting with a vowel.
5. Among the reasons for tristal doss are
a. The sabs foped and the foths tinzed
b. the dredges roted with the orots
c. few racobs were accapted in sluth
d. most of the polats were thonced
5--a. This is a bit more subtle, but we think that since the question
calls for "reasons" in the plural and (a) is the only answer with more
than one reason, that the answer is (a).
6. Which of the following is/are always present when trossels are
being gruven?
a. rint and vost c. shum and vost
b. vost d. vost and plone
6--b. Vost is mentioned in all possible answers, so vost must always
be present.
7. The mintering function of the ignu is most effectively carried out
in connection with
a. razma tol c. the fribbled breg
b. the grosing stantol d. a frally slush
7--c. Since in question 4 (above), the fribbled breg was mintering
with an ignu, the thing mintering with the ignu is, of course, the
fribbled breg.
8. a. c.
b. d.
We haven't the foggiest. Perhaps "all of the above".
I once took a multiple-guess test in English History where the last
question was:
The only British Prime Minister ever assassinated was:
a. Clement Atlee e. None of the above
b. Spencer Perceval f. One or more of the above
c. The Duke of Wellington g. Don't know
d. All of the above h. Don't care
b, f, g and h were accepted as correct answers.
John Hobson
ihuxo!hobs
Bell Labs -- Indian Hill
Date: 18 Sep 1981 12:13 PDT
From: Kolling at PARC-MAXC
Subject: test-taking skills
About that test.....
I think the answer to 2 is b, not a. Either a or b is possible (not c
because it isn't grammatically correct, and not d because it's fuzzy
due to "easily"). Looking at the answers as follows: 1. a 2. a or b
3. c 4. d 5. a 6. b 7. c 8. ? Note the pattern a,b,c,d, so I think 2
is b and 8 is d.
Karen (Now you know how I got through school.)
Date: 29 September 1981 0858-EDT (Tuesday)
From: Mary.Shaw at CMU-10A
Subject: Test-taking skills
I agree with Karen on the answers: a, b, c, d, a, b, c, d. John's
reasons are correct except for #s 2 and 8. Karen is right about 8
(it's the pattern). The reason #2 is b rather than a is that option b
is markedly dissimilar from all the others. (One of the rules of
test-writing is to avoid making the right answer stand out because
it's much longer or shorter than the others, especially if it's longer
because of a qualifying clause as in b here.)
Mary
------------------------------
Date: Tue, 21 Oct 86 21:07:33 PDT
From: cottrell@nprdc.arpa (Gary Cottrell)
Subject: Reply to Winograd and Flores
SEMINAR
Understanding Dogs and Dognition:
A New Foundation for Design
Garrison W. Cottrell
Department of Dog Science
Condominium Community College of Southern California
There is a crisis in Dog-Human relations, as has been
evidenced by recent attempts to make dogs more "user-friendly"
(see Programming the User-Friendly Dog, Cottrell 1985a). A new
approach has appeared (Whineandpoop and Flossy, 1986) that claims
that previous attempts at Dog-Human Interfaces have floundered on
a basic misunderstanding of the Dog. The problem has been that
we have approached the Dog as if he was one of us - and he
certainly is not. Their perusal of the philosophies of
Holedigger and Mateyourauntie has led them to a new
understanding: A West Coast Understanding. There is no Objective
Reality[1] that we form internal representations of, rather,
organisms are structurally coupled[2] to their environment, the
so-called "seamless web" theory of cognition. Thus the
inside/outside dichotomy that has plagued AI researchers and dogs
for years is a false one[3]. This has led them to a whole new
way of understanding how dogs should be programmed.
In the past we have assumed some internal representation in
the dog's head (see Modelling the Intentional Behavior of the
Dog, Cottrell 1984b). In this new view, the reason dogs are so
dense is not that they have impoverished internal
representations, but that they don't have internal
representations. Instead, the dog is structurally coupled to the
world - he moves about embedded in the ooze of the environment,
and naturally, it slows him down. Not only that, but it is the
wrong environment - the human one, leading to continual
breakdown[4]. Thus our problem is in forming a consensual domain
with another species. We have to place ourselves in their domain
to hear them - this is termed "listening in the backyard".
We feel that there is much to be gained from combining their
view with the connectionist approach[5]. The problem is
combining the intensional programming of evolution with
extensional programming by the owner. Connectionist theories of
learning combined with considerations of "listening in the
backyard" suggest that if we simply present the dog with many
examples of the desired input-output behavior within the
backyard, we will get the desired result.
____________________
[1]Actually, Californians have known this for years.
[2]Note that this is to be distinguished from the structural
coupling that produces new dogs from old ones.
[3]Dogs have often followed Mateyourauntie in this, ignoring
the inside/outside dichotomy. These considerations may eliminate
the basis for the continence-performance distinction (Hutchins,
1986).
[4]The field of Dog-Machine Interfaces attempts to deal with
such problems as the poor design of the doorknob - a lever would
help reduce the inside/outside barrier. Others feel that this
research is misdirected; the doorknob is designed that way pre-
cisely because it acts as a species filter, keeping dogs out of
restaurants and movie theatres.
[5]Their work also suggests applying the theory of speech acts
to the command interface. Thus, we can classify much more than
simple Directives. For example, "You've had it now, Jellybean!"
is a commissive - the speaker is committed to a future course of
action. The dog will usually respond with an attempt to withdraw
from the dialogue, but the speaker rejects his withdrawal.
"You're in the doghouse, Bean" is a declarative - the speaker
brings about a correspondence between the propositional content
of this and reality simply by uttering it.
P.S. As usual, troff source (1 page laser printer output) on request to:
gary cottrell
Institute for Cognitive Science, UCSD
cottrell@nprdc (ARPA)
{ucbvax,decvax,akgua,dcdwest}!sdcsvax!sdics!cottrell (USENET)
------------------------------
End of AIList Digest
********************
∂30-Oct-86 0420 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #240
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Oct 86 04:20:34 PST
Date: Wed 29 Oct 1986 23:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #240
To: AIList@SRI-STRIPE
AIList Digest Thursday, 30 Oct 1986 Volume 4 : Issue 240
Today's Topics:
Queries - PD Parser for Simple English & Public Domain Prolog &
Faculty Compensation & Hierarchical Constraints &
Model-Based Reasoning & Monotonic Reasoning,
Neural Networks - Simulation & Nature of Computation
----------------------------------------------------------------------
Date: 26 Oct 86 19:10:57 GMT
From: trwrb!orion!heins@ucbvax.Berkeley.EDU (Michael Heins)
Subject: Seeking PD parser for simple English sentences.
I am looking for public domain software which I can use to help me parse
simple English sentences into some kind of standardized representation.
I guess what I am looking for would be a kind of sentence diagrammer
which would not have to have any deep knowledge of the meanings of the
nouns, verbs, adjectives, etc.
The application is for a command interface to a computer, for use by
novice users. C routines would be ideal. Also, references to published
algorithms would be useful. Thanks in advance.
--
...!hplabs!sdcrdcf!trwrb!orion!heins
We are a way for the universe to know itself. -- Carl Sagan
------------------------------
Date: 27 Oct 86 14:26:33 GMT
From: ihnp4!drutx!mtuxo!mtune!mtunf!mtx5c!mtx5d!mtx5a!mtx5e!mtx5w!drv@
ucbvax.Berkeley.EDU
Subject: NEED PUBLIC DOMAIN PROLOG
A friend of mine needs a copy of a public domain
Prolog that will run on a VAX 11/780 under Unix.
If such a program exists, please contact me and
I will help make arrangements to get it sent to
him.
Dennis R. Vogel
AT&T Information Systems
Middletown, NJ
(201) 957-4951
------------------------------
Date: Tue, 28 Oct 86 09:31 EST
From: Norm Badler <Badler@cis.upenn.edu>
Subject: request for information
If you are a faculty member or a researcher at a University, I would like
to have a BRIEF response to the following question:
Do you have an "incentive" or "reward" or "benefit" plan that returns
to you some amount of your (external) research money for your own
University discretionary use?
If the answer is NO, that information would be useful. If YES, then a brief
account would be appreciated. If you don't want to type much, send me
your phone number and I will call you for the information.
Thanks very much!
Norm Badler
Badler@cis.upenn.edu
(215)898-5862
------------------------------
Date: Mon, 27 Oct 86 11:47:52 EST
From: Jim Hendler <hendler@brillig.umd.edu>
Subject: seeking info on multi-linear partial orderings
I recently had a paper rejected from a conference that discussed, among other
things, using a set of hierarchical networks for constraint propagation (i.e.
propagating the information through several levels of network simultaneously).
One of the reviewers said "they apply a fairly standard AI technique..."
and I wonder about this. I thought I was up on various constraint propagation
techniques, but wonder if anyone has a pointer to work (preferably in
a qualitative reasoning system) that discusses the use of multi-layer
constraint propagation?
thanks much
Jim Hendler
Ass't Professor
U of Md
College Park, Md. 20742
[I would check into the MIT work (don't have the reference handy, but
some of it's in the two-volume AI: An MIT Perspective) on modeling
electronic circuits. All but the first papers used multiple views of
subsystems to permit propagation of constraints at different granularities.
Subsequent work on electronic fault diagnosis (e.g., Randy Davis) goes
even further. Other work in "pyramidal" parsing (speech, images, line
drawings) has grown from the Hearsay blackboard architecture. -- KIL]
------------------------------
Date: 27 Oct 86 14:13 EST
From: SHAFFER%SCOVCB.decnet@ge-crd.arpa
Subject: Model-based Reasoning
Hello:
I am looking for articles and books which describe the
theory of Model-based Reasoning, MBR. Here at GE we
have an interest in MBR for our next generation of KEE-based
ESEs. I will publish a summary of my findings sometime
in the future. Also, I would be interested in any topics
related to MBR and its uses.
Thanks,
Earl Shaffer
GE - VFSC - Bld 100
Po Box 8555
Phila, PA 19101
------------------------------
Date: 28 Oct 86 09:52 EST
From: SHAFFER%SCOVCB.decnet@ge-crd.arpa
Subject: Monotonic Reasoning
I am somewhat new to AI and I am confused about the
definition of "non-monotonic" reasoning, as in the
documentation in Inference's ART system. It says that
features allow for non-monotonic reasoning, but does
not say what that type of reasoning is, or how it
differs from monotonic reasoning, if there is such a
thing.
Earl Shaffer
GE - VFSC
Po box 8555
Phila , Pa 19101
[Monotonic reasoning is a process of logical inference using only
true axioms or statements. Nonmonotonic reasoning uses statements
believed to be true, but which may later prove to be false. It is
therefore necessary to keep track of all chains of support for each
conclusion so that the conclusion can be revoked if its basis
statements are revoked. Other names for nonmonotonic reasoning are
default reasoning and truth maintenance. -- KIL]
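[The chain-of-support idea in the note above can be sketched concretely.
This is an editorial toy illustration only, not a real truth-maintenance
system; the facts and the single rule are invented:]

```python
# Toy sketch of nonmonotonic reasoning: a conclusion is believed only
# while every premise in its chain of support is still believed.
facts = {"bird(tweety)"}                          # currently believed premises
support = {"flies(tweety)": {"bird(tweety)"}}     # conclusion -> its premises

def believed():
    beliefs = set(facts)
    for conclusion, premises in support.items():
        if premises <= facts:          # all supporting premises still held
            beliefs.add(conclusion)
    return beliefs

assert "flies(tweety)" in believed()      # default conclusion currently holds
facts.discard("bird(tweety)")             # a basis statement is revoked...
assert "flies(tweety)" not in believed()  # ...and the conclusion with it
```

[Adding the retraction makes the set of beliefs shrink, which is exactly
what cannot happen under monotonic inference. -- Ed.]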
------------------------------
Date: 28 Oct 86 21:05:49 GMT
From: uwslh!lishka@rsch.wisc.edu (a)
Subject: Re: simulating a neural network
I just read an interesting short blurb in the most recent BYTE issue
(the one with the graphics board on the cover)...it was in Bytelines or
something. Now, since I skimmed it, my info is probably a little sketchy,
but here's about what it said:
Apparently Bell Labs (I think) has been experimenting with neural
network-like chips, with resistors replacing bytes (I guess). They started
out with about 22 'neurons' and have gotten up to 256 or 512 (can't
remember which) 'neurons' on one chip now. Apparently these 'neurons' are
supposed to run much faster than human neurons...it'll be interesting to see
how all this works out in the end.
I figured that anyone interested in the neural network program might
be interested in the article...check Byte for actual info. Also, if anyone
knows more about this experiment, I would be interested, so please mail me
any information at the below address.
--
Chris Lishka /l lishka@uwslh.uucp
Wisconsin State Lab of Hygiene -lishka%uwslh.uucp@rsch.wisc.edu
\{seismo, harvard,topaz,...}!uwvax!uwslh!lishka
------------------------------
Date: 27 Oct 86 19:50:58 GMT
From: yippee.dec.com!glantz@decwrl.dec.com
Subject: Re: Simulating neural networks
*********************
Another good reference is:
Martin, R., Lukton, A. and Salthe, S.N., "Simulation of
Cognitive Maps, Concept Hierarchies, Learning by Simile, and
Similarity Assessment in Homogeneous Neural Nets," Proceedings
of the 1984 Summer Computer Simulation Conference, Society for
Computer Simulation, vol. 2, 808-821.
In this paper, Martin discusses (among other things) simulating
the effects of neurotransmitters and inhibitors, which can have
the result of generating goal-seeking behavior, which is closely
linked to the ability to learn.
Mike Glantz
Digital Equipment Centre Technique Europe
BP 29 Sophia Antipolis
06561 Valbonne CEDEX
France
My employer is not aware of this message.
*********************
------------------------------
Date: 27 Oct 86 17:36:23 GMT
From: zeus!berke@locus.ucla.edu (Peter Berke)
Subject: Glib "computation"
In article <1249@megaron.UUCP> wendt@megaron.UUCP writes:
>Anyone interested in neural modelling should know about the Parallel
>Distributed Processing pair of books from MIT Press. They're
>expensive (around $60 for the pair) but very good and quite recent.
>
>A quote:
>
>Relaxation is the dominant mode of computation. Although there
>is no specific piece of neuroscience which compels the view that
>brain-style computation involves relaxation, all of the features
>we have just discussed have led us to believe that the primary
>mode of computation in the brain is best understood as a kind of
>relaxation system in which the computation proceeds by iteratively
>seeking to satisfy a large number of weak constraints. Thus,
>rather than playing the role of wires in an electric circuit, we
>see the connections as representing constraints on the co-occurrence
>of pairs of units. The system should be thought of more as "settling
>into a solution" than "calculating a solution". Again, this is an
>important perspective change which comes out of an interaction of
>our understanding of how the brain must work and what kinds of processes
>seem to be required to account for desired behavior.
>
>(Rumelhart & McClelland, Chapter 4)
>
Isn't 'computation' a technical term? Do R&Mc prove that PDP is
equivalent to computation? Would Turing agree that "settling into
a solution" is computation? Some people have tried to show that
symbols and symbol processing can be represented in neural nets,
but I don't think anyone has proved anything about the problems
they purportedly "solve," at least not to the extent that Turing
did for computers in 1936, or Church in the same year for lambda
calculus.
Or are R&Mc using 'computing' to mean 'any sort of machination whatever'?
And is that a good idea?
Church's Thesis, that computing and lambda-conversion (or whatever he
calls it) are both equivalent to what we might naturally consider
calculable, could be extended to say that neural nets "settle" into
the same solutions for the same class of problems. Or, one could
maintain, as neural netters tend to implicitly, that "settling" into
solutions IS what we might naturally consider calculable, rather than
being merely equivalent to it. These are different options.
The first adds "neural nets" to the class of formalisms which can
express solutions equivalent to each other in "power," and is thus
a variant on Church's thesis. The second actually refutes Church's
Thesis, by saying this "settling" process is clearly defined and
that it can realize a different (or non-comparable) class of problems,
in which case computation would not be (provably) equivalent to it.
Of course, if we could show BOTH that:
(1) "settling" is equivalent to "computing" as formally defined by Turing,
and (2) that "settling" IS how brains work,
then we'd have a PROOF of Church's Thesis.
Until that point it seems a bit misleading or misled to refer to
"settling" as "computation."
Peter Berke
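[For readers who want the "settling into a solution" of the Rumelhart &
McClelland quote made concrete, here is an editorial toy relaxation
network in Python. The three units and their weights are invented for
illustration and come from no particular model in the book:]

```python
# Toy "settling" computation: symmetric weights encode soft constraints
# (here, all three units prefer to agree); units flip until no flip helps.
weights = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}

def w(i, j):
    return weights.get((min(i, j), max(i, j)), 0.0)

def energy(state):
    # Lower energy means fewer violated constraints.
    return -sum(w(i, j) * state[i] * state[j] for i, j in weights)

def settle(state):
    changed = True
    while changed:
        changed = False
        for i in range(len(state)):
            net = sum(w(i, j) * state[j] for j in range(len(state)) if j != i)
            new = 1 if net >= 0 else -1
            if new != state[i]:
                state[i], changed = new, True
    return state

final = settle([-1, 1, 1])   # the network settles rather than calculates
```

[Each update is a local repair, not a step of a symbolic derivation;
whether iterating such repairs to a fixed point counts as "computation"
in Turing's sense is precisely Berke's question. -- Ed.]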
------------------------------
End of AIList Digest
********************
∂30-Oct-86 0724 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #241
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Oct 86 07:24:04 PST
Date: Wed 29 Oct 1986 23:10-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #241
To: AIList@SRI-STRIPE
AIList Digest Thursday, 30 Oct 1986 Volume 4 : Issue 241
Today's Topics:
Philosophy & Physics - Analog/Digital Distinction
----------------------------------------------------------------------
Date: 27 Oct 86 06:08:33 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov
Subject: Re: Defining the Analog/Digital Distinction
Tom Dietterich (orstcs!tgd) responds as follows to my challenge to
define the A/D distinction:
> In any representation, certain properties of the representational
> medium are exploited to carry information. Digital representations
> tend to exploit fewer properties of the medium. For example, in
> digital electronics, a 0 could be defined as anything below .2 volts
> and a 1 as anything above 4 volts. This is a simple distinction.
> An analog representation of a signal (e.g., in an audio amplifier)
> requires a much finer grain of distinctions--it exploits the
> continuity of voltage to represent, for example, the loudness
> of a sound.
So far so good. Analog representations "exploit" more of the properties
(e.g., continuity) of the "representational" (physical?) medium to carry
information. But then is the difference between an A and a D representation
just that one is more (exploitative) and the other less? Is it not rather that
they carry information and/or represent in a DIFFERENT WAY? In what
does that difference consist? (And what does "exploit" mean? Exploit
for whom?)
> A related notion of digital and analog can be obtained by considering
> what kinds of transformations can be applied without losing
> information. Digital signals can generally be transformed in more
> ways--precisely because they do not exploit as many properties of the
> representational medium. Hence, if we add .1 volts to a digital 0 as
> defined above, the result will either still be 0 or else be undefined
> (and hence [un]detectable). A digital 1 remains unchanged under
> addition of .1 volts. However, the analog signal would be
> changed under ANY addition of voltage.
"Preserving information under transformations" also sounds like a good
candidate. But it seems to me that preservation-under-transformation
is (or ought to be) a two-way street. Digital representations may be
robust within their respective discrete boundaries, but it hardly
sounds information-preserving to lose all the information between .2
volts and 4 volts. I would think that the invertibility of analog
transformations might be a better instance of information preservation than
the irretrievable losses of A/D. And this still seems to side-step the
question of WHAT information is preserved, and in what way, by analog
and digital representations, respectively. And should we be focusing on
representations in this discussion, or on transformations (A/A, A/D,
D/D, D/A)? Finally, what is the relation between a digital
representation and a symbolic representation?
Please keep those definitions coming.
Stevan Harnad
{allegra, bellcore, seismo, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
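[As a concrete companion to Dietterich's voltage convention quoted above
(0 below .2 volts, 1 above 4 volts): the thresholds below come from that
text; everything else is an editorial sketch:]

```python
def digitize(volts):
    """Map an analog voltage to a digital value per the quoted convention:
    0 below 0.2 V, 1 above 4.0 V, undefined in between."""
    if volts < 0.2:
        return 0
    if volts > 4.0:
        return 1
    return None  # undefined region

# Small perturbations leave the digital reading unchanged...
assert digitize(0.05) == digitize(0.05 + 0.1) == 0
assert digitize(4.5) == digitize(4.5 + 0.1) == 1
# ...but the mapping is many-to-one, hence not invertible: distinct
# analog values become indistinguishable after digitization.
assert digitize(0.01) == digitize(0.15)
```

[This is the asymmetry Harnad points at: the digital reading is robust
under small transformations exactly because the analog detail between
the thresholds is irretrievably discarded. -- Ed.]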
------------------------------
Date: 27 Oct 86 11:53:06 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: The Analog/Digital Distinction
Date: 23 Oct 86 17:20:00 GMT
From: hp-pcd!orstcs!tgd@hplabs.hp.com (tgd)
Here is a rough try at defining the analog vs. digital distinction.
[ * * * ]
I don't read all the messages on AIList, so I may have missed
something here: but isn't ``analog vs digital'' the same thing as
``continuous vs discrete''? Continuous vs discrete, in turn, can be
defined in terms of infinite vs finite partitionability. It's a
property of the measuring system, not a property of the thing being
measured.
------------------------------
Date: 27 Oct 86 15:29:01 GMT
From: alice!jj@ucbvax.Berkeley.EDU
Subject: Re: Defining the Analog/Digital Distinction
> From allegra!princeton!mind!harnad Wed Dec 31 19:00:00 1969
>
>
> Tom Dietterich (orstcs!tgd) responds as follows to my challenge to
> define the A/D distinction:
>
> > In any representation, certain properties of the representational
> > ...
> > of a sound.
>
> So far so good. Analog representations "exploit" more of the properties
> ...
> for whom?)
>
> > A related notion of digital and analog can be obtained by considering
> > ...
> > changed under ANY addition of voltage.
>
> "Preserving information under transformations" also sounds like a good
> ...
> representation and a symbolic representation?
>
> Please keep those definitions coming.
>
> Stevan Harnad
What a pleasant little bit of sophistry. Mr. Harnad asks for a definition
of "digital" and "analog", both words used in a precise way in a particular
literature. He also asks that we not use other words from that literature
to write the definition.
In other words, we are asked to define something precisely, in a language
that does not have precise values.
I suggest the first chapter of Rabiner and Gold, all of Wozencraft and Jacobs,
and perhaps a good general text on signal processing for starters. That will
define the language. Then the definition can be made.
Philosophy is wonderful; it doesn't have to have anything to do
with reality.
--
WOBEGON WON'T BE GONE, TEDDY BEAR PICNIC AT 11.
"If you love that Politician, use your Loo, use your Loo"
(ihnp4;allegra;research)!alice!jj
------------------------------
Date: 27 Oct 86 22:06:14 GMT
From: husc6!Diamond!aweinste@eddie.mit.edu (Anders Weinstein)
Subject: Re: The Analog/Digital Distinction: Soliciting Definitions
Philosopher Nelson Goodman distinguishes analog from digital symbol systems
in his book ←Languages←of←Art←. The context is a technical investigation into
the peculiar features of ←notational← systems in the arts; that is, systems
like musical notation which are used to DEFINE a work of art by dividing the
instances from the non-instances.
The following excerpts contain the relevant definitions: (Warning--I've left
out a lot of explanatory text and examples for brevity)
The second requirement upon a notational scheme, then, is that the
characters be ←finitely←differentiated←, or ←articulate←. It runs: For
every two characters K and K' and every mark m that does not belong to
both, determination that m does not belong to K or that m does not belong
to K' is theoretically possible. ...
A scheme is syntactically dense if it provides for infinitely many
characters so ordered that between each two there is a third. ... When no
insertion of other characters will thus destroy density, a scheme has no
gaps and may be called ←dense←throughout←. In what follows, "throughout" is
often dropped as understood... [in footnote:] I shall call a scheme that
contains no dense subscheme "completely discontinuous" or "discontinuous
throughout". ...
The final requirement [including others not quoted here] for a notational
system is semantic finite differentiation; that is, for every two characters
K and K' such that their compliance classes are not identical and every
object h that does not comply with both, determination that h does not
comply with K or that h does not comply with K' must be theoretically
possible.
[defines 'semantically dense throughout' and 'semantically discontinuous'
to parallel the syntactic definitions].
And his analog/digital distinction:
A symbol ←scheme← is analog if syntactically dense; a ←system← is analog if
syntactically and semantically dense. ... A digital scheme, in contrast, is
discontinuous throughout; and in a digital system the characters of such a
scheme are one-one correlated with compliance-classes of a similarly
discontinuous set. But discontinuity, though implied by, does not imply
differentiation...To be digital, a system must be not merely discontinuous
but ←differentiated← throughout, syntactically and semantically...
If only thoroughly dense systems are analog, and only thoroughly
differentiated ones are digital, many systems are of neither type.
To summarize: when a dense language is used to represent a dense domain, the
system is analog; when a discrete (Goodman's "discontinuous") and articulate
language maps a discrete and articulate domain, the system is digital.
Note that not all discrete languages are "articulate" in Goodman's sense:
Consider a language with only two characters, one of which contains all
straight marks not longer than one inch and the other of which contains all
longer marks. This is discrete but not articulate, since no matter how
precise our tests become, there will always be a mark (infinitely many, in
fact) that cannot be judged to belong to one or the other character.
For more explanation, consult the source directly (and not me).
Anders Weinstein <aweinste@DIAMOND.BBN.COM>
------------------------------
Date: 28 Oct 86 04:20:07 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU
Subject: The Analog/Digital Distinction
Steven R. Jacobs (utah-cs!jacobs) of the University of Utah CS Dept
has given me permission to post his contribution to defining the A/D
distinction. It appears below, followed at the very end by some comments
from me.
[Will someone with access please post a copy to sci.electronics?]
>> One prima facie non-starter: "continuous" vs. "discrete" physical processes.
>I apologize if this was meant to avoid discussion of continuous/discrete
>issues relating to analog/digital representations. I find it difficult
>to avoid talking in terms of "continuous" and "discrete" processes when
>discussing the difference between analog and digital signals. I am
>approaching the question from a signal processing point of view, so I
>tend to assume that "real" signals are analog signals, and other methods
>of representing signals are used as approximations of analog signals (but
>see below about a physicist's perspective). Yes, I realize you asked for
>objective definitions. For my own non-objective convenience, I will use
>analog signals as a starting point for obtaining other types of signals.
>This will assist in discussing the operations used to derive non-analog
>signals from analog signals, and in discussing the effects of the operations
>on the mathematics involved when manipulating the various types of signals
>in the time and frequency domains.
>
>The distinction of continuous/discrete can be applied to both the amplitude
>and time axes of a signal, which allows four types of signals to be defined.
>So, some "loose" definitions:
>
>Analog signal -- one that is continuous both in time and amplitude, so that
> the amplitude of the signal may change to any amplitude at any time.
> This is what many electrical engineers might describe as a "signal".
>
>Sampled signal -- continuous in amplitude, discrete in time (usually with
>	equally-spaced sampling intervals). Signal may take on any amplitude,
> but the amplitude changes only at discrete times. Sampled signals
> are obtained (obviously?) by sampling analog signals. If sampling is
> done improperly, aliasing will occur, causing a loss of information.
> Some (most?) analog signals cannot be accurately represented by a
> sampled signal, since only band-limited signals can be sampled without
> aliasing. Sampled signals are the basis of Digital Signal Processing,
> although digital signals are invariably used as an approximation of
> the sampled signals.
>
>Quantized signal -- piece-wise continuous in time, discrete in amplitude.
> Amplitude may change at any time, but only to discrete levels. All
> changes in amplitude are steps.
>
>Digital signal -- one that is discrete both in time and amplitude, and may
> change in (discrete) amplitude only at certain (discrete, usually
> uniformly spaced) time intervals. This is obtained by quantizing
> a sampled signal.
>
>Other types of signals can be made by combining these "basic" types, but
>that topic is more appropriate for net.bizarre than for sci.electronics.
>
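These four categories can be sketched numerically. In the toy Python sketch
below (an illustration of the definitions just quoted, not part of the original
message; the 3 Hz sinusoid, 100 Hz rate, and 0.25 amplitude step are arbitrary
choices), sampling discretizes the time axis and quantization discretizes the
amplitude axis:

```python
import math

def analog(t):
    # Stand-in for a continuous ("analog") signal: defined at any real t.
    return math.sin(2 * math.pi * 3.0 * t)

def sample(signal, fs, n):
    # Sampled signal: discrete time, continuous amplitude.
    return [signal(k / fs) for k in range(n)]

def quantize(x, step):
    # Quantized amplitude: snap to the nearest multiple of `step`.
    return round(x / step) * step

fs = 100.0                                       # sampling rate, Hz
sampled = sample(analog, fs, 8)                  # discrete in time only
digital = [quantize(x, 0.25) for x in sampled]   # discrete in time and amplitude
```

A quantized-only signal would instead apply `quantize` to the continuous
function's value at every instant; the digital signal applies both operations.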
>The real distinction (in my mind) between these representations is the effect
>the representation has on the mathematics required to manipulate the signals.
>
>Although most engineers and computer scientists would think of analog signals
>as the most "correct" representations of signals, a physicist might argue that
>the "quantum signal" is the only signal which corresponds to the real world,
>and that analog signals are merely a convenient approximation used by
>mathematicians.
>
>One major distinction (from a mathematical point of view) between sampled
>signals and analog signals can be best visualized in the frequency domain.
>A band-limited analog signal has a Fourier transform which is finite. A
>sampled representation of the same signal will be periodic in the Fourier
>domain. Increasing the sampling frequency will "spread out" the identical
>"clumps" in the FT (fourier transform) of a sampled signal, but the FT
>of the sampled signal will ALWAYS remain periodic, so that in the limit as
>the sampling frequency approaches infinity, the sampled signal DOES NOT
>become a "better" approximation of the analog signal, they remain entirely
>distinct. Whenever the sampling frequency exceeds the Nyquist frequency,
>the original analog signal can be exactly recovered from the sampled signal,
>so that the two representations contain the equivalent information, but the
>two signals are not the same, and the sampled signal does not "approach"
>the analog signal as the sampling frequency is increased. For signals which
>are not band-limited, sampling causes a loss of information due to aliasing.
>As the sampling frequency is increased, less information is lost, so that the
>"goodness" of the approximation improves as the sampling frequency increases.
>Still, the sampled signal is fundamentally different from the analog signal.
>This fundamental difference applies also to digital signals, which are both
>quantized and sampled.
>
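The periodicity argument has a concrete counterpart: two sinusoids whose
frequencies differ by exactly the sampling frequency produce identical samples,
so one aliases onto the other. A small sketch (the 10 Hz rate and the 1 Hz and
11 Hz test frequencies are illustrative choices, not from the original message):

```python
import math

fs = 10.0  # sampling rate, Hz; Nyquist frequency is fs/2 = 5 Hz

def samples(f, n=20):
    # Sample a unit sinusoid of frequency f at rate fs.
    return [math.sin(2 * math.pi * f * k / fs) for k in range(n)]

low  = samples(1.0)    # 1 Hz: below Nyquist, exactly recoverable
high = samples(11.0)   # 11 Hz = 1 Hz + fs: aliases onto the 1 Hz samples

identical = all(abs(a - b) < 1e-9 for a, b in zip(low, high))
```

The two sample sequences agree to within floating-point error, even though the
underlying analog signals are entirely distinct.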
>Digital signals are usually used as an approximation to "sampled" signals.
>The mathematics used for digital signal processing is actually only correct
>when applied to sampled signals (maybe it should be called "Sampled Signal
>Processing" (SSP) instead). The approximation is usually handled mostly by
>ignoring the "quantization noise" which is introduced when converting a
>sampled analog signal into a digital signal. This is convenient because it
>avoids some messy "details" in the mathematics. To properly deal with
>quantized signals requires giving up some "nice" properties of signals and
>operators that are applied to signals. Mostly, operators which are applied
>to signals become non-commutative when the signals are discrete in amplitude.
>This is very much related to the "Heisenberg uncertainty principle" of
>quantum mechanics, and to me represents another "true" distinction between
>analog and digital signals. The quantization of signals represents a loss of
>information that is qualitatively different from any loss of information that
>occurs from sampling. This difference is usually glossed over or ignored in
>discussions of signal processing.
>
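Whatever the status of the quantum analogy, the mathematical awkwardness of
quantization is easy to exhibit: quantization is nonlinear, so applying an
operator before versus after quantizing gives different results. A toy sketch
(unit step size and the inputs 0.4, 0.4 are arbitrary choices of illustration):

```python
def quantize(x, step=1.0):
    # Snap an amplitude to the nearest quantization level.
    return round(x / step) * step

x, y = 0.4, 0.4
added_then_quantized = quantize(x + y)            # quantize(0.8) -> 1.0
quantized_then_added = quantize(x) + quantize(y)  # 0.0 + 0.0    -> 0.0
```

This is the kind of "messy detail" that vanishes if quantization noise is
simply ignored, as the digest message notes.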
>Well, those are some half-baked ideas that come to my mind. They are probably
>not what you are looking for, so feel free to post them to /dev/null.
>
>Steve Jacobs
>
- - - - - - - - - - - - - - - - - - - - - - - -
REPLY:
> I apologize if this was meant to avoid discussion of continuous/discrete
> issues relating to analog/digital representations.
It wasn't meant to avoid discussion of continuous/discrete at all;
just to avoid a simple-minded equation of C/D with A/D, overlooking
all the attendant problems of that move. You certainly haven't done that
in your thoughtful and articulate review and analysis.
> I tend to assume that "real" signals are analog signals, and other
> methods of representing signals are used as approximations of analog
> signals.
That seems like the correct assumption. But if we shift for a moment
from considering the A or D signals themselves and consider instead
the transformation that generated them, the question arises: If "real"
signals are analog signals, then what are they analogs of? Let's
borrow some formal jargon and say that there are (real) "objects,"
and then there are "images" of them under various types of
transformations. One such transformation is an analog transformation.
In that case the image of the object under the (analog) transformation
can also be called an "analog" of the object. Is that an analog signal?
The approximation criterion also seems right on the mark. Using the
object/transformation/image terminology again, another kind of a
transformation is a "digital" transformation. The image of an object
(or of the analog image of an object) under a digital transformation
is "approximate" rather than "exact." What is the difference between
"approximate" and "exact"? Here I would like to interject a tentative
candidate criterion of my own: I think it may have something to do with
invertibility. A transformation from object to image is analog if (or
to the degree that) it is invertible. In a digital approximation, some
information or structure is irretrievably lost (the transformation
is not 1:1).
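The invertibility criterion can be made concrete with a toy sketch (the
particular maps below are illustrative choices, not anyone's proposed
definitions): a 1:1 "analog" transformation can be undone exactly, while a
quantizer sends distinct inputs to the same image, so no inverse can tell
them apart.

```python
def analog_map(x):
    # A 1:1 ("analog") transformation: structure-preserving, invertible.
    return 2.0 * x + 1.0

def analog_inverse(y):
    return (y - 1.0) / 2.0

def digitize(x, step=0.5):
    # A "digital" transformation: many inputs share one image (not 1:1).
    return round(x / step) * step

recovered = analog_inverse(analog_map(0.173))   # exact, up to rounding
collision = digitize(0.20) == digitize(0.24)    # distinct objects, same image
```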
So, might invertibility/noninvertibility have something to do with the
distinction between an A and a D transformation? And do "images" of
these two kinds count as "representations" in the sense in which that
concept is used in AI, cognitive psychology and philosophy (not
necessarily univocally)? And, finally, where do "symbolic"
representations come in? If we take a continuous object and make a
discrete, approximate image of it, how do we get from that to a
symbolic representation?
> Analog signal -- one that is continuous both in time and amplitude.
> Sampled signal -- continuous in amplitude, discrete in time...
> If sampling is done improperly, aliasing will occur, causing a
> loss of information.
> Quantized signal -- piece-wise continuous in time, discrete in
> amplitude.
> Digital signal -- one that is discrete both in time and amplitude...
> This is obtained by quantizing a sampled signal.
Both directions of departure from the analog, it seems, lose
information, unless the interpolations of the gaps in either time or
amplitude can be accurately made somehow. Question: What if the
original "object" is discrete in the first place, both in space and
time? Does that make a digital transformation of it "analog"? I
realize that this is violating the "signal" terminology, but, after all,
signals have their origins too. Preservation and invertibility of
information or structure seem to be even more general features than
continuity/discreteness. Or perhaps we should be focusing on the
continuity/noncontinuity of the transformations rather than the
objects?
> a physicist might argue that the "quantum signal" is the only
> signal which corresponds to the real world, and that analog
> signals are merely a convenient approximation used by mathematicians.
This, of course, turns the continuous/discrete and the exact/approximate
criteria completely on their heads, as I think you recognize too. And
it's one of the things that makes continuity a less straightforward basis
for the A/D distinction.
> Mostly, operators which are applied to signals become
> non-commutative when the signals are discrete in amplitude.
> This is very much related to the "Heisenberg uncertainty principle"
> of quantum mechanics, and to me represents another "true" distinction
> between analog and digital signals. The quantization of signals
> represents a loss of information that is qualitatively different from
> any loss of information that occurs from sampling.
I'm not qualified to judge whether this is an analogy or a true quantum
effect. If the latter, then of course the qualitative difference
resides in the fact that (on current theory) the information is
irretrievable in principle rather than merely in practice.
> Well, those are some half-baked ideas that come to my mind.
Many thanks for your thoughtful contribution. I hope the discussion
will continue "baking."
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 27 Oct 86 03:29:00 GMT
From: uiucuxe!goldfain@uxc.cso.uiuc.edu
Subject: Re: The Analog/Digital Distinction: Sol
Analog devices/processes are best viewed as having a continuous possible
range of values. (An interval of the real line, for example.)
Digital devices/processes are best viewed as having an underlying granularity
of discrete possible values. (Representable by a subset of the integers.)
-----------------
This is a pretty good definition, whether you like it or not.
I am curious as to what kind of discussion you are hoping to get, when you
rule out the correct distinction at the outset ...
------------------------------
End of AIList Digest
********************
∂30-Oct-86 1229 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #242
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Oct 86 12:29:00 PST
Date: Wed 29 Oct 1986 23:18-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #242
To: AIList@SRI-STRIPE
AIList Digest Thursday, 30 Oct 1986 Volume 4 : Issue 242
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 27 Oct 86 03:58:54 GMT
From: spar!freeman@decwrl.dec.com
Subject: Re: Searle, Turing, Symbols, Categories
In article <12@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>freeman@spar.UUCP (Jay Freeman) replies:
>
>> Possibly a more interesting test [than the robotic version of
>> the Total Turing Test] would be to give the computer
>> direct control of the video bit map and let it synthesize an
>> image of a human being.
>
> Manipulating digital "images" is still only symbol-manipulation. [...]
Very well, let's equip the robot with an active RF emitter so
it can jam the camera's electronics and impose whatever bit map it
wishes, whether the camera likes it or not. Too silly? Very well,
let's design a robot in the shape of a back projector, and let it
create internally whatever representation of a human being it wishes
the camera to see, and project it on its screen for the camera to
pick up. Such a robot might do a tolerable job of interacting with
other parts of the "objective" world, using robot arms and whatnot
of more conventional design, so long as it kept them out of the
way of the camera. Still too silly? Very well, let's create a
vaguely anthropomorphic robot and equip its external surfaces with
a complete covering of smaller video displays, so that it can
achieve the minor details of human appearance by projection rather
than by mechanical motion. (We can use a crude electronic jammer to
limit the amount of detail that the camera can see, if necessary.)
Well, maybe our model shop is good enough to do most of the details
of the robot convincingly, so we'll only have to project subtle
details of facial expression. Maybe just the eyes.
Slightly more seriously, if you are going to admit the presence of
electronic or mechanical devices between the subject under test and
the human to be fooled, you must accept the possibility that the test
subject will be smart enough to detect their presence and exploit their
weaknesses. Returning to a more facetious tone, consider a robot that
looks no more anthropomorphic than your vacuum cleaner, but that is
possessed of moderate manipulative abilities and a good visual perceptive
apparatus, and furthermore, has a Swiss Army knife.
Before the test commences, the robot sneakily rolls up to the
camera and removes the cover. It locates the connections for the
external video output, and splices in a substitute connection to
an external video source which it generates. Then it replaces the
camera cover, so that everything looks normal. And at test time,
the robot provides whatever image it wants the testers to see.
A dumb robot might have no choice but to look like a human being
in order to pass the test. Why should a smart one be so constrained?
-- Jay Freeman
------------------------------
Date: Mon 27 Oct 86 20:02:39-EST
From: Albert Boulanger <ABOULANGER@G.BBN.COM>
Subject: Turing Test
I think it is amusing and instructive to look at real attempts at the
Turing test.
One interesting attempt is written up in the post scriptum of the
chapter:
"A Coffeehouse Conversation on the Turing Test"
Metamagical Themas
Douglas Hofstadter
Basic Books 1985
Albert Boulanger
BBN Labs
------------------------------
Date: 27 Oct 86 17:23:31 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov
Subject: Pseudomath about the Turing Test: Reply to Padin
[Until the problem of follow-up articles to mod.ai through Usenet is
straightened out, I'm temporarily responding to mod.ai on net.ai.]
In mod.ai, in Message-ID: <8610270723.AA05463@ucbvax.Berkeley.EDU>,
under the subject heading THE PSEUDOMATH OF THE TURING TEST,
PADIN@FNALB.BITNET writes:
> DEFINE THE SET Q={question1,question2,...}. LETS NOTE THAT
> FOR EACH q IN Q, THERE IS AN INFINITE NUMBER OF RESPONSES (THE
> RESPONSES NEED NOT BE RELEVANT TO THE QUESTION, THEY JUST NEED TO BE
> RESPONSES). IN FACT, WE CAN DEFINE A SET R={EVERY POSSIBLE RESPONSE TO
> ANY QUESTION}, i.e., R={r1,r2,r3,...}.
Do pseudomath and you're likely to generate pseudoproblems. Nevertheless,
this way of formulating it does inadvertently illustrate quite clearly why
the symbolic version of the turing test is inadequate and the robotic version
is to be preferred. The symbolic version is equivalent to the proverbial
monkey's chances of typing Shakespeare by combinatorics. The robotic version
(pending the last word on basic continuity/discontinuity in microphysics) is
then no more or less of a combinatorial problem than Newtonian Mechanics.
[Concerning continuity/discreteness, join the ongoing discussion on the
A/D distinction that's just started up in net/mod.ai.]
> THE EXISTENCE OF ...A FUNCTION T THAT MAPS A QUESTION q TO A SET
> OF RESPONSES RR... FOR ALL QUESTIONS q IS EVIDENCE FOR THE PRESENCE
> OF MIND SINCE T CHOOSES, OUT OF AN INFINITE NUMBER OF RESPONSES,
> THOSE RESPONSES THAT ARE APPROPRIATE TO AN ENTITY WITH A MIND.
Pare off the pseudomath about "choosing among infinities" and you just
get a restatement of the basic intuition behind the turing test: That an
entity has a mind if it acts indistinguishably from an entity with a
mind.
> NOW A PROBLEM [arises]: WHO IS TO DECIDE WHICH SUBSET OF RESPONSES
> INDICATES THE EXISTENCE OF MIND? WHO WILL DECIDE WHICH SET IS
> APPROPRIATE TO INDICATE AN ENTITY OTHER THAN OURSELVES IS OUT THERE
> RESPONDING?
The same one who decides in ongoing, everyday "solutions" to the
other-minds problem. And on exactly the same basis:
indistinguishability of performance.
> [If] WE GET A RESPONSE WHICH APPEARS TO BE RANDOM, IT WOULD SEEM THAT
> THIS WOULD BE SUFFICIENT TO LABEL [the] RESPONDENT A MINDLESS ENTITY.
> HOWEVER, IT IS THE EXACT RESPONSE ONE WOULD EXPECT OF A SCHIZOPHRENIC.
When will this tired prima facie objection (about schizophrenia,
retardation, aphasia, coma, etc.) at last be laid to rest? Damaged
humans inherit the benefit of the doubt from what we know about their
biological origins AND about the success of their normal counterparts in
passing the turing test. Moreover, there is no problem in principle
with subhuman or nonhuman performance -- in practice we turing-test
animals too -- and this too is probably parasitic on our intuitions
about normal human beings (although the evolutionary order was
probably vice versa).
Also, schizophrenics don't just behave randomly; if a candidate just
behaved randomly it would not only justifiably flunk the turing test,
but it would not survive either. (I don't even know what behaving
purely randomly might mean; it seems to me the molecules would never
make it through embryogeny...) On the other hand, which of us doesn't
occasionally behave randomly, and some more often than others. We can
hardly expect the turing test to provide us with the criteria for extreme
conditions such as brain death if even biologists have problems with that.
All these exotic variants are pseudoproblems and red herrings,
especially when we are nowhere in our progress in developing a system
that can give the normal version of the turing test a run for its money.
> NOW IF WE ARE TO USE OUR JUDGEMENT IN DETERMINING THE PRESENCE OF
> ANOTHER MIND, THEN WE MUST ACCEPT THE POSSIBILITY OF ERROR INHERENT
> IN THE HUMAN DECISION MAKING PROCESS. AT BEST, THEN, THE TURING TEST
> WILL BE ABLE TO GIVE US ONLY A HINT AT THE PRESENCE OF ANOTHER MIND;
> A LEVEL OF PROBABILITY.
What else is new? Even the theories of theoretical physics are only
true with high probability. There is no mathematical proof that our
inferences are entailed with necessity by the data. This is called
"underdetermination" and "inductive risk," and it is endemic to all
empirical inquiry.
But besides that, the turing test has even a second layer of
underdetermination that verges on indeterminacy. I have argued that it
has two components: One is the formal theorist's task of developing a
device that can generate all of our performance capacities, i.e., one
that can pass the Total Turing Test. So far, with only "performance
capacity" having been mentioned, the level of underdetermination is
that of ordinary science (it may have missed some future performance
capacity, or it may fail tomorrow, or it may just happen to accomplish
the same performance in a radically different way, just as the
universe may happen to differ from our best physical theory).
The second component of the turing test, however, is informal, intuitive
and open-ended, and it's the one we usually have in mind when we speak of
the turing test: Will a normal human being be able to tell the candidate
apart from someone with a mind? The argument is that
turing-indistinguishability of (total) performance is the only basis
for making that judgment in any case.
Fallible? Of course that kind of judgment is fallible. Certainly no less
fallible than ordinary scientific inference; and (I argue) no more fallible
than our judgments about other minds. What more can one ask? Apart from the
necessary truths of mathematics, the only other candidate for a nonprobabilistic
certainty is our direct ("incorrigible") awareness of our OWN minds (although
even there the details seem a bit murky...).
Stevan Harnad
{allegra, bellcore, seismo, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 28 Oct 86 08:40:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: son of yet more wrangling on Searle, Turing, Quine, Hume, ...
Warning: the following message is long and exceeds the FDA maximum
daily recommended dosage of philosophizing. You have been warned.
This is the exchange that kicked off this whole diatribe:
>>> Harnad: there is no rational reason for being more sceptical about robots'
>>> minds (if we can't tell their performance apart from that of people)
>>> than about (other) peoples' minds.
>> Cugini: One (rationally) believes other people are conscious BOTH because
>> of their performance and because their internal stuff is a lot like
>> one's own.
> This is a very important point and a subtle one, so I want to make
> sure that my position is explicit and clear: I am not denying that
> there exist some objective data that correlate with having a mind
> (consciousness) over and above performance data. In particular,
> there's (1) the way we look and (2) the fact that we have brains. What
> I am denying is that this is relevant to our intuitions about who has a
> mind and why. I claim that our intuitive sense of who has a mind is
> COMPLETELY based on performance, and our reason can do no better. These
> other correlates are only inessential afterthoughts, and it's irrational
> to take them as criteria.
This riposte seems implausible on the face of it. You seem to want
to pretend that we know absolutely nothing about the basis of thought
in humans, and to "suppress" all evidence based on such knowledge.
But that's just wrong. Brains *are* evidence for mind, in light of
our present knowledge.
> My supporting argument is very simple: We have absolutely no intuitive
> FUNCTIONAL ideas about how our brains work. (If we did, we'd have long
> since spun an implementable brain theory from our introspective
> armchairs.) Consequently, our belief that brains are evidence of minds and
> that the absence of a brain is evidence of the absence of a mind is based
> on a superficial black-box correlation. It is no more rational than
> being biased by any other aspect of appearance, such as the color of
> the skin, the shape of the eyes or even the presence or absence of a tail.
Hoo hah, you mean to say that belief based on "black-box correlation"
is irrational in the absence of a fully-supporting theoretical
framework? Balderdash. People in, say, 1500 AD were perfectly rational
in predicting tides based on the position of the moon (and vice-versa)
even though they hadn't a clue as to the mechanism of interaction.
If you keep asking "why" long enough, *all* science is grounded on
such brute-fact correlation (why do like charges repel, etc.) - as
Hume pointed out a while back.
> To put it in the starkest terms possible: We wouldn't know what device
> was and was not relevantly brain-like if it was staring us in the face
> -- EXCEPT IF IT HAD OUR PERFORMANCE CAPACITIES (i.e., it could pass
> the Total Turing Test). That's the only thing our intuitions have to
> go on, and our reason has nothing more to offer either.
Except in the case of actual other brains (which are, by definition,
relevantly brain-like). The only skepticism open to one is that
one's own brain is unique in its causal powers - possible, but hardly
the best rational hypothesis.
> People were sure (as sure as they'll ever be) that other people had
> minds long before they ever discovered they had brains. I myself believed
> the brain was just a figure of speech for the first dozen or so years of
> my life. Perhaps there are people who don't learn or believe the news
> throughout their entire lifetimes. Do you think these people KNOW any
> less than we do about what does or doesn't have a mind? ...
Let me re-cast Harnad's argument (perhaps in a form unacceptable to him):
We can never know any mind directly, other than our own, if we take
the concept of mind to be something like "conscious intelligence" -
ie the intuitive (and correct, I believe) concept, rather than
some operational definition, which has been deliberately formulated
to circumvent the epistemological problems. (Harnad, to his credit,
does not stoop to such positivist ploys.) But the only external
evidence we are ever likely to get for "conscious intelligence"
is some kind of performance. Moreover, the physical basis for
such performance will be known only contingently, ie we do not
know, a priori, that it is brains, rather than automatic dishwashers,
which generate mind, but rather only as an a posteriori correlation.
Therefore, in the search for mind, we should rely on the primary
criterion (performance), rather than on such derivative criteria
as brains.
I pretty much agree with the above account except for the last sentence
which prohibits us from making use of derivative criteria. Why should
we limit ourselves so? Since when is that part of rationality?
No, the fact is we do have more reason to suppose mind of other
humans than of robots, in virtue of an admittedly derivative (but
massively confirmed) criterion. And we are, in this regard, in an
epistemological position *superior* to those who don't/didn't know
about such things as the role of the brain, ie we have *more* reason
to believe in the mindedness of others than they do. That's why
primitive tribes (I guess) make the *mistake* of attributing
mind to trees, weather, etc. Since raw performance is all they
have to go on, seemingly meaningful activity on the part of any
old thing can be taken as evidence of consciousness. But we
sophisticates have indeed learned a thing or two, in particular, that
brains support consciousness, and therefore we (rationally) give the
benefit of the doubt to any brained entity, and the anti-benefit to
un-brained entities. Again, not to say that we might not learn about
other bases for mind - but that hardly disparages brainedness as a
rational criterion for mindedness.
Another point, which I'll just state rather than argue for is that
even performance is only *contingently* a criterion for mind - ie,
it so happens, in this universe, that mind often expresses itself
by playing chess, etc., just as it so happens that brains cause
minds. And so there's really not much difference between relying on
one contingent correlate (performance) rather than another (brains)
as evidence for the presence of mind.
> > Why is consciousness a red herring just because it adds a level
> > of uncertainty?
>
> Perhaps I should have said indeterminacy. If my arguments for
> performance-indiscernibility (the turing test) as our only objective
> basis for inferring mind are correct, then there is a level of
> underdetermination here that is in no way comparable to that of, say,
> the unobservable theoretical entities of physics (say, quarks, or, to
> be more trendy, perhaps strings). Ordinary underdetermination goes
> like this: How do I know that your theory's right about the existence
> and presence of strings? Because WITH them the theory succeeds in
> accounting for all the objective data (let's pretend), and without
> them it does not. Strings are not "forced" by the data, and other
> rival theories may be possible that work without them. But until these
> rivals are put forward, normal science says strings are "real" (modulo
> ordinary underdetermination).
> Now try to run that through for consciousness: How do I know that your
> theory's right about the existence and presence of consciousness (i.e.,
> that your model has a mind)? "Because its performance is
> turing-indistinguishable from that of creatures that have minds." Is
> your theory dualistic? Does it give consciousness an independent,
> nonphysical, causal role? "Goodness, no!" Well then, wouldn't it fit
> the objective data just as well (indeed, turing-indistinguishably)
> without consciousness? "Well..."
> That's indeterminacy, or radical underdetermination, or what have you.
> And that's why consciousness is a methodological red herring.
I admit, I have trouble following the line of argument above. Is this
Quine's "it's real if it's a term in our best-confirmed theories"
approach? But I think Quine is quite wrong, if that is his
assertion. I know consciousness (my own, at least) exists, not as
some derived theoretical construct which explains low-level data
(like magnetism explains pointer readings), but as the absolutely
lowest rock-bottom datum there is. Consciousness is the data,
not the theory - it is the explicandum, not the explicans (hope
I got that right). It's true that I can't directly observe the
consciousness of others, but so what? That's an epistemological
inconvenience, but it doesn't make consciousness a red herring.
> I don't know what you mean, by the way, about always being able to
> "engineer anything with anything at some level of abstraction." Can
> anyone engineer something to pass the robotic version of the Total
> Turing Test right now? And what's that "level of abstraction" stuff?
> Robots have to do their thing in the real world. And if my
> groundedness arguments are valid, that ain't all done with symbols
> (plus add-on peripheral modules).
The engineering remark was to reinforce the idea that, perhaps,
being-composed-of-protein might not be as practically incidental
as many assume. Frinstance, at some level of difficulty, one can
get energy from sunlight "as plants do." But the issues are:
do we get energy from sunlight in the same way? How similar do
we demand that the processes be? It might be easy to be as
efficient as plants in getting energy from sunlight through
non-biological technology. But if we're interested in simulation at
a lower level of abstraction, eg, photosynthesis, then, maybe, a
non-biological approach will be impractical. The point is we know we
can simulate human chess-playing abilities with non-biological
technology. Should we just therefore declare the battle for mind won,
and go home? Or ask the harder question: what would it take to get a
machine to play a game of chess like a person does, ie, consciously.
BTW, I quite agree with your more general thesis on the likely
inadequacy of symbols (alone) to capture mind.
John Cugini <Cugini@NBS-VMS>
------------------------------
End of AIList Digest
********************
∂03-Nov-86 0232 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #243
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 3 Nov 86 02:32:24 PST
Date: Sun 2 Nov 1986 22:26-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #243
To: AIList@SRI-STRIPE
AIList Digest Monday, 3 Nov 1986 Volume 4 : Issue 243
Today's Topics:
Games - Chess,
Seminars - Aid To Database Design (UPenn) &
A Circumscriptive Theory of Plan Recognition (BBN),
Conferences - Sydney Expert Systems Conference &
European Conference on Object Oriented Programming
----------------------------------------------------------------------
Date: 28 Oct 86 11:35:39 GMT
From: mcvax!unido!ab@seismo.css.gov (ab)
Subject: Re: Places in Vancouver? Really Chess programs.
> I don't really understand why there are not any really good chess
> programs available for home computers. Fidelity has a machine
> with an official USCF rating of 2100 for 200 bucks. I am pretty
> sure that this has an 8 bit processor. Someone should be able to
> come up with a 68k program that is better than this!
Did you hear of the recent PSION-CHESS program for the Atari ST?
This is a completely new program developed by Richard Lang. It uses
heuristic search instead of the alpha-beta procedure. This means that
the program can examine the game tree to arbitrary depth. It uses a highly
selective search to investigate the interesting lines of play. Moreover,
its playing style is very aggressive. The search concentrates on lines
of play which are tactically sharp and which force the opponent to play
in a way which can be easily predicted. So the move played is not
necessarily the best one, but the tactically sharpest with a reasonable
outcome. This means that a depth of up to 20 plies can be forced and a
gain of material in, say, 8 plies is recognized.
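To make the contrast concrete, here is a rough sketch (my own toy formulation, nothing to do with PSION-CHESS) of what a selective search looks like: a negamax that keeps only the k sharpest-looking moves at each node, ordered by a shallow static evaluation, so the same node budget reaches far greater depth than a full-width alpha-beta search would.

```python
# Toy selective search: expand only the k most promising moves per node.
def selective_negamax(state, depth, k, moves, apply_move, evaluate):
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state), None      # value from side-to-move's view
    # "Sharpest" first: children that look worst for the opponent.
    ms = sorted(ms, key=lambda m: evaluate(apply_move(state, m)))[:k]
    best, best_move = float('-inf'), None
    for m in ms:
        score, _ = selective_negamax(apply_move(state, m), depth - 1,
                                     k, moves, apply_move, evaluate)
        if -score > best:
            best, best_move = -score, m
    return best, best_move

# Stand-in game (Nim): take 1-3 stones; taking the last stone wins.
moves = lambda n: [t for t in (1, 2, 3) if t <= n]
apply_move = lambda n, t: n - t
evaluate = lambda n: -1 if n == 0 else 0  # no stones: side to move lost
```

With k=1 this degenerates into following a single forcing line, which is roughly how a very deep "forced" variation can be examined cheaply.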
The program can display up to 8 plies of its current expected moves.
There exist two ways of displaying the board: 3d and 2d. You can set the
board to an arbitrary position and there exist levels of play from
novice (1 sec) to expert (same time) and infinity. Also there are
problem modes for forced check mates. The program normally 'thinks'
while its opponent has to move, but with the feature 'handicap' this
can be disabled. Many other features are supported as well.
It seems to me that this program is identical to the Mephisto Munchen
with Amsterdam-modul since that one also uses the same strategy, the
same processor and is also by Richard Lang. If true that would mean
that PSION-CHESS alias Mephisto-Munchen is the recent world champion
of microcomputer chess (championship in Amsterdam fall 1985).
Has anyone further information on this program or on its strength?
I am particularly interested in the new programming approach realized
in this program. There exist some articles by Larry R. Harris about
heuristic search in chess, but these articles date back to 1975.
Are there other available programs which use the new approach?
Andreas Bormann
University of Dortmund [UniDo]
West Germany
Uucp: ab@unido.uucp
Path: {USA}!seismo!{mcvax}!unido!ab
{Europe}!{cernvax,diku,enea,ircam,mcvax,prlb2,tuvie,ukc}!unido!ab
Bitnet: ab@unido.bitnet (== ab@ddoinf6.bitnet)
[ Followups will be directed to net.games.chess only.]
[ Any thoughts or opinions which may or may not have been expressed ]
[ herein are my own. They are not necessarily those of my employer. ]
[ Also I have no ambitions to sell PSION-CHESS or Mephisto computers.]
------------------------------
Date: Tue, 28 Oct 86 23:06 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Aid To Database Design (UPenn)
Dissertation Defense
Aid To Database Design: An Inductive
Inference Approach
Sitaram Lanka
The conventional approach to the design of databases has the drawback
that, to specify a database schema, the user must have knowledge of both
the domain and the data model. The aim of this research is to propose a
semi-automated system which designs a database schema in which the user
need only have knowledge of the underlying domain. This is expressed in
terms of the information retrieval
requirements that the database has to satisfy eventually. We have cast this
as a problem in inductive inference where the input is in the form of
Natural Language English queries. A database schema is inferred from this
and is expressed in the functional data model.
The synthesis of the database schema from the input queries is carried out
by an inference mechanism. The central idea in designing the inference
mechanism is the notion of compositionality and we have described it in
terms of attribute grammars due to Knuth. A method has been proposed to
detect potentially false hypotheses that the inference mechanism may put
forth, and we have proposed a scheme to refine them so as to obtain
acceptable hypotheses. A prototype has been implemented on the Symbolics
Lisp machine.
Committee
Dr. P. Buneman
Dr. T. Finin (chairman)
Dr. R. Gerritsen Supervisor
Dr. A.K. Joshi Supervisor
Dr. R.S. Nikhil
Dr. B. Webber
Date: October 31, 1986
Time: 2:30 pm
Location: Room 23
------------------------------
Date: Fri, 31 Oct 86 20:47:49 EST
From: "Steven A. Swernofsky" <SASW%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Seminar - A Circumscriptive Theory of Plan Recognition (BBN)
From: Brad Goodman <BGOODMAN at BBNG.ARPA>
BBN Laboratories
Science Development Program
AI/Education Seminar
Speaker: Henry Kautz
Dept. of Computer Science, University of Rochester
(Henry@Rochester.Arpa)
Title: A CIRCUMSCRIPTIVE THEORY OF PLAN RECOGNITION
Date: 10:30a.m., Thursday, November 20th
Location: 3rd floor large conference room,
BBN Laboratories Inc., 10 Moulton St., Cambridge
Abstract
A plan library specifies the abstraction and decomposition relations
between actions. A typical first-order representation of such a library
does not, by itself, provide grounds for recognizing an agent's plans, given
observations of the agent's actions. Several additional assumptions are
needed: that the abstraction hierarchy is complete; that the
decomposition hierarchy is complete; and that the agent's actions are, if
possible, all part of the same plan. These assumptions are developed
through the construction of a certain class of minimal models of the plan
library. Circumscription provides a general non-constructive method for
specifying a class of minimal models. For the specific case at hand,
however, we can mechanically generate a set of first-order axioms which
precisely capture the assumptions. The result is a "competence theory" of
plan recognition, which correctly handles such difficult matters as
disjunctive observations and multiple plans. The theory may be partially
implemented by efficient (but limited) algorithms.
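As a very loose illustration of the minimal-model idea (my own toy formulation, not the circumscriptive theory described above): among all sets of library plans whose decompositions cover the observed actions, keep only the smallest ones, which builds in the assumption that the agent's actions are, if possible, all part of the same plan.

```python
# Toy "minimal model" plan recognition over an assumed plan library.
from itertools import combinations

PLAN_LIBRARY = {                      # plan -> the actions it decomposes into
    'make-pasta': {'boil-water', 'add-noodles'},
    'make-tea':   {'boil-water', 'steep-tea'},
    'clean-up':   {'wash-pot'},
}

def recognize(observed):
    """Return the smallest sets of plans that jointly cover the observations."""
    plans = list(PLAN_LIBRARY)
    for size in range(1, len(plans) + 1):
        covering = [set(combo) for combo in combinations(plans, size)
                    if observed <= set().union(*(PLAN_LIBRARY[p]
                                                 for p in combo))]
        if covering:
            return covering           # minimal explanations only
    return []
```

A single observation like {'boil-water'} is ambiguous between two one-plan explanations; adding 'add-noodles' collapses it to one. The theory above is about making such preferences logically respectable, including the hard cases (disjunctive observations, multiple plans) that this sketch ignores.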
------------------------------
Date: Mon, 27 Oct 86 17:43:23 EST
From: Jason Catlett <munnari!basser.oz!jason@seismo.CSS.GOV>
Subject: Call for Papers, Sydney Expert Systems Conference
>From moncskermit!munnari!seismo!ut-sally!pyramid!hplabs!hplabsc!taylor
>From: taylor@hplabsc.UUCP (Dave Taylor)
Newsgroups: mod.conferences
Subject: Call-For-Papers: Sydney Expert Systems Conference
Location: Sydney, Australia
CALL FOR PAPERS
The Third Australian Conference on Applications of Expert Systems
Sydney, 13-15 May
The Sydney Expert Systems Group has organised two successful
conferences on this theme, including keynote addresses from
internationally-recognised authorities
Bruce Buchanan (Stanford University), Donald Michie (Turing Institute),
Neil Pundit (Digital Equipment Corporation, USA), Donald
Waterman (Rand Corporation) and Patrick Winston (M.I.T.).
The 1987 conference will continue this tradition, with addresses from
distinguished overseas speakers and Australian experts.
Papers are invited on any aspect of expert systems technology, including
- examples of expert systems that have been developed for
particular applications
- design and evaluation of tools for building expert systems
- knowledge engineering methodology
- specialised hardware for expert systems
Contributions that discuss the authors' experiences/successes/
lessons learned in building expert systems will be
particularly welcome. Papers of any size will be considered but
a length of 15-30 pages is recommended. All accepted papers will
be published in the Proceedings.
Authors should note the following dates:
Deadline for papers: 30th January 1987
Notification of acceptance: 13th March 1987
Deadline for camera-ready copy: 10th April 1987
Presentation of paper: 13-15th May 1987
Papers should be sent to the Program Chairman,
Dr J. R. Quinlan
School of Computing Sciences
NSW Institute of Technology
Broadway NSW 2007
Australia
Requests for registration forms should be sent to "ES Conference
Registrations, c/o Dr John Debenham" at the above address.
------------------------------
Date: Thu, 30 Oct 1986 10:50 EST
From: HENRY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: European Conference on Object Oriented Programming
Date: Mon, 6 Oct 86 18:25:23 -0100 (MET)
From: pierre Cointe <mcvax!cmirh!pc at seismo.CSS.GOV>
To: henry at ai.ai.mit.edu
EUROPEAN CONFERENCE ON OBJECT ORIENTED PROGRAMMING
Call for Papers
Paris, France: June 15-17 1987
Following the AFCET group's three previous Working Sessions
on Object Oriented Languages, the next encounter will take place
at the Centre Georges Pompidou (Paris) on June 15th, 16th &
17th 1987. In view of the success of the previous workshops
and of the increasing interest in the subject, the next meeting
will be an international conference organized by AFCET.
The program committee is:
G. Attardi, DELPHI, Italy
J. Bezivin, LIB (UBO & ENSTbr), France
P. Cointe, CMI & LITP, France
S. Cook, London University, England
J.M. Hullot, INRIA, France
B. Kristensen, Aalborg University Center, Denmark
H. Lieberman, MIT, USA
L. Steels, Brussels University, Belgium
H. Stoyan, Konstanz University, West Germany
B. Stroustrup, AT&T Bell Labs, USA
J. Vaucher, Montreal University, Canada
A. Yonezawa, Tokyo Institute of Technology, Japan
The conference will consist of a presentation of selected papers.
Well-known researchers having made major contributions in the field
- like C. Hewitt and K. Nygaard - will also give invited lectures.
This new conference will deal with all domains using the techniques
and methodologies of Object Oriented Programming. It is likely to
interest both software designers and users.
Proposed themes are the following:
- Theory :
semantic models (instantiation, inheritance), compilation
- Conception :
new languages, new hardware, new extensions of languages
- Applications :
man/machine interfaces, simulation, knowledge representation,
data bases, operating systems
- Methodology :
Smalltalk-80 methodology, actor methodology,
frame methodology, the abstract type approach
- Development :
industrial applications.
Papers must be submitted in English and should not be longer
than ten pages. Five copies must be received at one of the addresses
below no later than January 9th, 1987 (and, if possible, by electronic
mail to the conference co-chairmen). Paper selection will be done
by circulating papers to members of the program committee having
appropriate expertise. Authors will be notified of acceptance by
February 15th, 1987. To be included in the Proceedings, the definitive
version of the paper must reach the AFCET office before April 27th, 1987.
- Conference Co-chairmen
- J.M. Hullot (INRIA)
mcvax!inria!hullot
- J. Bezivin (LIB)
mcvax!inria!geocub!bezivin
- Program Co-chairmen
- P. Cointe (LITP)
mcvax!inria!cointe
- H. Lieberman (MIT)
mcvax!ai.ai.mit.edu!henry
- USA Coordinator
- B. Stroustrup (AT&T, Bell Labs)
mcvax!research!snb!bs
Murray Hill, NJ 07974 USA
(201 582 7393)
- Organization
- Claire Van Hieu
AFCET
156 Boulevard Pereire
75017 Paris, France
(1) 47.66.24.19
Following the conference - and in the same place - Jerome Chailloux
and Christian Queinnec will organize on June 18th and 19th a workshop
about Lisp and its standardization.
People interested in Tutorials, Workshops or Exhibitions may contact
the AFCET organization.
------------------------------
End of AIList Digest
********************
∂03-Nov-86 0424 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #244
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 3 Nov 86 04:24:30 PST
Date: Sun 2 Nov 1986 22:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #244
To: AIList@SRI-STRIPE
AIList Digest Monday, 3 Nov 1986 Volume 4 : Issue 244
Today's Topics:
Query - AI in Rehabilitation Med,
AI Tools - Guru & PD Parser for Simple English Sentences,
Representations - Music,
Logic - Monotonicity,
Review - Weizenbaum Keynote Address at U of Waterloo
----------------------------------------------------------------------
Date: 30 Oct 86 23:15:35 EST
From: Steve blumenfrucht <BLUMENFRUCHT@RED.RUTGERS.EDU>
Subject: AIM in Rehabilitation Med
I am trying to find people doing artificial intelligence work in
the medical specialty of Physical Medicine and Rehabilitation.
I am especially interested in finding MDs doing this.
Help/suggestions are appreciated. Reply to BLUMENFRUCHT@RUTGERS
------------------------------
Date: 29 Oct 86 14:37:31 GMT
From: ihnp4!houxm!mtuxo!mtune!mtunf!mtx5c!mtx5d!mtx5a!mtx5e!mtx5w!drv@
ucbvax.Berkeley.EDU
Subject: Re: OPINIONS REQUESTED ON GURU
> I'D APPRECIATE ANY COMMENTS THE GROUP HAS ON THE AI BASED PACKAGE <GURU>.
>
I had an evaluation copy of Guru here about a month ago.
I found it an interesting package with a lot of nice
features. I decided not to use it for a lot of reasons
specific to my application but I'll try not to let them
get in the way of my evaluation.
First, a short description of what Guru has. In addition
to a language and a set of features for creating rule-based
systems, Guru contains a text editor, a spreadsheet, a communications
package, a graphics package, a relational data base package, a
Unix shell-like procedural language, a menu and user prompt
facility and probably a few other things I've forgotten. The
rule-based system, editor and spreadsheet are the parts I looked
into most so my comments will be limited to those.
The editor and spreadsheet are not what you would call state-of-the-art.
There are standalone packages available for most PCs that are as
nice or nicer than Guru's in my opinion. While the menu interface
to Guru and the graphics package make nice use of the PC graphics,
neither the editor nor the spreadsheet use any graphics. It appears
that the Guru folks purchased these packages from outside and
integrated them in to their total system. That opinion is based
on nothing other than the rather different appearance these modules
have from each other.
The novel and nice feature of Guru that prompted me to look
into it in the first place is the ability to reference different
portions of Guru from others. For example, within a spreadsheet
you can reference a rule-based system (which can access the data
in the spreadsheet) and fill in cells with results from a rule-
based execution (called a consultation in Guru). Similarly, within
the editor you can access the data base for results to be added to the
text, access the data base from within a rule based system, etc.
I spent a fair amount of time with the spreadsheet accessing
rules in a rule-based system. While I had a few difficulties due
to the way the rules address spreadsheet cells, I found the
procedure to work fairly well.
One thing that turned me off from Guru, in addition to the mismatch
with my intended application, was the price tag. $3000 seemed a
bit steep for me. But if you need most or many of the different
features rather than just a couple it might be a better investment
instead of buying separate components. And if you need to have
the integration between components such as spreadsheet and rule-based
system, I know of no other tool that does that. Then the price
might be well worth it.
Good luck and I hope this helps.
Dennis R. Vogel
AT&T Information Systems
Middletown, NJ
(201) 957-4951
------------------------------
Date: 31 Oct 86 15:43:16 GMT
From: ihnp4!drutx!mtuxo!mtune!mtunf!mtx5c!mtx5d!mtx5a!mtx5e!mtx5w!drv@
ucbvax.Berkeley.EDU
Subject: More on Guru
I recently posted my experience with the Guru package.
In it I mentioned that the $3000 price tag scared me
off (in addition to other things). Well, it's worse
than that. Today we got a letter from Guru saying that
the introductory period for Guru has drawn to a close
along with the $2995 introductory price. Guru is now
priced at $6500 for a single user development system.
I should mention that Guru does offer run-time licenses
for less than this but the latest letter doesn't say what
they cost.
Dennis R. Vogel
AT&T Information Systems
Middletown, NJ
(201) 957-4951
------------------------------
Date: 30 Oct 86 23:13:47 GMT
From: fluke!ssc-vax!bcsaic!michaelm@beaver.cs.washington.edu
(Michael Maxwell)
Subject: Re: Seeking PD parser for simple English sentences.
In article <30@orion.UUCP> heins@orion.UUCP (Michael Heins) writes:
>I am looking for public domain software which I can use to help me parse
>simple English sentences into some kind of standardized representation.
>I guess what I am looking for would be a kind of sentence diagrammer
>which would not have to have any deep knowledge of the meanings of the
>nouns, verbs, adjectives, etc.
>
>...C routines would be ideal. Also, references to published
>algorithms would be useful.
Since this seems to be a fairly common request, I am taking the liberty of
posting to the net...
Many Prologs (but not Turbo) have a built-in parser called `Definite Clause
Grammar' (DCG). It is a way of writing phrase structure rules, which Prolog
then translates into standard Prolog rules. Most standard texts on Prolog
discuss it, e.g.
%A W.F. Clocksin
%A C.S. Mellish
%D 1984
%T Programming in Prolog
%I Springer-Verlag
%C Berlin
A somewhat more sophisticated rule system was developed by Fernando Pereira in
his Ph.D. dissertation, published with some revision as:
%A Fernando Pereira
%D 1979
%T Extraposition Grammars
%R Working Paper No. 59
%I Department of Artificial Intelligence, University of Edinburgh
%C Edinburgh
(You'd have to type the program in yourself; he includes a very simple
grammar of English.)
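For those without a Prolog at hand, the DCG idea, phrase-structure rules translated into procedures that consume a list of words, can be approximated by a small recursive-descent parser. This sketch and its toy grammar are my own, not taken from the references above.

```python
# Each function mirrors one rule, e.g. sentence --> np, vp.
NOUNS = {'dog', 'cat'}
VERBS = {'sees', 'chases'}

def parse_np(words):
    # np --> det, noun.
    if len(words) >= 2 and words[0] in ('the', 'a') and words[1] in NOUNS:
        return ('np', words[0], words[1]), words[2:]
    return None

def parse_vp(words):
    # vp --> verb, np.
    if words and words[0] in VERBS:
        result = parse_np(words[1:])
        if result:
            np, rest = result
            return ('vp', words[0], np), rest
    return None

def parse_sentence(words):
    # sentence --> np, vp.
    result = parse_np(words)
    if result:
        np, rest = result
        result = parse_vp(rest)
        if result and not result[1]:  # vp found and no words left over
            return ('s', np, result[0])
    return None
```

Parsing "the dog sees a cat" yields a nested tree much like the one a DCG would build through its extra arguments.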
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: Mon, 27 Oct 86 16:47:16 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: KR for Music
[Forwarded from the NL-KR Digest.]
For information on KR and music, see:
Ebcioglu, Kemal, "An Expert System for Chorale Harmonization," Proc.
AAAI-86, Vol. 2, pp. 784-788.
Ebcioglu, Kemal, "An Expert System for Harmonization of Chorales in the
Style of J. S. Bach," Tech. Report, Dept. of Computer Science, SUNY
Buffalo (1986).
------------------------------
Date: Fri, 31 Oct 86 15:54:18 est
From: lb0q@andrew.cmu.edu (Leslie Burkholder)
Subject: monotonicity
The monotonicity property of validity: If an argument is deductively valid
then it cannot be made invalid by adding new premises. Equivalently: If X, Y
are finite sets of sentences and S a sentence, then if X entails S, then X
union Y entails S.
The monotonicity property of consistency: If a set of sentences is
inconsistent then it cannot be made consistent by adding to it a new
sentence. Equivalently: If X is a finite set of sentences, S some sentence,
and X is inconsistent, then so is X union {S}.
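For propositional logic both properties can be checked by brute force over truth assignments; the sketch below (an assumed formulation, purely for illustration) confirms that an entailment survives the addition of a new premise.

```python
# Entailment by exhaustive truth tables: X entails S iff no assignment
# makes every premise in X true while making S false.
from itertools import product

def entails(premises, conclusion, variables):
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# X = {A, A -> B} entails B; by monotonicity, so does X union {C}.
A = lambda e: e['A']
B = lambda e: e['B']
C = lambda e: e['C']
A_implies_B = lambda e: (not e['A']) or e['B']
X = [A, A_implies_B]
```

Here entails(X, B, ...) and entails(X + [C], B, ...) both hold, while C alone does not entail B.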
Leslie Burkholder
------------------------------
Date: 2 Nov 86 04:04:36 GMT
From: rutgers!clyde!watmath!watnot!watdcsu!brewster@seismo.css.gov
(dave brewer, SD Eng, PAMI )
Subject: Weizenbaum keynote address at U of Waterloo (long)
The Hagey Lectures at the University of Waterloo provide an
opportunity for a distinguished researcher to address the
community at large every year. This year, Dr. Weizenbaum of
MIT was the chosen speaker, and he has just delivered two
key note addresses entitled; "Prospects for AI" and "The Arms
Race, Without Us".
The important points of the first talk can be summarized as :
1) AI has good prospects from an investment perspective since
a strong commitment to marketing something called AI has
been made.
2) the early researchers did not understand how difficult
the problems they addressed were and so the early claims
of the possibilities were greatly exaggerated. The trend
still continues but on a reduced scale.
3) AI has been a handle for some portion of the US military
to hang SDI on, since whenever a "difficult" problem
arises it is always possible to say, "Well, we don't
understand that now, but we can use AI techniques to
solve that problem later."
4) the actual achievements of AI are small.
5) the ability of expert systems to continuously monitor
stock values and react has led to increased volatility
and crisis situations in the stock markets of the world
recently. What happens if machine-induced technical trading
drops the stock market by 20% in one day, or by 50% in one day?
The important points of the second talk can be summarized as :
1) not all problems can be reduced to computation, for
example how could you conceive of coding the human
emotion loneliness.
2) AI will never duplicate or replace human intelligence
since every organism is a function of its history.
3) research can be divided into performance mode or theory
mode research. An increasing percentage of research is
now conducted in performance mode, despite possible
desires to do theory mode research, since funds (mainly
military), are available for performance mode research.
4) research on "mass murder machines" is possible because
the researchers (he addressed computer scientists
directly although extension to any technical or
scientific discipline was implied), are able to
psychologically distance themselves from the end use
of their work.
5) technical education that neglects language, culture,
and history, may need to be rethought.
6) courage is infectious, and while it may not seem to be
a possibility to some, the arms race could be stopped cold
if an entire group of professions, (ie computer scientists),
refused to participate.
7) the search for funds has led to an increased rate of
performance mode research, and has even induced many
institutions to prostitute themselves to the highest bidder.
Specific situations within MIT were used for examples.
Weizenbaum had the graciousness to ignore related (albeit
proportionally smaller), circumstances at this
university.
8) every researcher should assess the possible end use of
their own research, and if they are not morally comfortable
with this end use, they should stop their research. Weizenbaum
did not believe that this would be the end of all research,
but if that were the case then he would accept this result.
He specifically referred to research in machine vision, which he
felt would be used directly and immediately by the military for
improving their killing machines. While not saying so, he implied
that this line of AI should be stopped dead in its tracks.
Poster's comments :
1) Weizenbaum seemed to be technically out of date in some areas,
and admitted as much at one point. Some of his opinions
regarding state of the art were suspect.
2) His background, technical and otherwise, seems to predispose
him to dismissing some technical issues a priori, i.e., a machine
can never duplicate a human. Why? Because!
3) His most telling point, and one often ignored, is that
researchers have to be responsible for their work, and should
consider its possible end uses.
4) He did not appear to have thought through all the consequences
of a sudden end to research, and indeed many of his solutions
appear overly simplistic, in light of the complicated
world we live in.
5) You have never seen an audience squirm as they did for the
second lecture. A once-premier researcher addresses his
contemporaries and tells them they are ethically and morally
bankrupt, and every member of the audience has at least some
small buried doubt that maybe he is right.
6) Weizenbaum intended the talks to be "controversial and
provocative" and has achieved his goal within the U of W
community. While not agreeing with many of his points, I
believe that there are issues raised which are relevant to
the entire world-wide scientific community, and have posted
for this reason.
The main question that I see arising from the talks is : is it time
to consider banning, halting, slowing, or otherwise rethinking
certain AI or technical adventures, such as machine vision, as was
done in the area of recombinant DNA.
Disclaimer : The opinions above are mine and may not accurately
reflect those of U of Waterloo, Dr.Weizenbaum, or
anyone else for that matter. I make no claims as
to the accuracy of the above summarization and advise
that transcripts of the talks are available from some
place within U of W, but expect to pay for them because
that's the recent trend.
UUCP : {decvax|ihnp4}!watmath!watdcsu!brewster
Else : Dave Brewer, (519) 886-6657
------------------------------
Date: 3 Nov 86 02:48:23 GMT
From: tektronix!reed!trost@ucbvax.Berkeley.EDU (Bill Trost)
Subject: Re: Weizenbaum keynote address at U of Waterloo (long)
In article <2689@watdcsu.UUCP> brewster@watdcsu.UUCP (dave brewer,
SD Eng, PAMI ) writes:
>
>The main question that I see arising from the talks is : is it time
>to consider banning, halting, slowing, or otherwise rethinking
>certain AI or technical adventures, such as machine vision, as was
>done in the area of recombinant DNA.
Somehow, I don't think that banning machine vision makes much sense. It
seems that it would be similar to banning automatic transmissions because
you can use them to make tanks. The device itself is not the hazard (as it
is in genetic research) -- it is the application.
--
Bill Trost, tektronix!reed!trost
"ACK!"
(quoted, without permission, from Bloom County)
------------------------------
End of AIList Digest
********************
∂05-Nov-86 0202 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #245
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 5 Nov 86 02:02:25 PST
Date: Tue 4 Nov 1986 23:21-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #245
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 5 Nov 1986 Volume 4 : Issue 245
Today's Topics:
Queries - Electronics Troubleshooting & Consciousness &
English-Hangul Machine Translation & Belle,
AI Tools - Object-Oriented Programming,
Ethics - Moral Responsibility,
Logic - Nonmonotonic Reasoning,
Linguistics - Nonsense Tests and the Ignorance Principle,
Philosophy - Artificial Humans & Mathematics and Humanity
----------------------------------------------------------------------
Date: 3 Nov 86 07:23 PST
From: kendall.pasa@Xerox.COM
Subject: Electronics Troubleshooting
For AI Digest 11/3/86:
I am looking for some public domain software that would pertain to
electronics troubleshooting. I am interested in any special graphics
editors for electronics schematics and any shells for troubleshooting
the circuits.
Thank you.
------------------------------
Date: Tue, 4 Nov 86 00:41 EDT
From: JUDD%cs.umass.edu@CSNET-RELAY.ARPA
Subject: a query for AIList
I would like references to books or articles that argue the
following position:
"The enigma of consciousness is profound but the basis for it
is very mundane. Consciousness reduces to sensation;
sensation reduces to measurement of nervous events;
nervous events are physical events. Since physical events can
obviously occur in `inanimate matter', so can consciousness."
References to very recent articles are my main objective here,
but old works (i.e., pre-computer-age) would be helpful also.
sj
------------------------------
Date: Mon, 3 Nov 86 20:53:05 EST
From: "Maj. Ken Rose" (USATRADOC | mort) <krose@BRL.ARPA>
Subject: English-Hangul Machine Translation
I would like to connect with anyone who has had some experience with machine
translation between English and Hangul. Please contact me at krose@brl.arpa
or phone (804) 727-2347. May-oo kam s'ham nee da.
------------------------------
Date: 4 Nov 86 15:11 EST
From: Vu.wbst@Xerox.COM
Reply-to: Vu.wbst@Xerox.COM
Subject: Belle
Does anyone have any information about Belle, the strongest chess-playing
program today? It has an Elo rating of over 2000. It uses tree-pruning
and cutoff heuristics.
Is the code available for the public in Interlisp-D? Any pointer would
be very helpful. By the way, what is the complete path for
net.games.chess that was mentioned in V4 #243? Thank you.
Dinh Vu
------------------------------
Date: Mon, 3 Nov 86 12:25:40 -0100
From: Hakon Styri <styri%vax.runit.unit.uninett@nta-vax.arpa.ARPA>
Subject: Re: Is there OOP in AI?
In response to the item in AIList issue No. 231.
Yes, if you have a look in the October issue of SIGPLAN NOTICES,
i.e. the special issue on the Object-Oriented Programming Workshop
at IBM Yorktown Heights, June 1986. At least two papers will be
of interest...
Going back a few years, you may also find some ICOT papers about
OOP in Prolog. Try New Generation Computing, the ICOT/Springer-Verlag
Journal. There are a few papers in Vol. 1, No. 1 (1983).
In the Proceedings of the International Conference on FGCS 1984
there is another paper: "Unique Features of ESP", which is a Logic
Programming Language with features for OOP.
H. Styri -- Yu No Hoo :-)
------------------------------
Date: Mon 3 Nov 86 10:59:37-PST
From: cas <PHayes@SRI-KL.ARPA>
Subject: moral responsibility
The idea of banning vision research ( or any other, for that matter ) is
even sillier and more dangerous than Bill Trost points out. The analogy
is not to banning automatic transmissions, but to banning THINKING about
automatic transmissions. And banning thinking about anything is about as
dangerous as any course of action can be, no matter how high-minded or
sincerely morally concerned those who call for it may be.
To be fair to Weizenbaum, he does have a certain weird consistency. He tells
me, for example, that in his view helicopters are intrinsically evil ( as
the Vietnam war has shown ). One can see how the logic works: if an artifact
is ( or can be ) used to do more bad than good, then it is evil, and research
on evil things is immoral.
While this is probably not the place to start a debate in theoretical ethics,
I do think that this view, while superficially attractive, simply doesn't stand
up to a little thought, and can be used to label as wicked anything which one
dislikes for any reason at all. Weizenbaum has made a successful career by
systematically attacking AI research on the grounds that it is somehow
immoral, and finding a large and willing audience. He doesn't make me squirm.
Pat Hayes
------------------------------
Date: Mon, 3 Nov 86 11:09 ???
From: GODDEN%gmr.com@CSNET-RELAY.ARPA
Subject: banning machine vision
As regards the banning of research on machine vision (or any technical field)
because of the possible end uses of such technology in "mass murder machines",
I should like to make one relevant distinction. The immediate purpose of
the research as indicated in the grant proposal or as implied by the source of
funding is of the utmost importance. If I did research on vision for
the auto industry and published my results, which some military type then
schlepped for use in a satellite to zap civilians, I would not feel ANY
responsibility if the satellite ever got used. On the other hand, if I
worked for a defense contractor doing the same research, I certainly would
bear some responsibility for its end use.
-Kurt Godden
godden@gmr.com
------------------------------
Date: Sat, 1 Nov 86 16:58:21 pst
From: John B. Nagle <jbn@glacier.stanford.edu>
Reply-to: jbn@glacier.UUCP (John B. Nagle)
Subject: Re: Non-monotonic Reasoning
Proper mathematical logic is very "brittle", in that two axioms
that contradict each other make it possible to prove TRUE=FALSE, from
which one can then prove anything. Thus, AI systems that use
traditional logic should contain mechanisms to prevent the introduction
of new axioms that contradict ones already present; this is referred
to as "truth maintenance". Systems that lack such mechanisms are prone
to serious errors, even when reasoning about things which are not
even vaguely related to the contradictory axioms; one contradiction
in the axioms generally destroys the system's ability to get useful
results.
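Nagle's point, that a single contradiction lets classical logic derive
anything (ex falso quodlibet), can be demonstrated with a toy resolution
prover. The following Python sketch is purely illustrative; the clause
encoding and names are my own, not from any system Nagle describes:

```python
# Minimal propositional resolution prover (illustrative sketch).
# Clauses are frozensets of literals; "~p" is the negation of "p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    # All resolvents of two clauses on complementary literals.
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def entails(axioms, goal):
    # Refutation proof: do axioms plus ~goal derive the empty clause?
    clauses = {frozenset(ax) for ax in axioms} | {frozenset({negate(goal)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolve(a, b):
                    if not r:
                        return True      # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False                 # no progress: goal not entailed
        clauses |= new

# Consistent axioms: "rain" alone does not entail an unrelated claim.
print(entails([["rain"]], "flat_earth"))             # False
# Add a contradiction (rain and ~rain): now ANYTHING follows.
print(entails([["rain"], ["~rain"]], "flat_earth"))  # True
```

The second call shows the brittleness: the contradictory pair infects
reasoning about a proposition entirely unrelated to it.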
Non-monotonic reasoning is an attempt to make reasoning systems
less brittle, by containing the damage that can be caused by
contradiction in the axioms. The rules of inference of non-monotonic
reasoning systems are weaker than those of traditional logic. There
is not full agreement on what the rules of inference should be in
such systems. There are those who regard non-monotonic reasoning as
hacking at the mathematical logic level. Non-monotonic reasoning
lies in a grey area between the worlds of logic and heuristics.
John Nagle
------------------------------
Date: Tue, 4 Nov 86 09:30:29 CST
From: mklein@aisung.cs.uiuc.edu (Mark Klein)
Subject: Nonmonotonic Reasoning
My understanding of nonmonotonic reasoning (NMR) is different from what you
described in a recent ailist posting. As I understand it, NMR differs
from monotonic reasoning in that the size of the set of true theorems
can DECREASE when you add new axioms - thus the set size does not
increase monotonically as axioms are added. It is often implemented
using truth maintenance systems (TMS) that allow something to be justified
by something else NOT being believed. Monotonic reasoning, by contrast,
could be implemented by a TMS that only allows something to be justified by
something else being believed. Default reasoning is an instance of
nonmonotonic reasoning. Nonmonotonic reasoning is thus not synonymous with
truth maintenance.
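Klein's description can be made concrete with a toy default rule in the
familiar "birds fly" style. This Python sketch is my own illustration
(the predicate names are invented); it shows the set of derived
conclusions shrinking when an axiom is added:

```python
# Toy nonmonotonic (default) reasoning: a conclusion may be justified
# by the ABSENCE of a belief, so adding an axiom can retract theorems.

def conclusions(facts):
    derived = set(facts)
    # Default rule: bird(x) and NOT penguin(x)  =>  flies(x)
    for f in facts:
        if f.startswith("bird("):
            x = f[5:-1]
            if "penguin(%s)" % x not in facts:   # justified by non-belief
                derived.add("flies(%s)" % x)
    return derived

print("flies(tweety)" in conclusions({"bird(tweety)"}))
# True: with no contrary information, the default applies.
print("flies(tweety)" in conclusions({"bird(tweety)", "penguin(tweety)"}))
# False: a LARGER axiom set yields FEWER theorems -- nonmonotonicity.
```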
Mark Klein
------------------------------
Date: Thu, 30 Oct 86 17:26:02 est
From: Graeme Hirst <gh%ai.toronto.edu@CSNET-RELAY.ARPA>
Subject: Re: Nonsense tests and the Ignorance Principle
>A couple of years ago, on either this list or Human-Nets, there appeared
>a short multiple-choice test which was written so that one could deduce
>"best" answers based on just the form, not the content, of the questions ...
The Ignorance Principle ("Forget the content, just use the form", or "Why
clutter up the system with a lot of knowledge?") has a long and distinguished
history in AI. In 1975, Gord McCalla and Michael Kuttner published an article
on a program that could successfully pass the written section of the British
Columbia driver's license exam. For example:
QUESTION:
Must every trailer which, owing to size or construction, tends to prevent
a driving signal given by the driver of the towing vehicle from being seen
by the driver of the overtaking vehicle be equipped with an approved
mechanical or electrical signalling device controlled by the driver of the
towing vehicle?
ANSWER:
Yes.
In fact, the program was able to answer more than half the questions just by
looking for the words "must", "should", "may", "necessary", "permissible",
"distance", and "required".
The system was not without its flaws. For example:
QUESTION:
To what must the coupling device between a motor-vehicle and trailer
be affixed?
ANSWER:
Yes.
This is wrong; the correct answer is "frame". (This was an early instance of
the frame problem.)
The authors subsequently attempted a similar program for defending PhD theses,
but the results were never published.
REFERENCE
McCalla, G. and Kuttner, M. "An extensible feature-based procedural question
answering system to handle the written section of the British Columbia
driver's examination". ←CSCSI/SCEIO Newsletter← [now published as ←Canadian
Artificial Intelligence←], 1(2), February 1975, 59-67.
\\\\ Graeme Hirst University of Toronto Computer Science Department
//// utcsri!utai!gh / gh@ai.toronto.edu / 416-978-8747
------------------------------
Date: 27 Oct 86 09:38 PST
From: Ghenis.pasa@Xerox.COM
Subject: Why we waste time training machines
>why are we wasting time training machines when we could be training
>humans instead. The only reasons that I can see are that intelligent
>systems can be made small enough and light enough to sit on bombs. Are
>there any other reasons?
Why do we record music instead of teaching everyone how to sing? To
preserve what we consider top performance and make it easily available
for others to enjoy, even if the performer himself cannot be present and
others are not inclined to or capable of duplicating his work, but
simply wish to benefit from it.
In the case of applied AI there is the added advantage that the
"recordings" are not static but extendable, so the above question may be
viewed as a variation of "to stand on the shoulders of giants" vs. "to
reinvent the wheel".
This is just ONE of the reasons we "waste our time" the way we do.
-- Pablo Ghenis, speaking for myself (TO myself most of the time)
------------------------------
Date: Mon, 27 Oct 86 17:29 CDT
From: stair surfing - an exercise in oblivion
<"NGSTL1::EVANS%ti-eg.csnet"@CSNET-RELAY.ARPA>
Subject: reasons for making "artificial humans"
>In the last AI digest (V4 #226), Daniel Simon writes:
>
>>One question you haven't addressed is the relationship between
>>intelligence and "human performance". Are the two synonymous? If so,
>>why bother to make artificial humans when making natural ones is so
>>much easier (not to mention more fun)?
>
>This is a question that has been bothering me for a while. When it is
>so much cheaper (and possible now, while true machine intelligence may
>be just a dream) why are we wasting time training machines when we
>could be training humans instead. The only reasons that I can see are
>that intelligent systems can be made small enough and light enough to
>sit on bombs. Are there any other reasons?
>
>Daniel Paul
>
>danny%ngstl1%ti-eg@csnet-relay
First of all, I'd just like to comment that making natural humans may be easier
(and more fun) for men, but it's not necessarily so for women. It also seems
that once we get the procedure for "making artificial humans" down pat, it
would take less time and effort than making "natural" ones, a process which
currently requires some twenty years (sometimes more, sometimes less).
Now to my real point - I can't see how training machines could be considered
a waste of time. There are thousands of useful but meaningless (and generally
menial) jobs which machines could do, freeing humans for more interesting
pursuits (making more humans, perhaps). Of more immediate concern, there are
many jobs of high risk - mining, construction work, deep-sea exploration and so
forth - in which machines, particularly intelligent machines, could assist.
Putting intelligent systems on bombs is a minor use, of immediate concern only
for its funding potential. Debating the ethics of such use is a legitimate
topic, I suppose, but condemning all AI research on that basis is not.
Eleanor Evans
evans%ngstl1%ti-eg@csnet-relay
------------------------------
Date: 28 Oct 86 17:18:00 EST
From: walter roberson <WADLISP7%CARLETON.BITNET@WISCVM.WISC.EDU>
Subject: Mathematics, Humanity
Gilbert Cockton <mcvax!ukc!its63b!hwcs!aimmi!gilbert@seismo.css.gov>
recently wrote:
>This is contentious and smacks of modelling all learning procedures
>in terms of a single subject, i.e. mathematics. I can't think of a
>more horrible subject to model human understanding on, given the
>inhumanity of most mathematics!
The inhumanity of *most* mathematics? I would think that from the rest of
your message, what you would really claim is the inhumanity of *all*
mathematics -- for *all* of mathematics is entirely devoid of the questions
of what is morally right or morally wrong, entirely missing all matters of
human relationships. Mathematical theorems start by listing the assumptions,
and then indicating how those assumptions imply a result. Many humans seem
to devote their entire lives to forcibly changing other people's assumptions
(and not always for the better!); most people don't seem to care about
this process. Mathematics, then, could be said to be the study of single
points, where "real life" requires that humans be able to adapt to a line
(or perhaps something of even higher order). And yet that does not render
mathematics "inhumane", for we humans must always react to the single point
that is "now", and we *do* employ mathematics to guide us in that reaction.
Thus, mathematics is not inhumane at all -- at worst, it is a subclass of
"humanity". If you prefer to think if it in such terms, this might be
expressed as " !! Humanity encompasses something Universal!"
Perhaps, though, there should be a category of study devoted to modelling
the transformation of knowledge as the very assumptions change. A difficult
question, of course, is whether such a study should attempt to, in any
way, model the "morality" of changing assumptions. I would venture that
it should not, but that a formal method of measuring the effects of such
changes would not be out of order.
-----
Gilbert, as far as I can tell, you have not presented anything new in your
article. Unless I misunderstand you completely, your entire argument is based
upon the premise that there is something special about life that negates the
possibility of life being modelled by any formal system, no matter how
complex. As I personally consider that it might be possible to do such a
modelling (note that I don't say that it *is* possible to do such a modelling),
I disregard the entire body of your arguments. The false premise implies
all conclusions.
-----
>Nearer to home, find me
>one computer programmer whose understanding is based 100% on formal procedures.
>Even the most formal programmers will be lucky to be in program-proving mode
>more than 60% of the time. So I take it that they don't `understand' what
>they're doing the other 40% of the time?
I'm not quite sure what you mean to imply by "program-proving mode". The
common use of the word "prove" would imply "a process of logically
demonstrating that an already-written program is correct". The older use of
"prove" would imply "a process of attempting to demonstrate that an already-
written program is incorrect." In either case, the most formal of programmers
spend relatively little time in "program-proving mode", as those programmers
employ formal systems to write programs which are correct in the first place.
It is only those that either do not understand programming, or do not
understand all the implications of the assumptions they have programmed, that
require 60% of their time to "prove" their programs. 60% of their time proving
to others the validity of the approach, perhaps...
walter roberson <WADLISP7@CARLETON.BITNET>
walter
------------------------------
End of AIList Digest
********************
∂05-Nov-86 0405 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #246
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 5 Nov 86 04:05:02 PST
Date: Wed 5 Nov 1986 00:00-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #246
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 5 Nov 1986 Volume 4 : Issue 246
Today's Topics:
Philosophy - The Analog/Digital Distinction
----------------------------------------------------------------------
Date: 29 Oct 86 17:34:08 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: The Analog/Digital Distinction: Sol
Concerning the A/D distinction, goldfain@uiucuxe.CSO.UIUC.EDU replies:
> Analog devices/processes are best viewed as having a continuous possible
> range of values. (An interval of the real line, for example.)
> Digital devices/processes are best viewed as having an underlying
> granularity of discrete possible values.
> (Representable by a subset of the integers.)
> This is a pretty good definition, whether you like it or not.
> I am curious as to what kind of discussion you are hoping to get,
> when you rule out the correct distinction at the outset ...
Nothing is ruled out. If you follow the ongoing discussion, you'll see
what I meant by continuity and discreteness being "nonstarters." There
seem to be some basic problems with what these mean in the real
physical world. Where do you find formal continuity in physical
devices? And if it's only "approximate" continuity, then how is the
"exact/approximate" distinction that some are proposing for A/D going
to work? I'm not ruling out that these problems may be resolvable, and
that continuous/discrete will emerge as a coherent criterion after
all. I'm just suggesting that there are prima facie reasons for
thinking that the distinction has not yet been formulated coherently
by anyone. And I'm predicting that the discussion will be surprising,
even to those who thought they had a good, crisp, rigorous idea of
what the A/D distinction was.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 29 Oct 86 16:37:55 GMT
From: rutgers!husc6!Diamond!aweinste@lll-crg.arpa (Anders Weinstein)
Subject: Re: The Analog/Digital Distinction
> From Stevan Harnad:
>
>> Analog signal -- one that is continuous both in time and amplitude.
>> ...
>> Digital signal -- one that is discrete both in time and amplitude...
>> This is obtained by quantizing a sampled signal.
>
> Question: What if the
>original "object" is discrete in the first place, both in space and
>time? Does that make a digital transformation of it "analog"? I
Engineers are of course free to use the words "analog" and "digital" in their
own way. However, I think that from a philosophical standpoint, no signal
should be regarded as INTRINSICALLY analog or digital; the distinction
depends crucially on how the signal in question functions in a
representational system. If a continuous signal is used to encode digital
data, the system ought to be regarded as digital.
I believe this is the case in MOST real digital systems, where quantum
mechanics is not relevant and the physical signals in question are best
understood as continuous ones. The actual signals are only approximated by
discontinuous mathematical functions (e.g., a square wave).
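For readers who want the engineering picture concrete, the standard
sample-then-quantize path from a continuous signal to a digital one can
be sketched numerically. This Python fragment is my own illustration,
not drawn from any posting above:

```python
# A-to-D in two steps: sample in time, then quantize in amplitude.
# Both steps discard information about the underlying signal.

import math

def sample(f, t0, t1, n):
    # n samples of f on [t0, t1): discrete in time, continuous in amplitude
    dt = (t1 - t0) / n
    return [f(t0 + i * dt) for i in range(n)]

def quantize(xs, levels):
    # Snap each sample to the nearest of `levels` equally spaced values
    # (assumes the samples are not all identical).
    lo, hi = min(xs), max(xs)
    step = (hi - lo) / (levels - 1)
    return [lo + round((x - lo) / step) * step for x in xs]

analog = lambda t: math.sin(2 * math.pi * t)   # "continuous" source
sampled = sample(analog, 0.0, 1.0, 8)          # discrete in time only
digital = quantize(sampled, 4)                 # discrete in time AND amplitude
```

Note that even here the "analog" source is a floating-point
approximation, which is exactly the ambiguity the discussion above is
worrying about.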
> The image of an object
>(or of the analog image of an object) under a digital transformation
>is "approximate" rather than "exact." What is the difference between
>"approximate" and "exact"? Here I would like to interject a tentative
>candidate criterion of my own: I think it may have something to do with
>invertibility. A transformation from object to image is analog if (or
>to the degree that) it is invertible. In a digital approximation, some
>information or structure is irretrievably lost (the transformation
>is not 1:1).
> ...
It's a mistake to assume that transformation from "continuous" to "discrete"
representations necessarily involves a loss of information. Lots of
continuous functions can be represented EXACTLY in digital form, by, for
example, encoded polynomials, differential equations, etc.
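Weinstein's claim can be illustrated directly: a continuous function
such as a polynomial can be stored exactly as a finite list of rational
coefficients and evaluated with no rounding anywhere. (A sketch of my
own; the particular function is arbitrary.)

```python
# Exact digital representation of a continuous function:
# f(x) = 1/3 - x/2 + x^2, stored as three rational coefficients.

from fractions import Fraction

coeffs = [Fraction(1, 3), Fraction(-1, 2), Fraction(1)]  # low degree first

def eval_poly(coeffs, x):
    # Horner's rule over exact rationals: no approximation ever occurs.
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

print(eval_poly(coeffs, Fraction(3, 7)))   # 89/294, exactly
```

The representation is finite and discrete, yet it captures the
continuous function exactly at every rational point, which is the sense
in which "digital" need not mean "lossy."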
Anders Weinstein
------------------------------
Date: 29 Oct 86 20:28:06 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan
Harnad)
Subject: Re: The Analog/Digital Distinction
[Will someone with access post this on sci.electronics too, please?]
Anders Weinstein <princeton!cmcl2!harvard!DIAMOND.BBN.COM!aweinste>
has offered some interesting excerpts from the philosopher Nelson Goodman's
work on the A/D distinction. I suspect that some people will find Goodman's
considerations a little "dense," not to say hirsute, particularly
those hailing from, say, sci.electronics; I do too. One of the
subthemes here is whether or not engineers, cognitive psychologists
and philosophers are talking about the same thing when
they talk about A/D.
[Other relevant sources on A/D are Zenon Pylyshyn's book
"Computation and Cognition," John Haugeland's "Artificial
Intelligence" and David Lewis's 1971 article in Nous 5: 321-327,
entitled "Analog and Digital."]
First, some responses to Weinstein/Goodman on A/D; then some responses
to Weinstein-on-Harnad-on-Jacobs:
> systems like musical notation which are used to DEFINE a work of
> art by dividing the instances from the non-instances
I'd be reluctant to try to base a rigorous A/D distinction on the
ability to make THAT anterior distinction!
> "finitely differentiated," or "articulate." For every two characters
> K and K' and every mark m that does not belong to both, [the]
> determination that m does not belong to K or that m does not belong
> to K' is theoretically possible. ...
I'm skeptical that the A/D problem is perspicuously viewed as one of
notation, with, roughly, (1) the "digital notation" being all-or-none and
discrete and the "analog notation" failing to be, and with (2) corresponding
capacity or incapacity to discriminate among the objects they stand for.
> A scheme is syntactically dense if it provides for infinitely many
> characters so ordered that between each two there is a third.
I'm no mathematician, but it seems to me that this is not strong
enough for the continuity of the real number line. The rational
numbers are "syntactically dense" according to this definition. But
maybe you don't want real continuity...?
> semantic finite differentiation... for every two characters
> K and K' such that their compliance classes are not identical and [for]
> every object h that does not comply with both, [the] determination
> that h does not comply with K or that h does not comply with K' must
> be theoretically possible.
I hesitantly infer that the "semantics" concerns the relation between
the notational "image" (be it analog or digital) and the object it
stands for. (Could a distinction that so many people feel they have a
good intuitive handle on really require so much technical machinery to
set up? And are the different candidate technical formulations really
equivalent, and capturing the same intuitions and practices?)
> A symbol ←scheme← is analog if syntactically dense; a ←system← is
> analog if syntactically and semantically dense. ... A digital scheme,
> in contrast, is discontinuous throughout; and in a digital system the
> characters of such a scheme are one-one correlated with
> compliance-classes of a similarly discontinuous set. But discontinuity,
> though implied by, does not imply differentiation...To be digital, a
> system must be not merely discontinuous but ←differentiated←
> throughout, syntactically and semantically...
Does anyone who understands this know whether it conforms to, say,
analog/sampled/quantized/digital distinctions offered by Steven Jacobs
in a prior iteration? Or the countability criterion suggested by Mitch
Sundt?
> If only thoroughly dense systems are analog, and only thoroughly
> differentiated ones are digital, many systems are of neither type.
How many? And which ones? And where does that leave us with our
distinction?
Weinstein's summary:
>>To summarize: when a dense language is used to represent a dense domain, the
>>system is analog; when a discrete (Goodman's "discontinuous") and articulate
>>language maps a discrete and articulate domain, the system is digital.
What about when a discrete language is used to represent a dense
domain (the more common case, I believe)? Or the problem case of a
dense representation of a discrete domain? And what if there are no dense
domains (in physical nature)? What if even the dense/dense criterion
can never be met? Is this all just APPROXIMATELY true? Then how does
that square with, say, Steve Jacobs again, on approximation?
--------
What follows is a response to Weinstein-on-Harnad-on-Jacobs:
> Engineers are of course free to use the words "analog" and "digital"
> in their own way. However, I think that from a philosophical
> standpoint, no signal should be regarded as INTRINSICALLY analog
> or digital; the distinction depends crucially on how the signal in
> question functions in a representational system. If a continuous signal
> is used to encode digital data, the system ought to be regarded as
> digital.
Agreed that an isolated signal's A or D status cannot be assigned, and
that it depends on its relation with other signals in the
"representational system" (whatever that is) and their relations to their
sources. It also depends, I should think, on what PROPERTIES of the signal
are carrying the information, and what properties of the source are
being preserved in the signal. If the signal is continuous, but its
continuity is not doing any work (has no signal value, so to speak),
then it is irrelevant. In practice this should not be a problem, since
continuity depends on a signal's relation to the rest of the signal
set. (If the only amplitudes transmitted are either very high or very
low, with nothing in between, then the continuity in between is beside
the point.) Similarly with the source: It may be continuous, but the
continuity may not be preserved, even by a continuous signal (the
continuities may not correlate in the right way). On the other hand, I
would want to leave open the question of whether or not discrete
sources can have analogs.
> I believe this is the case in MOST real digital systems, where
> quantum mechanics is not relevant and the physical signals in
> question are best understood as continuous ones. The actual signals
> are only approximated by discontinuous mathematical functions (e.g.
> a square wave).
There seems to be a lot of ambiguity in the A/D discussion as to just
what is an approximation of what. On one view, a digital
representation is a discrete approximation to a continuous object (source)
or to a (continuous) analog representation of a (continuous) object
(source). But if all objects/sources are really discontinuous, then
it's really the continuous analog representation that's approximate!
Perhaps it's all a matter of scale, but then that would make the A/D
distinction very relative and scale-dependent.
> It's a mistake to assume that transformation from "continuous" to
> "discrete" representations necessarily involves a loss of information.
> Lots of continuous functions can be represented EXACTLY in digital
> form, by, for example, encoded polynomials, differential equations, etc.
The relation between physical implementations and (formal!) mathematical
idealizations also looms large in this discussion. I do not, for
example, understand how you can represent continuous functions digitally AND
exactly. I always thought it had to be done by finite difference
equations, hence only approximately. Nor can a digital computer do
real integration, only finite summation. Now the physical question is,
can even an ANALOG computer be said to be doing true integration if
physical processes are really discrete, or is it only doing an approximation
too? The only way I can imagine transforming continuous sources into
discrete signals is if the original continuity was never true
mathematical continuity in the first place. (After all, the
mathematical notion of an unextended "point," which underlies the
concept of formal continuity, is surely an idealization, as are many
of the infinitesimal and limiting notions of analysis.) The A/D
distinction seems to be dissolving in the face of all of these
awkward details...
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂05-Nov-86 0710 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #247
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 5 Nov 86 07:10:06 PST
Date: Wed 5 Nov 1986 00:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #247
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 5 Nov 1986 Volume 4 : Issue 247
Today's Topics:
Philosophy - Defining the Analog/Digital Distinction
----------------------------------------------------------------------
Date: 30 Oct 86 02:16:42 GMT
From: rutgers!princeton!mind!harnad@SPAM.ISTC.SRI.COM (Stevan Harnad)
Subject: Re: Defining the Analog/Digital Distinction
[Could someone post this to sci.electronics, to which I have no access, please?]
------
(1)
ken@rochester.arpa writes:
> I think the distinction is simply this: digital deals with a finite set
> of discrete {voltage, current, whatever} levels, while analog deals
> with a *potentially* infinite set of levels. Now I know you are going
> to say that analog is discrete at the electron noise level but the
> circuits are built on the assumption that the spectrum is continuous.
> This leads to different mathematical analyses.
It sounds as if a problem of fact is being remedied by an assumption here.
Nor do potential infinities appear to remedy the problem; there are perfectly
discrete potential infinities. The A/D distinction is again looking
approximate, relative and scale-dependent, hence, in a sense, arbitrary.
> Sort of like infinite memory Turing machines, we don't have them but
> we program computers as if they had infinite memory and in practice
> as long as we don't run out, it's ok. So as long as we don't notice
> the noise in analog, it serves.
An approximation to an infinite rote memory represents no problem of
principle in computing theory and practice. But an approximation to an
exact distinction between the "exact" and the "approximate" doesn't seem
satisfactory. If there is an exact distinction underlying actual
engineering practice, at least, it would be useful to know what it
was, in place of intuitions that appear to break down as soon as they
are made precise.
--------
(2)
cuuxb!mwm (Marc Mengel) writes:
> Digital is essentially a subset of analog, where the range of
> properties used to represent information is grouped into a
> finite set of values...
> Analog, on the other hand, refers to using a property to directly
> represent an infinite range of values with a different infinite
> range of values.
This sounds right, as far as it goes. D may indeed be a subset of A.
To use the object--transformation--image vocabulary again: When an object
is transformed into an image with only finite values, then the transform
is digital. (What about combinations of image values?) When an
infinite-valued object is transformed into an infinite-valued (and
presumably covariant) image, then the transform is analog. I assume
the infinities in question have the right cardinality (i.e.,
uncountable). Questions: (i) Do discrete objects, with only finite or
countably infinite properties, not qualify to have analogs? (ii) What does
"directly represent" mean? Is there something indirect about finiteness?
(iii) What if there are really no such infinities, physically, on either
the object end or the image end?
May I interject at this point the conjecture that what seems to be
left out of all these A/D considerations so far (not just this module)
is that discretization is usually not the sole means or end of digital
representation. What about symbolic representation? What turns a
discretized, approximate image of an object into a symbolic
representation, manipulable by formal rules and semantically
interpretable as being a representation OF that object? (But perhaps
this is getting a little ahead of ourselves.)
> This is why slide-rules are considered analog, you are USING distance
> rather than voltage, but you can INTERPRET a distance as precisely
> as you want. An abacus, on the other hand, also USES distance, but
> where a disk MEANS either one thing or another, and it takes
> lots of disks to REPRESENT a number. An abacus, then, is digital.
(No comment. Upper case added.)
--------
(3)
<bcsaic!ray> writes:
> (An) analog is a (partial) DUPLICATE (or abstraction)
> of some material thing or some process, which contains
> (it is hoped) the significant characteristics and properties
> of the original.
And a digital representation can't be any of these things? "Duplicate"
in what sense? An object's only "exact" double is itself. Once we move
off in time and space and properties, more precise notions of
"duplicate" are needed than the intuitive ones. Sharing the SAME
physical properties (e.g., obeying the same differential equations
[thanks to Si Kochen for that criterion])? Or perhaps just ANALOGS of
them? But then that gets a bit circular.
> A digital device or method operates on symbols, rather than
> physical (or other) reality. Analog computers may operate on
> (real) voltages and electron flow, while digital computers
> operate on symbols and their logical interrelationships.
On the face of it, digital computers "operate" on the same physical
properties and principles that other physical mechanisms do. What is
different is that some aspects of their operations are INTERPRETABLE
in special ways, namely, as rule-governed operations of symbol tokens
that STAND FOR something else. One of the burdens of this discussion
is to determine precisely what role the A/D distinction plays in that
phenomenon, and vice versa. What, to start with, is a symbol?
> Digital operations are formal; that is, they treat form rather
> than content, and are therefore always deductive, while the
> behavior of real things and their analogs is not.
Unfortunately, however, these observations are themselves a bit too
informal. What is it to treat form rather than content? One candidate
that's in the air is that it is to manipulate symbols according to
certain formal rules that indicate what to do with the symbol tokens
on the basis of their physical shapes only, rather than what the tokens or
their manipulations or combinations "stand for" or "mean." It's not clear
that this definition is synonymous with symbol manipulation's always
being "deductive." Perhaps it's interpretable as performing deductions,
but as for BEING deductions, that's another question. And how can
digital operations stand in contrast to the behavior of "real things"?
Aren't computers real things?
> It is one of my (unpopular) assertions that the central nervous
> system of living organisms (including myself) is best understood
> as an analog of "reality"; that most interesting behavior
> such as induction and the detection of similarity (analogy and
> metaphor) cannot be accomplished with only symbolic, and
> therefore deductive, methods.
Such a conjecture would have to be supported not only by a clear
definition of all of the ambiguous theoretical concepts used
(including "analog"), but by reasons and evidence. On the face of it,
various symbol-manipulating devices in AI do do "induction" and "similarity
detection." As to the role of analog representation in the brain:
Perhaps we'd better come up with a viable literal formulation of the
A/D distinction; otherwise we will be restricted to figurative
assertions. (Talking too long about the analog tends to make one
lapse into analogy.)
--------
(4)
lanl!a.LANL.ARPA!crs (Charlie Sorsby) writes:
> It seems to me that the terms as they are *usually* used today
> are rather bastardized... when the two terms originated they referred
> to two ways of "computing" and *not* to kinds of circuits at all.
> The analog simulator (or, more popularly, analog computer) "computed"
> by analogy. And, old timers may recall, they weren't all electronic
> or even electrical.
But what does "compute by analogy" mean?
> Digital computers (truly so) on the other hand computed with
> *digits* (i.e. numbers). Of course there was (is) analogy involved
> here too but that was a "higher-order term" in the view and was
> conveniently ignored as higher order terms often are.
What is a "higher-order term"? And what's the difference between a
number and a symbol that's interpretable as a number? That sounds like
a "higher-order" consideration too.
> In the course of time, the term analog came to be used for those
> electronic circuits *like* those used in analog simulators (i.e.
> circuits that work with continuous quantities). And, of course,
> digital came to refer to those circuits *like* those used in digital
> computers (i.e. those which work with discrete or quantized quantities).
You guessed my next question: What does "like" mean, and why does
the underlying distinction correlate with continuous and discrete
circuit properties?
> Whether a quantity is continuous or discrete depends on such things
> as the attribute considered, to say nothing of the person doing the
> considering, hence the vagueness of definition and usage of the
> terms. This vagueness seems to have worsened with the passage of time.
I couldn't agree more. And an attempt to remedy that is one of the
objects of this exercise.
--------
(5)
sundt@mitre.ARPA writes:
> Coming from a heavily theoretical undergraduate physics background,
> it seems obvious that the ONLY distinction between the analog and
> digital representation is the enumerability of the relationships
> under the given representation.
> First of all, the form of digital representation must be split into
> two categories, that of a finite representation, and that of a
> countably infinite representation. Turing machines assume a countably
> infinite representation, whereas any physically realizable digital
> computer must inherently assume a finite digital representation.
> Second, there must be some predicate O(a,b) defined over all the a
> and b in the representation such that the predicate O(a,b) yields
> only one of a finite set of symbols, S(i) (e.g. "True/False").
> If such a predicate does not exist, then the representation is
> arguably ambiguous and the symbols are "meaningless".
> Looking at all the (a,b) pairs that map the O(a,b) predicate into
> the individual S(i):
> ANALOG REPRESENTATION: the (a,b) pairs cannot be enumerated for ALL S(i)
> COUNTABLY-INFINITE DIGITAL REPRESENTATION: the (a,b) pairs cannot be
> enumerated for ALL S(i).
> FINITE DIGITAL REPRESENTATION: all the (a,b) pairs for all the S(i)
> CAN be enumerated.
> This distinguishes the finite digital representation from the other two
> representations. I believe this is the distinction you were asking
> about. The distinction between the analog representation and the
> countably-infinite digital representation is harder to identify.
> I sense it would require the definition of a mapping M(a,b) onto the
> representation itself, and the study of how this mapping relates to
> the O(a,b) predicate. That is, is there some relationship between
> O(?,?), M(?,?) and the (a,b) that is analogous to divisibility in
> Z and R. How this would be formulated escapes me.
You seem to have here a viable formal definition of something
that can be called an "analog representation," based on the
formal notion of continuity and nondenumerability. The question seems to
remain, however, whether it is indeed THIS precise sense of
analog that engineers, cognitive psychologists and philosophers are
informally committed to, and, if so, whether it is indeed physically
realizable. It would be an odd sort of representation if it were only
an unimplementable abstraction. (Let me repeat that the finiteness of
physical computers is NOT an analogous impediment for turing-machine
theory, because the finite approximations continue to make sense,
whereas both the finite and the denumerably infinite approximations to
the A/D distinction seem to vitiate the distinction.)
It's not clear, by the way, that it wasn't in fact the (missing)
distinction between a countable and an uncountable "representation" that
would have filled the bill. But I'll assume, as you do, that some suitable
formal abstraction would capture it. The question remains: Does that
capture our A/D intuitions too? And does it sort out all actual (physical)
A/D cases correctly?
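Sundt's enumerability criterion for the finite digital case can be made concrete with a toy sketch (my own illustration, not from his posting: equality is assumed as the predicate O(a,b), over an invented 3-bit alphabet):

```python
from itertools import product

# A finite digital representation: the eight 3-bit strings.
symbols = [format(i, "03b") for i in range(8)]

# The predicate O(a, b) -- here simple equality -- maps every pair
# of symbols to one of a finite set of outcome symbols S(i),
# in this case {True, False}.
def O(a, b):
    return a == b

# Because the alphabet is finite, we CAN exhaustively enumerate every
# (a, b) pair falling under each outcome S(i) -- Sundt's mark of a
# finite digital representation. For an analog (uncountable)
# representation no such enumeration could terminate.
table = {True: [], False: []}
for a, b in product(symbols, repeat=2):
    table[O(a, b)].append((a, b))

assert len(table[True]) == 8            # the diagonal pairs
assert len(table[False]) == 8 * 8 - 8   # all remaining pairs
```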
--------
The rest of Mitch Sundt's reply pertains also to the
"Searle, Turing, Categories, Symbols" discussion that
is going on in parallel with this one:
> we can characterize when something is NOT intelligent,
> but are unable to define when it is.
I don't see at all why this is true, apart from the fact that
confirming or supporting an affirmation is always more open-ended
than confirming or supporting a denial.
> [Analogously] Any attempt to ["define chaos"] would give it a fixed
> structure, and therefore order... Thus, it is the quality that
> is lost when a signal is digitized to either a finite or a
> countably-infinite digital representation. Analog representations
> would not suffer this loss of chaos.
Maybe they wouldn't, if they existed as you defined them, and if chaos
were worth preserving. But I'm beginning to sense a gradual departure from
the precision of your earlier formal abstractions in the direction of
metaphor here...
> Carrying this thought back to "intelligence," intelligence is the
> quality that is lost when the behavior is categorized among a set
> of values. Thus, to detect intelligence, you must use analog
> representations (and meta-representations). And I am forced to
> conclude that the Turing test must always be inadequate in assessing
> intelligence, and that you need to be an intelligent being to
> *know* an intelligent being when you see one!
I think we have now moved from equating "analog" with a precise (though
not necessarily correct) formal notion to a rather free and subjective
analogy. I hope it's clear that the word "conclude" here does not have
quite the same deductive force it had in the earlier considerations.
> Thinking about it further, I would argue, in view of what I just
> said, that people are by construction only "faking" intelligence,
> and that we have achieved a complexity whereby we can perceive *some*
> of the chaos left by our crude categorizations (perhaps through
> multiple categorizations of the same phenomena), and that this
> perception itself gives us the appearance of intelligence. Our
> perceptions reveal only the tip of the chaotic iceberg, however,
> by definition. To have true intelligence would require the
> perception of *ALL* the chaos.
Thinking too much about the mind/body problem will do that to you
sometimes.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂05-Nov-86 1055 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #248
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 5 Nov 86 10:55:10 PST
Date: Wed 5 Nov 1986 00:12-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #248
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 5 Nov 1986 Volume 4 : Issue 248
Today's Topics:
Philosophy - The Analog/Digital Distinction
----------------------------------------------------------------------
Date: 3 Nov 86 05:37:31 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan
Harnad)
Subject: Analog/Digital Distinction: 8 more replies
Here are 8 more contributions to the A/D Distinction from (1) M. Derthick,
(2) B. Garton, (3) W. Hamscher, (4) D. B. Plate, (5) R. Thau,
(6) B. Kuszmaul, (7) C. Timar and (8) A. Weinstein.
My comments follow the excerpts from each:
-----
(1) <mad@g.cs.cmu.edu> (Mark Derthick) writes:
> John Haugeland uses digital (and discrete) to mean "perfectly
> definite," which is, I think, the best that can be done. Thus
> representing an integer as the length of a stick in inches is digital,
> but using the length of the stick in angstroms isn't. Obviously there
> is a fuzzy boundary between the two. By the way, it is no problem that
> sticks can be 6.5" long, as long as there can be unambiguous cases.
Unfortunately, it is just this fuzzy boundary that is at issue here.
-----
(2) winnie!brad (Brad Garton) writes:
> A couple [of] items you mentioned in passing about the a/d issue struck
> some resonant chords in my mind. You hit the nail on the head for me
> when you said something about the a/d distinction possibly being a
> problem of scaling (I think you were replying to the idea of quantum
> effects at some level). When I consider the digitized versions of analog
> signals we deal with over here <computer music>, it seems that we
> approximate more and more closely the analog signal with the
> digital one as we increase the sampling rate. This process reminds
> me of Mandelbrot's original "How Long is the Coastline of Britain"
> article dealing with fractals. Perhaps "analog" could be thought
> of as the outer limit of some fractal set, with various "digital"
> representations being inner cutoffs. Don't know how useful this
> is, but I do like the idea of "analog" and "digital" being along
> some sort of continuum.
> You also posed a question about when an approximate image of something
> becomes a symbol of that thing (please forgive my awful paraphrasing).
> As you seemed to be hinting, this is indeed a pretty sticky and
> problematic issue. It always amazes me how quickly people are able
> to identify a sound as being artificial (synthesized) when the signal
> was intended to simulate a 'natural' instrument, rather than when
> the computer (or synthesizer) was being used to explore some new
> timbral realm. Context sensitive? (and I haven't even mentioned yet
> the problems of signals in a "musical" phrase!).
As you may have been noticing from the variety of the responses, the
A/D distinction seems to look rather different from the standpoints of
hardware concerns, signal analysis, computational theory, AI,
robotics, cognitive modeling, physics, mathematics, philosophy and,
evidently, music synthesis. And that's without mentioning poets and
hermeneuts.
My question about fractals would be similar to my question about
continuity: Are they to be a LITERAL physical model? Or are they just
an abstraction, as I believe the proposals based on true continuity
and uncountability have so far been?
Or are what I'm tentatively rejecting as "abstractions" in fact standard
examples of nomological generalizations, like the ideal gas laws, perfect
elasticity, frictionless planes, etc.? [I'm inclined to think they're not,
because I don't think there is a valid counterpart, in the idealization of
continuity in these A/D formulations, to the physical notions of friction, etc.
The latter are what account for why it is that we never observe the idealized
pattern in physics (but only an approximation to it) and yet we (correctly)
continue to take the idealizations to be literally true of nature.]
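Garton's point about sampling rate can be given a small numerical sketch (my own hypothetical construction, not from the digest: a sine wave "digitized" by zero-order hold; the mean error shrinks as the rate rises, yet never reaches zero):

```python
import math

def sample_and_hold_error(rate, probe=10000):
    """Mean absolute error between sin(2*pi*t) on [0, 1) and its
    zero-order-hold digitization at the given sampling rate."""
    err = 0.0
    for i in range(probe):
        t = i / probe
        # Hold the most recent sample value between sample instants.
        held = math.sin(2 * math.pi * math.floor(t * rate) / rate)
        err += abs(math.sin(2 * math.pi * t) - held)
    return err / probe

# Each doubling of the rate roughly halves the error, but the digital
# image never coincides with the "analog" signal it approximates.
errors = [sample_and_hold_error(r) for r in (10, 20, 40, 80)]
assert all(e1 > e2 > 0 for e1, e2 in zip(errors, errors[1:]))
```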
-----
(3) hamscher@HT.AI.MIT.EDU (Walter Hamscher) replied as follows in mod.ai:
> I don't read all the messages on AiList, so I may have missed
> something here: but isn't ``analog vs digital'' the same thing as
> ``continuous vs discrete''? Continuous vs discrete, in turn, can be
> defined in terms of infinite vs finite partitionability. It's a
> property of the measuring system, not a property of the thing being
> measured.
If you sample some of the other responses you'll see that some people
think that something can be formally defined along those lines, but
whether it is indeed the A/D Distinction remains to be seen.
-----
(4) The next contribution, posted in sci.electronics by
plate@dicome.UUCP (Douglas B. Plate) is somewhat more lyrical:
> The complete workings of the universe are analog in nature,
> the growth of a leaf, decay of atomic structures, passing of
> electrons between atoms, etc. Analog is natural reality,
> even though facts about its properties may remain unknown,
> the truth of ANALOG exists in an objective form.
> DIGITAL is an invention, like mathematics. It is a representation,
> and I will not make any assumptions about what it would represent
> except that whatever it represents, being a part of this Universe,
> would have the same properties and nature that all other things
> in the Universe share. The goal of DIGITAL then would be to
> represent things 100% accurately. I will not say that ANALOG is
> an infinitely continuous process, because I cannot prove that
> there is not a smallest possible element involved in an ANALOG
> process, however taking observed phenomena into account, I would
> risk saying that the smallest elements of ANALOG have not been
> measured yet, if they do indeed exist.
> Digital is finite only in the number of elements it uses to represent
> and the practical problem is that "bits" would have to extend
> into infinity or to a magnitude equalling the smallest element
> of what ANALOG is made of, for digital to reach its full potential.
> The thing is, Analog has the "natural" advantage. The universe is
> made of it and what is only theory to DIGITAL is reality to
> ANALOG. The intrinsic goal of DIGITAL is to become like
> ANALOG. Why? Because DIGITAL "represents" and until it
> becomes like ANALOG in its finity/infinity, all of its
> representations can only be approximations.
> DIGITAL will forever be striving to attain what ANALOG
> was "born with". In theory, DIGITAL is just as continuously
> infinite as ANALOG, because an infinite number of bits could
> be used to represent an infinite number of things with 100%
> accuracy. In practice, ANALOG already has this "infinity"
> factor built into it and DIGITAL, like a dog chasing its own
> tail, will be trying to catch up on into infinity.
This personification of "the analog" and "the digital" certainly
captures many people's intuitions, but unfortunately it remains
entirely at the intuitive level. Anthony Wilden wrote a book along
these lines that turned the analog and the digital into an undergraduate
cult for a few years, very much the way the left-brain/right-brain has been.
What I'm wondering is whether this exercise can replace the
hermeneutics with a coherent, explicit, empirical construct with predictive
and explanatory power.
-----
(5) On sci.math,sci.physics,sci.electronics
rst@godot.think.com.UUCP (Robert Thau) replied as follows:
> In article <105@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>>"Preserving information under transformations" also sounds like a good
>>candidate... I would think that the invertibility of analog
>>transformations might be a better instance of information preservation than
>>the irretrievable losses of A/D.
> I'm not convinced. Common ways of transmitting analog signals all
> *do* lose at least some of the signal, irretrievably. Stray
> capacitance in an ordinary wire can distort a rapidly changing signal.
> Even fiber optic cables lose signal amplitude enough to require
> repeaters. Losses of information in processing analog signals tend to
> be worse, and for an analog transformation to be exactly invertible, it
> *must* preserve all the information in its input.
But then wouldn't it be fairest to say that, to the degree that a
signal FAILS to preserve its source's properties it is NOT an analog of it?
> ...The point is that the amount
> of information in the speakers' input which they lose, irretrievably,
> is a consequence of the design decisions of the people who made them.
> Such design decisions are as explicit as the number of bits used in a
> digital representation of the signal in the CD player farther up the
> pike. Either digital or analog systems can be made as "true" as you
> like, given enough time, materials, and money, but in neither case is
> perfection an option.
But then what becomes of the often-proposed property of
"approximateness" as a distinguisher of an analog representation from a
digital one, if they're BOTH approximate?
Thau closes by requoting me:
>>And this still seems to side-step the question of WHAT information is
>>preserved, and in what way, by analog and digital representations,
>>respectively.
to which he replies:
> Agreed.
I can't tell whether this is intended to be ironic or Thau is really
acknowledging the residual burden of specifying what it is in the way
the information is represented that makes it analog or digital, given
that the approximate/exact distinction seems to fail and the continuous/discrete
one seems to be either just an abstraction or fails to square with the physics.
-----
(6) On sci.electronics,sci.physics,sci.math
bradley@godot.think.com.UUCP (Bradley Kuszmaul) proposes the following
very relativistic account:
> The distinction between digital and analog is in our minds.
> "digital" and "analog" are just names of design methodologies that
> engineers use to build large systems. "digital" is not a property of
> a signal, or a machine, but rather a property of the design of the
> machine. The design of the machine may not be a part of the machine.
> If I gave you a music box (which played music, naturally),
> you might not be able to tell whether it was digital or analog (even
> if you could open it up and look at it and probe various things with
> oscilloscopes or other tools).
> Suppose I gave you a set of schematics for the box in which
> everything was described in terms of voltages and currents, and which
> included an explanation of how the box worked using continuous
> mathematical functions. The schematics might explain how various
> subcomponents interpreted their inputs as real numbers (even though
> the inputs might be a far cry from real numbers e.g. due to the
> quantization of everything by physicists). You would probably
> conclude that the music box was an analog device.
> Suppose, on the other hand, that I gave you a set of schematics for
> the same box in which all the subcomponents were described in terms of
> discrete formulas (e.g. truth tables), and included an explanation of
> how the inputs from reality are interpreted by the hardware as
> discrete values (even though the inputs might be a far cry from
> discrete values e.g. due to ``noise'' from the uncertainty of
> everything). You would probably conclude that the music box was a
> digital device.
> The idea is that a "digital" designer and "analog" designer might
> very well come up with the same hardware to solve some problem, but
> they would just understand the behaviour differently.
> If designers could handle the complexity of thinking about
> everything, they would not use any of these abstractions, but would
> just build hardware that works. Real designers, on the other hand,
> must control the complexity of the systems they design, and the
> "digital" and "analog" design methodologies control the complexity of
> the design while preserving enough of reality to allow the engineer to
> make progress.
If I understand correctly, Kuszmaul is suggesting that whether a
representation is analog or digital may just be a matter of
interpretation. (This calls to mind some Searlian issues about
"intrinsic" vs. "derived" intentionality.) I have some sympathy for
this view, because I myself have had reason to propose that the very same
"module" that is regarded as "digital" in its autonomous, stand-alone form,
might be regarded as analogue in a "dedicated" system, with all inputs and
outputs causally connected to the world, and hence all "interpretations"
fixed. Of course, that's just a matter of scale too, since ALL systems, whether
with or without human intermediaries and interpreters, are causally
connected to the world... But this does do violence to whatever is
guiding some people's intuitions on this, for they would claim that
THEIR notion of analogue is completely interpretation-independent. The
part of me that leans toward the invertibility/information-preserving
criterion sides with them.
> If you buy my idea that digital and analog literally are in our
> minds, rather than in the hardware, then the problem is not one of
> deciding whether some particular system is digital (such questions
> would be considered ill-posed). The real problem, as I view it, is to
> distinguish between the digital and analog design methodologies.
> We can try to understand the difference by looking at the cases
> where we would use one versus the other.
> We often use digital systems when the answer we want is a number.
> (such as the decimal expansion of PI to 1000 digits)
> We often use analog systems when the answer we want is something
> physical (I don't really have good examples. Many of the things
> which were traditionally analog are going digital for some of the
> reasons described below. e.g. music, pictures (still and moving),
> the control of an automobile engine or the laundry machine)
> Digital components are nice because they have specifications which
> are relatively straightforward to test. To test an analog
> component seems harder. Because they are easier to test,
> they can be considered more "uniform" than analog components (a
> TTL "OR" gate from one mfr is about the same as a TTL "OR" gate
> from another). (The same argument goes the other way too...)
> Analog components are nice because sometimes they do just what you
> wanted. For example, the connection from the gas pedal to the
> throttle on the carburetor of a car can be made by a mechanical
> linkage whose output is an (approximately) continuous
> function of the input position. To "fly by wire" (i.e. to use a
> digital linkage) requires a lot more technology.
> (When I say "we use a digital system", I really mean that "we design
> such a system using a digital methodology", and correspondingly for
> the analog case)
> There are of course all sorts of places between "digital" and
> "analog". A system may have digital subsystems and analog subsystems
> and there may be analog subsystems inside the digital subsystems and
> it goes on and on. This sort of thing makes the decision about
> whether some particular design methodology is digital or analog hard.
I'll leave it to the A/D absolutists to defend against this extreme
relativism. I still feel agnostic. Except I do believe that the system
that will ultimately pass the Total Turing Test will be deeply hybrid
through-and-through -- and not just a concatenation of add-on analog and
digital modules either.
-----
(7) Cary Timar <watmath!watrose!cctimar> writes:
> A great deal of the problem with the definitions I've seen is a
> vagueness in describing what continuous and discrete sets are.
> The distinction does not lie in the size of the set. It is possible to
> form a discrete set of arbitrary cardinality - the set of all ordinals
> in the initial segment of the cardinal. This set will start with
> 0,1,2,3,... which most people agree is discrete.
> I would say that a space can be considered to be "discrete" if it is not
> regular, and "continuous" if it is normal. I hesitate to classify the
> spaces which are regular but not normal. Luckily, we seldom deal with
> models of computation using values taken from such a space.
> Actually, I should have looked all of this up before I mailed it, but
> I'm getting lazy. If you want to try to find mathematical definitions
> of discrete and continuous spaces, I would suggest starting from texts
> on Topology, especially Point-Set Topology. I wouldn't trust any one
> text to give a universally agreed-on definition either...
Of course, if I believed it was just a matter that could be settled by
textbook definitions I would not have posed it for the Net. The issue
is not whether or not topologists have a coherent continuous/discrete
distinction but (among other things) whether that distinction (1)
corresponds to the A/D Distinction, (2) captures the intuitions, usage
and practice of the several disciplines purporting to use the
distinction and (3) conforms with physical nature.
-----
(8) aweinste@Diamond.BBN.COM (Anders Weinstein) replies on net.ai,net.cog-eng
to an earlier iteration about the philosopher Nelson Goodman's
formulation:
> Well you asked for a "precise" definition! Although Goodman's rigor
> may seem daunting, there are really only two main concepts to grasp:
> "density", which is familiar to many from mathematics, and
> "differentiation".
> Goodman mentions that the difference between continuity and density
> is immaterial for his purposes, since density is always sufficient to
> destroy differentiation (and hence "notationality" and "digitality" as
> well).
There seems to be some difference of opinion on this matter from the
continuity enthusiasts, although they all advocate precision and rigor...
> "Differentiation" pertains to our ability to make the necessary
> distinctions between elements. There are two sides to the requirement:
> "syntactic differentiation" requires that tokens belonging to distinct
> characters be at least theoretically discriminable; "semantic
> differentiation" requires that objects denoted by non-coextensive
> characters be theoretically discriminable as well.
> Objects fail to be even theoretically discriminable if they can be
> arbitrarily similar and still count as different.
Do you mean cases like 2 vs. 1.9999999..., or cases like 2 vs. 2 minus epsilon?
They both seem as if they could be either "theoretically
discriminable" or "theoretically indiscriminable," depending on the
theory.
> For example, consider a language consisting of straight marks such
> that marks differing in length by even the smallest fraction of an inch
> are stipulated to belong to different characters. This language is not
> finitely differentiated in Goodman's sense. If, however, we decree
> that all marks between 1 and 2 inches long belong to one character, all
> marks between 3 and 4 inches long belong to another, all marks between
> 5 and 6 inches long belong to another, and so on, then the language
> WILL qualify as differentiated.
> The upshot of Goodman's requirement is that if a symbol system is to
> count as "digital" (or as "notational"), there must be some finite
> sized "gaps", however minute, between the distinct elements that need
> to be distinguished.
> Some examples:... musical notation [vs]... [an unquantized] scale
> drawing of a building
> To quote Goodman:
> "Consider an ordinary watch without a second hand. The hour-hand is
> normally used to pick out one of twelve divisions of the half-day.
> It speaks notationally [and digitally -- AW]. So does the minute hand
> if used only to pick out one of sixty divisions of the hour; but if
> the absolute distance of the minute hand beyond the preceding mark is
> taken as indicating the absolute time elapsed since that mark was
> passed, the symbol system is non-notational. Of course, if we set
> some limit -- whether of a half minute or one second or less -- upon
> the fineness of judgment so to be made, the scheme here too may
> become notational."
So apparently it does not matter whether the watch is in fact an
"analog" or "digital" watch (according to someone else's definition);
according to Goodman's the critical factor is how it's used.
> I'm still thinking about your question of how Goodman's distinction
> relates to the intuitive notion as employed by engineers or
> cognitivists and will reply later.
Please be sure to take into consideration the heterogeneous sample of
replies and rival intuitions this challenge has elicited from these
various disciplines.
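Goodman's gap requirement also admits a minimal sketch (my own illustration, reusing Weinstein's invented interval boundaries: a scheme is finitely differentiated only if finite gaps separate the characters):

```python
# Weinstein's example of a finitely differentiated scheme: marks
# between 1 and 2 inches long belong to one character, marks between
# 3 and 4 inches to another, and so on. The unassigned gaps between
# the intervals are what make the scheme "digital" in Goodman's sense.
CHARACTERS = {"A": (1.0, 2.0), "B": (3.0, 4.0), "C": (5.0, 6.0)}

def classify(length_inches):
    """Return the character a mark of this length is a token of,
    or None if the length falls in one of the gaps."""
    for char, (lo, hi) in CHARACTERS.items():
        if lo <= length_inches <= hi:
            return char
    return None  # in a gap: a token of no character

assert classify(1.7) == "A"
assert classify(3.99) == "B"
assert classify(2.5) is None  # the finite gap Goodman requires
```

By contrast, a scheme in which marks differing by any fraction of an inch count as different characters would leave no such gaps, and so would fail Goodman's finite-differentiation test.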
--
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂05-Nov-86 1423 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #249
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 5 Nov 86 14:22:49 PST
Date: Wed 5 Nov 1986 00:16-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #249
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 5 Nov 1986 Volume 4 : Issue 249
Today's Topics:
Philosophy - The Analog/Digital Distinction
----------------------------------------------------------------------
Date: 31 Oct 86 02:45:56 GMT
From: husc6!Diamond!aweinste@think.com (Anders Weinstein)
Subject: Re: The Analog/Digital Distinction
In article <20@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> I suspect that some people will find Goodman's
>considerations a little "dense," not to say hirsute, ...
Well you asked for a "precise" definition! Although Goodman's rigor may seem
daunting, there are really only two main concepts to grasp: "density", which
is familiar to many from mathematics, and "differentiation".
>> A scheme is syntactically dense if it provides for infinitely many
>> characters so ordered that between each two there is a third.
>
>I'm no mathematician, but it seems to me that this is not strong
>enough for the continuity of the real number line. The rational
>numbers are "syntactically dense" according to this definition. But
>maybe you don't want real continuity...?
Quite right. Goodman mentions that the difference between continuity and
density is immaterial for his purposes, since density is always sufficient to
destroy differentiation (and hence "notationality" and "digitality" as
well).
"Differentiation" pertains to our ability to make the necessary distinctions
between elements. There are two sides to the requirement: "syntactic
differentiation" requires that tokens belonging to distinct characters be at
least theoretically discriminable; "semantic differentiation" requires that
objects denoted by non-coextensive characters be theoretically discriminable
as well.
Objects fail to be even theoretically discriminable if they can be
arbitrarily similar and still count as different. For example, consider a
language consisting of straight marks such that marks differing in length by
even the smallest fraction of an inch are stipulated to belong to different
characters. This language is not finitely differentiated in Goodman's sense.
If, however, we decree that all marks between 1 and 2 inches long belong to
one character, all marks between 3 and 4 inches long belong to another, all
marks between 5 and 6 inches long belong to another, and so on, then the
language WILL qualify as differentiated.
The upshot of Goodman's requirement is that if a symbol system is to count as
"digital" (or as "notational"), there must be some finite sized "gaps",
however minute, between the distinct elements that need to be distinguished.
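Goodman's finite-differentiation requirement is easy to make concrete. Here is a minimal sketch of the straight-mark language described above (the function name is illustrative; the interval scheme is the one from the text):

```python
def differentiated_char(length_inches):
    # Character k covers mark lengths in [2k+1, 2k+2] inches, so
    # character 0 is marks of 1-2 in., character 1 is 3-4 in., etc.
    # The open intervals (2, 3), (4, 5), ... are the finite "gaps"
    # that let a measurement of finite precision settle which
    # character a mark belongs to.
    if length_inches < 1:
        return None
    k = int(length_inches - 1) // 2
    lo, hi = 2 * k + 1, 2 * k + 2
    if lo <= length_inches <= hi:
        return k
    return None
```

A mark of 2.5 inches falls in a gap and belongs to no character at all; that is exactly the slack that finite differentiation buys. In the dense scheme, by contrast, every distinct length is its own character and no finite-precision measurement suffices.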
Some examples:
A score in musical notation can, if certain conventions are adopted, be
regarded as a digital representation, with the score denoting any performance
that complies with it. Note that although musical pitches, say, may take on
a continuous range of values, once we adopt some conventions about how much
variation in pitch is to be tolerated among the compliants of each note, the
set of note extensions can become finitely differentiated.
A scale drawing of a building, on the other hand, usually functions as an
analog representation: any difference in a line's length, however fine, is
regarded as denoting a corresponding difference in the building's size. If we
decide to interpret the drawing in some "quantized" way, however, then it can
be a digital representation.
To quote Goodman:
Consider an ordinary watch without a second hand. The hour-hand is
normally used to pick out one of twelve divisions of the half-day.
It speaks notationally [and digitally -- AW]. So does the minute hand
if used only to pick out one of sixty divisions of the hour; but if
the absolute distance of the minute hand beyond the preceding mark is
taken as indicating the absolute time elapsed since that mark was passed,
the symbol system is non-notational. Of course, if we set some limit --
whether of a half minute or one second or less -- upon the fineness of
judgment so to be made, the scheme here too may become notational.
I'm still thinking about your question of how Goodman's distinction relates
to the intuitive notion as employed by engineers or cognitivists and will
reply later.
Anders Weinstein <aweinste@DIAMOND.BBN.COM>
------------------------------
Date: 26 Oct 86 16:37:53 GMT
From: rutgers!princeton!rocksvax!oswego!dl@spam.ISTC.SRI.COM (Doug
Lea)
Subject: Re: The Analog/Digital Distinction: Soliciting Definitions
re: The analog/digital distinction
First, I propose a simple ground-rule. Let's assume that the "world"
somehow really is "discrete", that is, time, energy, mass, etc., all
come in little quanta. Given this, the differences between analog and
digital processes seem forced to lie in the nature of representations,
algorithms to manipulate them, and the relations of both to actual
quantities "out there".
I offer a very simple example to illustrate some possibilities. It is
intended to be somewhat removed from the sorts of interesting problems
encountered in distinguishing analog from digital mental processes.
Consider different approaches to determining population growth, given
this grossly simplistic model: an initial population, P, a time period in
question, T, (expressed in time quanta), and a "growth rate", R, the
number of quanta between the times that each member of this (asexual)
population gives birth to a new member (supposing that no more than
one birth per quantum is possible and no deaths).
Approach 1: (digital)
Simulate this process with an O(PT) algorithm, repeating T times a
scan across each member of the population, determining whether it gave
birth, and if so, adding a new member. If the population actually does
grow in this fashion, then the result is surely correct, as one might
verify by mapping the representation of the P individuals to real
individuals at time 0, and again at time T. Several efficiency
improvements to this algorithm are, of course, possible.
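For concreteness, Approach 1 might be coded as follows (a hypothetical sketch; representing each member by a quanta-since-last-birth counter is one choice among many):

```python
def simulate(P, T, R):
    # Approach 1 (digital): direct O(P*T) simulation. Each member
    # gives birth every R time quanta; no deaths. 'clock' holds the
    # quanta elapsed since each member's last birth (newborns start
    # a fresh R-quantum cycle).
    clock = [0] * P
    for _ in range(T):
        births = 0
        for i in range(len(clock)):
            clock[i] += 1
            if clock[i] == R:     # this member gives birth now
                clock[i] = 0
                births += 1
        clock.extend([0] * births)
    return len(clock)
```

With R = 1 every member gives birth every quantum, so the population doubles each step: simulate(1, 3, 1) yields 8.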
Approach 2: (analog)
An alternative method may be constructed by first noting that both
population size and time have very simple properties with respect
to this process. For purposes of the problem at hand, the difference
between the state of having a population of size N and one of size N+1
lies only in the difference between N and N+1. Similarly with time. To
develop an algorithm capitalizing on this, construct a nearly
equivalent problem in which population states differ only according to
the difference between N and N+epsilon, for any epsilon. Now, we know
that if epsilon is infinitesimally small, we can exploit the
differential and integral calculus to derive an exponential function
describing this process, and compute the population value at time T
with one swift calculation. Of course, the answer isn't right: we
solved a different problem! But it is close, and methods exist to
determine just how close this approximation will be in specific
instances. We may even be able to apply particular corrections.
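A sketch of the analog approach's closed form, under the assumption that births are smeared into a continuous flow of 1/R new members per member per quantum, so dN/dt = N/R and N(T) = P * exp(T/R):

```python
import math

def analog_estimate(P, T, R):
    # Approach 2 (analog, sketch): solve the continuous surrogate
    # problem dN/dt = N/R in one swift calculation. As the text
    # notes, this answers a *different* problem, so the result only
    # approximates the discrete one.
    return P * math.exp(T / R)
```

In the discrete process each member doubles every R quanta, so the exact answer grows roughly as 2**(T/R), while the continuous surrogate grows as e**(T/R); since e > 2, the analog estimate overshoots, and by a predictable, correctable amount.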
Approach 3: (digital)
Use techniques developed for difference equations and recurrence
relations to come up with an exact answer requiring nearly as little
calculation as in the analog approach.
Approach 4: (digital?)
Place P cents in a bank account with compound interest rate
corresponding to R, and then see how much money you have at time T.
Approach 5: (analog)
Build a RLC integrating circuit with the right parameters. Apply
input voltage P and measure the output voltage at time T.
Approach 6: (analog?)
Observe some process with an exponential probability distribution
of events. Apply lots of transformations to get an answer.
There are probably many other interesting approaches, but I'll
leave it there.
Morals:
1. The notion of "analogy" or simulation does not do much to
distinguish analog from digital processing. Perhaps due to the nature
of our physical world, often there do seem to be more and better
analog analogies than digital analogies for many problems.
2. Speed of calculation also seems secondary. For example, the
calculus allows manipulation of representations involving infinite
numbers of states with a single calculation. But some digital methods
are fast too. Similarly with the fact that analog methods sometimes
allow compact representations (with single numbers and simple well
behaved functions representing entire problems). But one could
probably match, one-for-one, problems in which analog and digital
approaches were superior with respect to these attributes. This all
just amounts to acknowledging that the choice between ANY two
algorithms ought to take computational efficiency into account. And,
of course, the notion of "symbolic" vs. "non-symbolic" processing
plays no role here. All of the above approaches were symbolic in one
way or another.
3. The notion of approximation seems to be the most helpful one.
Again, for example, processing that involves implicit or explicit use
of the calculus can ONLY (given the above ground-rule) provide
approximations. Most such processing should probably be considered
analog. However, the usual conceptualization of approximation in
current use doesn't seem good enough. There are many digital
"heuristic" algorithms that are labelled as "approximations". (Worse,
discrete computational techniques for numerically solving "analytic"
problems like integration are also labelled "approximations" in nearly
a reverse sense.) For example, the nearest-neighbor heuristic is
considered as an approximation algorithm for the travelling
salesperson problem. But this seems to be a different sort of
approximation than using exponential equations to solve population
problems.
I'm not at all sure how to go about dealing with such
distinctions. Considerations of the robustness and the arbitrary level
of precision for approximations in the first sense might be useful,
but aren't the whole story: For example, several clearly digital
heuristics also have these properties (see, e.g., Karp's travelling
salesperson heuristic), but in somewhat different (e.g., probabilistic)
contexts. See J. Pearl's "Heuristics" book for related discussions.
Doug Lea
Computer Science
SUNY Oswego
Oswego, NY 13126
seismo!rochester!rocksvax!oswego!dl
------------------------------
Date: 4 Nov 86 01:55:22 GMT
From: rutgers!husc6!Diamond!aweinste@SPAM.ISTC.SRI.COM (Anders
Weinstein)
Subject: Re: Analog/Digital Distinction: 8 more replies
>Stevan Harnad:
>
>> Goodman mentions that the difference between continuity and density
>> is immaterial for his purposes, since density is always sufficient to
>> destroy differentiation (and hence "notationality" and "digitality" as
>> well).
>
>There seems to be some difference of opinion on this matter from the
>continuity enthusiasts, although they all advocate precision and rigor...
I don't believe there's any major difference here. The respondents who
require "continuity" are thinking only in terms of physics, where you don't
encounter magnitudes with dense but non-continuous ranges. Goodman deals with
other, artificially constructed symbol systems as well. In these we can, by
fiat, obtain a scheme that is dense but non-continuous. I think that
representation in such a scheme would fit most people's intuitive sense of
"analog-icity" if they thought about it.
>> Objects fail to be even theoretically discriminable if they can be
>> arbitrarily similar and still count as different.
>
>Do you mean cases like 2 vs. 1.9999999..., or cases like 2 vs. 2 minus epsilon?
>They both seem as if they could be either "theoretically
>discriminable" or "theoretically indiscriminable," depending on the
>theory.
I'm not sure what you mean here. I don't see how a length of 2 inches would
count as "theoretically discriminable" from a length of 1.999... inches; nor
is a length of 2 inches theoretically discriminable from a length of 2 minus
epsilon inches if epsilon is allowed to be arbitrarily small. On the other
hand, a length of 2 inches IS theoretically discriminable from a length of
1.9 inches.
In his examples, Goodman rules out cases where no measurement of any finite
degree of precision would be sufficient to make the requisite distinctions.
>> "Consider an ordinary watch without a second hand. The hour-hand is
>> normally used to pick out one of twelve divisions of the half-day.
>> It speaks notationally [and digitally -- AW]. So does the minute hand
>> if used only to pick out one of sixty divisions of the hour; but if
>> the absolute distance of the minute hand beyond the preceding mark is
>> taken as indicating the absolute time elapsed since that mark was
>> passed, the symbol system is non-notational. Of course, if we set
>> some limit -- whether of a half minute or one second or less -- upon
>> the fineness of judgment so to be made, the scheme here too may
>> become notational."
>
>So apparently it does not matter whether the watch is in fact an
>"analog" or "digital" watch (according to someone else's definition);
>according to Goodman's the critical factor is how it's used.
Right. Remember, Goodman is not talking about whether this is what an
engineer would class as an analog or digital WATCH (ie. in its internal
workings); he's ONLY talking about the symbol system used to represent the
time to the viewer. And he's totally relativistic here -- whether the
representation is analog or digital depends entirely on how it is to be
read.
------------------------------
Date: Tue, 4 Nov 86 17:10:39 pst
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: analog/digital distinction
Here's a quick shot at an A/D distinction.
The problem with the rationals was that the ordering and the operations
are easily translatable into computations on the natural numbers.
So, the proposal is:
DIGITAL: computations on a structure S that is recursively
isomorphic to a definable fragment of Peano Arithmetic.
ANALOG: computations on a dense structure that
is not recursively isomorphic to a definable fragment of Peano
Arithmetic.
Note there can be computations which are neither analog nor
digital according to this definition.
The rationale for this choice depends on two considerations.
(1) One must not be able to transform one kind of computation
into the other, which can be done only if there is a machine
(aka recursive function) that can do it.
(2) The distinction must not collapse in the face of the
possibility that physics will tell us the world is
fundamentally discrete (or fundamentally continuous), since
if Gerald Holton is to be believed, physical science has
been wavering between one and the other for thousands of years.
So the discrete/continuous nature of nature can be regarded
as a metaphysical issue, and we want to finesse this in our
definition to make it physically realistic.
I chose Peano Arithmetic as the base structure because it is
intuitively discrete, and all the digital structures that have
been proposed fit the criterion that they can be recursively
mapped into simple discrete arithmetic.
The density-of-values criterion for analog computation seems
intuitively plausible, and if one wants to make the distinction
between analog and digital into a feature of the world, not merely
of the representation chosen, one needs to assure consideration
(1) above.
If quantum physics ultimately tells us that the world is discrete,
there is no reason to assume that the discreteness in the world
will provide us with recursive functions mapping that discreteness
into the natural numbers, so analog computations will survive that
discovery.
Peter Ladkin
ladkin@kestrel.arpa
------------------------------
End of AIList Digest
********************
∂07-Nov-86 1725 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #250
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 7 Nov 86 17:24:32 PST
Date: Wed 5 Nov 1986 21:27-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #250
To: AIList@SRI-STRIPE
AIList Digest Thursday, 6 Nov 1986 Volume 4 : Issue 250
Today's Topics:
Philosophy - The Analog/Digital Distinction & Information
----------------------------------------------------------------------
Date: 29 Oct 86 16:29:10 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan
Harnad)
Subject: The A/D Distinction: 5 More Replies
[This message actually fits into the middle of the sequence I sent
yesterday. Sorry for the reversal. -- KIL]
Here are 5 more replies I've received on the A/D distinction. I'll
respond in a later module. [Meantime, could someone post this to
sci.electronics, to which I have no access, please?]
------
(1)
Message-Id: <8610271622.11564@ur-seneca.arpa>
In-Reply-To: <13@mind.UUCP>
U of Rochester, CS Dept, Rochester, NY
ken@rochester.arpa
CS Dept., U. of Roch., NY 14627.
Mon, 27 Oct 86 11:22:10 -0500
I think the distinction is simply this: digital deals with a finite set
of discrete {voltage, current, whatever} levels, while analog deals
with a *potentially* infinite set of levels. Now I know you are going
to say that analog is discrete at the electron noise level but the
circuits are built on the assumption that the spectrum is continuous.
This leads to different mathematical analyses.
Sort of like infinite-memory Turing machines: we don't have them, but we
program computers as if they had infinite memory and in practice as
long as we don't run out, it's ok. So as long as we don't notice the
noise in analog, it serves.
--------
(2)
Tue, 28 Oct 86 20:56:36 est
cuuxb!mwm
AT&T-IS, Software Support, Lisle IL
In article <7@mind.UUCP> you write:
>The ground-rules are these: Try to propose a clear and
>objective definition of the analog/digital distinction that is not
>arbitrary, relative, a matter of degree, or loses in the limit the
>intuitive distinction it was intended to capture.
>
>One prima facie non-starter: "continuous" vs. "discrete" physical
>processes.
>
>Stevan Harnad (princeton!mind!harnad)
Analog and digital are two ways of *representing* information. A
computer can be said to be analog or digital (or both!) depending
upon how the information is represented within the machine, and
particularly, how the information is represented when actual
computation takes place.
Digital is essentially a subset of analog, where the range of
properties used to represent information is grouped into a
finite set of values. For example, the classic TTL digital
model uses electrical voltage to represent values, and is
grouped into the following:
above +5 volts -- not used
+2..+5 volts (approx) -- a binary 1
0..+2 volts (approx) -- a binary 0
negative voltage -- not used.
Important to distinguish here is the grouping of the essentially
infinite possibilities of voltage into a finite set of values.
A system that used 4 voltage ranges to represent a base 4 number
system would still be digital. Note that this means that it
takes several voltages to represent an arbitrarily precise number.
Analog, on the other hand, refers to using a property to directly
represent an infinite range of values with a different infinite
range of values: for example representing the number 15 with
15 volts, and the number 100 with 100 volts. Note that this means
it takes 1 voltage to represent an arbitrarily precise number.
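The contrast can be sketched in a few lines (the thresholds follow the TTL grouping above; the function names are illustrative):

```python
def ttl_decode(volts):
    # Digital reading: the infinite range of voltages is collapsed
    # into a finite set of values, per the TTL grouping above.
    if 2.0 <= volts <= 5.0:
        return 1          # binary 1
    if 0.0 <= volts < 2.0:
        return 0          # binary 0
    return None           # voltage range not used

def analog_decode(volts):
    # Analog reading: the voltage *is* the number, so arbitrarily
    # close voltages denote distinct values.
    return volts
```

Any voltage between 2 and 5 volts decodes to the same binary 1, whereas the analog reading preserves every distinction.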
This is my pot-shot at defining analog/digital and how they relate,
and how they are used in most systems i am familiar with. I think
these make reasonably clear what it is that "analog to digital"
converters (and "digital to analog") do.
This is why slide-rules are considered analog, you are using distance
rather than voltage, but you can interpret a distance as precisely
as you want. An abacus, on the other hand, also uses distance, but
where a disk means either one thing or another, and it takes
lots of disks to represent a number. An abacus, then, is digital.
Marc Mengel
...!ihnp4!cuuxb!mwm
--------
(3)
<bcsaic!ray>
Thu, 23 Oct 86 13:10:47 pdt
Message-Id: <8610232010.AA18462@bcsaic.LOCAL>
Try this:
(An) analog is a (partial) DUPLICATE (or abstraction)
of some material thing or some process, which contains
(it is hoped) the significant characteristics and properties
of the original. An analog is driven by situations and events
outside itself, and its usefulness is that the analog may be
observed and, via induction, the original understood.
A digital device or method operates on symbols, rather than
physical (or other) reality. Analog computers may operate on
(real) voltages and electron flow, while digital computers
operate on symbols and their logical interrelationships.
Digital operations are formal; that is they treat form rather
than content, and are therefore always deductive, while the
behavior of real things and their analogs is not. (Heresy follows).
It is one of my (unpopular) assertions that the central nervous
system of living organisms (including myself) is best understood
as an analog of "reality"; that most interesting behavior
such as induction and the detection of similarity (analogy and
metaphor) cannot be accomplished with only symbolic, and
therefore deductive, methods.
--------
(4)
Mon, 27 Oct 86 16:04:36 mst
lanl!a.LANL.ARPA!crs (Charlie Sorsby)
Message-Id: <8610272304.AA25429@a.ARPA>
References: <7@mind.UUCP> <45900003@orstcs.UUCP>, <13@mind.UUCP>
Stevan,
I've been more or less following your query and the resulting articles.
It seems to me that the terms as they are *usually* used today are rather
bastardized. Don't you think that when the two terms originated they
referred to two ways of "computing" and *not* to kinds of circuits at all?
The analog simulator (or, more popularly, analog computer) "computed" by
analogy. And, old timers may recall, they weren't all electronic or even
electrical. I vaguely recall reading about an analog simultaneous
linear-equation solver that comprised plates (rectangular, I think), cables
and pulleys.
Digital computers (truly so) on the other hand computed with *digits* (i.e.
numbers). Of course there was (is) analogy involved here too but that was
a "higher-order term" in the view and was conveniently ignored as higher
order terms often are.
In the course of time, the term analog came to be used for those
electronic circuits *like* those used in analog simulators (i.e. circuits
that work with continuous quantities). And, of course, digital came to
refer to those circuits *like* those used in digital computers (i.e. those
which work with discrete or quantized quantities).
Whether a quantity is continuous or discrete depends on such things as the
attribute considered to say nothing of the person doing the considering,
hence the vagueness of definition and usage of the terms. This vagueness
seems to have worsened with the passage of time.
Best regards,
Charlie Sorsby
...!{cmcl2,ihnp4,...}!lanl!crs
crs@lanl.arpa
--------
(5)
Message-Id: <8610280022.AA16966@mitre.ARPA>
Organization: The MITRE Corp., Washington, D.C.
sundt@mitre.ARPA
Date: Mon, 27 Oct 86 19:22:21 -0500
Having read your messages for the last few months, I
couldn't help but take a stab on this issue.
Coming from a heavily theoretical undergraduate physics background,
it seems obvious that the ONLY distinction between the analog and
digital representation is the enumerability of the relationships
under the given representation.
First of all, the form of digital representation must be split into
two categories, that of a finite representation, and that of a
countably infinite representation. Turing machines assume a countably
infinite representation, whereas any physically realizable digital computer
must inherently assume a finite digital representation (be it ever so large).
Thus, we have three distinctions to make:
1) Analog / Finite Digital
2) Countably-Infinite Digital / Finite Digital
3) Analog / Countably-Infinite Digital
Second, there must be some predicate O(a,b) defined over all the a and b
in the representation such that the predicate O(a,b) yields only one of
a finite set of symbols, S(i) (e.g. "True/False"). If such a predicate does
not exist, then the representation is arguably ambiguous and the symbols are
"meaningless".
An example of an O(a,b) is the equality predicate over the reals, integers,
etc.
Looking at all the (a,b) pairs that map the O(a,b) predicate into the
individual S(i), note that the following is true:
ANALOG REPRESENTATION: the (a,b) pairs cannot be enumerated for ALL
S(i).
COUNTABLY-INFINITE DIGITAL REPRESENTATION: the (a,b) pairs cannot be
enumerated for ALL S(i).
FINITE DIGITAL REPRESENTATION: all the (a,b) pairs for all the S(i)
CAN be enumerated.
This distinguishes the finite digital representation from the other two
representations. I believe this is the distinction you were asking about.
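The finite case can be sketched directly, assuming the equality predicate from the text over a small finite domain (the function name is illustrative):

```python
from itertools import product

def enumerate_classes(domain, predicate):
    # For a *finite* digital representation: list every (a, b) pair
    # that the predicate O(a, b) maps to each outcome symbol S(i).
    # This exhaustive enumeration is exactly what fails for analog
    # and countably-infinite representations.
    classes = {}
    for a, b in product(domain, repeat=2):
        classes.setdefault(predicate(a, b), []).append((a, b))
    return classes

# Equality predicate over a 3-value domain: 3 pairs map to True,
# 6 pairs map to False, and the listing is complete.
eq_classes = enumerate_classes([0, 1, 2], lambda a, b: a == b)
```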
The distinction between the analog representation and the countably-infinite
digital representation is harder to identify. I sense it would require
the definition of a mapping M(a,b) onto the representation itself, and
the study of how this mapping relates to the O(a,b) predicate.
That is, is there some relationship between O(?,?), M(?,?) and the (a,b)
that is analogous to divisibility in Z and R. How this would be formulated
escapes me.
On your other-minds problem:
[see "Searle, Turing, Categories, Symbols"]
I think the issue here is related to the above classification. In particular,
I think the point to be made is that we can characterize when something is
NOT intelligent, but are unable to define when it is.
A less controversial issue would be to "Define chaos". Any attempt to do so
would give it a fixed structure, and therefore order. Thus, we can only
define chaos in terms of what it isn't, i.e. "Chaos is anything that cannot
be categorized."
Thus, it is the quality that is lost when a signal is digitized to either a
finite or a countably-infinite digital representation.
Analog representations would not suffer this loss of chaos.
Carrying this thought back to "intelligence," intelligence is the quality that
is lost when the behavior is categorized among a set of values. Thus, to
detect intelligence, you must use analog representations ( and
meta-representations). And I am forced to conclude that the Turing test must
always be inadequate in assessing intelligence, and that you need to be an
intelligent being to *know* an intelligent being when you see one!!!
Of course, there is much error in categorizations like this, so in the *real*
world, a countably-infinite digital representation might be *O.K.*.
I wholly agree with your argument for a basing of symbols on observables,
and would also argue that semantic content is purely a result of a rich
syntactic structure with only a few primitive predicates, such as set
relations, ordering relations, etc.
Thinking about it further, I would argue, in view of what I just said, that
people are by construction only "faking" intelligence, and that we have
achieved a complexity whereby we can perceive *some* of the chaos left
by our crude categorizations (perhaps through multiple categorizations of
the same phenomena), and that this perception itself gives us the appearance
of intelligence. Our perceptions reveal only the tip of the chaotic iceberg,
however, by definition. To have true intelligence would require the perception
of *ALL* the chaos.
I hope you found this entertaining, and am anxious to hear your response.
Mitch Sundt The MITRE Corp. sundt@mitre.arpa
------------------------------
Date: 3 Nov 86 23:40:28 GMT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: Re: The Analog/Digital Distinction
(weinstein quoting goodman)
> > A scheme is syntactically dense if it provides for infinitely many
> > characters so ordered that between each two there is a third.
(harnad)
> I'm no mathematician, but it seems to me that this is not strong
> enough for the continuity of the real number line. The rational
> numbers are "syntactically dense" according to this definition.
Correct. There is no first-order way of defining the
real number line without introducing something like countably
infinite sequences and limits as primitives.
Moreover, if this is done in a countable language, you are
guaranteed that there is a countable model (if the definition
isn't contradictory). Since the real line isn't countable,
the definition cannot ensure you get the REAL reals.
Weinstein wants to identify *analog* with *syntactically dense*
plus some other conditions. Harnad observes that the rationals
fit the notion of syntactic density.
The rationals are, up to isomorphism, the only countable, dense,
linear order without endpoints. So any syntactically dense scheme
fitting this description is (isomorphic to) the rationals,
or a subinterval of the rationals (if left-closed, right-closed,
or both-closed at the ends).
One consequence is that one could define such an *analog* system
from a *digital* one by the following method:
Use the well-known way of defining the rationals from the
integers - rationals are pairs (a,b) of integers,
and (a,b) is *equivalent* to (c,d) iff a.d = b.c.
The *equivalence* classes are just the rationals, and
they are semantically dense under the ordering
(a,b) < (c,d) iff there is (f,g) such that f,g have
the same sign and (a,b) + (f,g) = (c,d)
where (a,b) + (c,d) = (ad + bc, bd), and the + is factored
through the equivalence.
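The construction writes out directly as integer computations (a sketch; the ordering is implemented in the standard cross-multiplication form, which is equivalent to the (f,g)-witness definition above for pairs with nonzero second component):

```python
def equiv(p, q):
    # A rational is a pair (a, b) of integers, b != 0, with
    # (a, b) ~ (c, d) iff a*d == b*c -- all integer arithmetic.
    (a, b), (c, d) = p, q
    return a * d == b * c

def less(p, q):
    # a/b < c/d iff a*d < b*c when b and d have the same sign;
    # the inequality flips when the signs differ.
    (a, b), (c, d) = p, q
    if (b > 0) == (d > 0):
        return a * d < b * c
    return a * d > b * c

def midpoint(p, q):
    # Witness to density: a pair strictly between p and q.
    (a, b), (c, d) = p, q
    return (a * d + c * b, 2 * b * d)
```

Every operation here is a recursive (indeed primitive recursive) function on the integers, which is the point: the dense scheme is obtained from a paradigmatically digital one.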
We may be committed to this kind of phenomenon, since every
plausible suggested definition must have a countable model,
unless we include principles about non-countable sets that
are independent of set theory. And I conjecture that every
suggestion with a countable model is going to be straightforwardly
obtainable from the integers, as the above example was.
Peter Ladkin
ladkin@kestrel.arpa
------------------------------
Date: 3 Nov 86 23:47:34 GMT
From: ladkin@kestrel.ARPA (Peter Ladkin)
Subject: Re: The Analog/Digital Distinction
In article <1701@Diamond.BBN.COM>, aweinste@Diamond.BBN.COM
(Anders Weinstein) writes:
> The upshot of Goodman's requirement is that if a symbol system is to count as
> "digital" (or as "notational"), there must be some finite sized "gaps",
> however minute, between the distinct elements that need to be distinguished.
I'm not sure you want this definition of the distinction.
There are *finite-sized gaps, however minute* between rational
numbers, and if we use the pairs-of-integers representation to
represent the syntactically dense scheme, (which must be
isomorphic to some subrange of the rationals if countable)
we may use the integers and their gaps to distinguish the gaps
in the syntactically dense scheme, in a quantifier-free manner.
Thus syntactically dense schemes would count as *digital*, too.
Peter Ladkin
ladkin@kestrel.arpa
------------------------------
Date: 4 Nov 86 19:03:09 GMT
From: nsc!amdahl!apple!turk@hplabs.hp.com (Ken "Turk" Turkowski)
Subject: Re: Analog/Digital Distinction
In article <116@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>(2) winnie!brad (Brad Garton) writes:
>> ... When I consider the digitized versions of analog
>> signals we deal with over here <computer music>, it seems that we
>> approximate more and more closely the analog signal with the
>> digital one as we increase the sampling rate.
There is a difference between sampled signals and digital signals. A digital
signal is not only sampled, but is also quantized. One can have an analog
sampled signal, as with CCD filters.
As a practical consideration, all analog signals are band-limited. By the
Sampling Theorem, there is a sampling rate at which a bandlimited signal can
be perfectly reconstructed. *Increasing the sampling rate beyond this
"Nyquist rate" cannot result in higher fidelity*.
What can affect the fidelity, however, is the quantization of the samples:
the more bits used to represent each sample, the more accurately the signal
is represented.
This brings us to the subject of Signal Theory. A signal that is effectively
both time- and band-limited (as all real-world signals are) can be represented
by a linear combination of a finite number of basis functions. This number is
the dimensionality of the signal, which is approximately 2WT, where W is the
bandwidth and T is the duration of the signal.
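The sampled-versus-quantized distinction above can be sketched in a few lines
(my own illustration, not from the posting; the signal, rates, and bit widths
are invented):

```python
import numpy as np

def quantize(samples, n_bits, full_scale=1.0):
    """Uniform quantizer: round each (still analog-valued) sample to one of
    2**n_bits levels -- this second step is what makes a signal 'digital'."""
    step = 2.0 * full_scale / 2 ** n_bits
    return step * np.round(samples / step)

# A 50 Hz sine sampled at 400 Hz, i.e., well above its 100 Hz Nyquist rate.
t = np.arange(0, 0.1, 1.0 / 400.0)
analog_samples = np.sin(2 * np.pi * 50.0 * t)   # sampled but not yet quantized

# More bits per sample -> smaller quantization error, as the posting argues.
err4 = np.max(np.abs(quantize(analog_samples, 4) - analog_samples))
err12 = np.max(np.abs(quantize(analog_samples, 12) - analog_samples))
assert err12 < err4 <= 0.5 * 2.0 / 2 ** 4       # error at most half a step
```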
>> ... This process reminds
>> me of Mandelbrot's original "How Long is the Coastline of Britain"
>> article dealing with fractals. Perhaps "analog" could be thought
>> of as the outer limit of some fractal set, with various "digital"
>> representations being inner cutoffs.
Fractals have a 1/f frequency distribution, and hence are not band-limited.
>> In article <105@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>> I'm not convinced. Common ways of transmitting analog signals all
>> *do* lose at least some of the signal, irretrievably...
Let's not forget noise. It is impossible to keep noise out of analog channels
and signal processing, but it can be removed in digital channels and can be
controlled (roundoff errors) in digital signal processing.
>> ... Losses of information in processing analog signals tend to
>> be worse, and for an analog transformation to be exactly invertible, it
>> *must* preserve all the information in its input.
Including the exclusion of noise. Once noise is introduced, the signal cannot
be exactly inverted.
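The claim that noise can be removed from digital channels but not analog ones
can be sketched as follows (my own illustration; the levels, noise amplitude,
and names are invented):

```python
import random

random.seed(1)
bits = [random.randint(0, 1) for _ in range(1000)]

def through_channel(levels, noise_amp):
    """Transmit the levels with additive uniform noise, as any channel does."""
    return [b + random.uniform(-noise_amp, noise_amp) for b in levels]

received = through_channel([float(b) for b in bits], noise_amp=0.4)

# Digital regeneration: snap back to the nearest level.  As long as the noise
# stays below half the level spacing, the noise is removed exactly.
regenerated = [1 if r > 0.5 else 0 for r in received]
assert regenerated == bits

# An analog receiver has no a-priori levels to snap to; the noise stays in.
assert any(abs(r - b) > 0.1 for r, b in zip(received, bits))
```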
--
Ken Turkowski @ Apple Computer, Inc., Cupertino, CA
UUCP: {sun,nsc}!apple!turk
CSNET: turk@Apple.CSNET
ARPA: turk%Apple@csnet-relay.ARPA
------------------------------
Date: Wed 5 Nov 86 21:03:42-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Information in Signals
From: nsc!amdahl!apple!turk@hplabs.hp.com (Ken "Turk" Turkowski)
Message-Id: <267@apple.UUCP>
*Increasing the sampling rate beyond this
"Nyquist rate" cannot result in higher fidelity*.
>>... Losses of information in processing analog signals tend to
>>be worse, and for an analog transformation to be exactly invertible, it
>>*must* preserve all the information in its input.
Including the exclusion of noise. Once noise is introduced, the signal
cannot be exactly inverted.
To pick a couple of nits:
Sampling at the Nyquist rate preserves information, but only if the proper
interpolation function is used to reconstruct the continuous signal. Often
this function is nonphysical in the sense that it extends infinitely far
in each temporal direction and contains negative coefficients that are
difficult to implement in some types of analog hardware (e.g., incoherent
optics). One of the reasons for going to digital processing is that
[approximate] sinc or Bessel functions are easier to deal with in the digital
domain. If a sampled signal is simply run through the handiest speaker
system or other nonoptimal reconstruction, sampling at a higher rate
may indeed increase fidelity.
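The reconstruction caveat above can be sketched (my own code, not Ken Laws's;
the 10 Hz tone, rates, and tolerance are invented). Shannon interpolation
recovers a band-limited signal from Nyquist-rate samples, but its sinc kernel
extends across the entire record and takes negative values:

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n).
    The kernel is nonlocal (every sample contributes to every instant) and
    has negative lobes -- the properties that make it awkward in hardware."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t[:, np.newaxis] - n), axis=1)

fs = 100.0                                # sampling rate; Nyquist for < 50 Hz
n = np.arange(200)                        # a 2-second record
samples = np.sin(2 * np.pi * 10.0 * n / fs)

# Off-grid instants well inside the record reconstruct closely; near the
# edges the kernel's slowly decaying (1/t) tails degrade the fidelity.
t = np.array([0.505, 0.7501, 1.0003])
x_hat = sinc_reconstruct(samples, fs, t)
assert np.allclose(x_hat, np.sin(2 * np.pi * 10.0 * t), atol=0.05)
assert np.sinc(1.5) < 0                   # the negative coefficients at issue
```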
The other two quotes are talking about two different things. No transformation
(analog or digital) is invertible if it loses information, but adding noise
to a signal may or may not degrade its information content. An analog signal
can be just as redundant as any coded digital signal -- in fact, most digital
"signals" are actually continuous encodings of discrete sequences. To talk
about invertibility one must define the information in a signal -- which,
unfortunately, depends on the observer's knowledge as much as it does on the
degrees of freedom or joint probability distribution of the signal elements.
Even "degree of freedom" and "probability" are not well defined, so that
our theories are ultimately grounded in faith and custom. Fortunately the
real world is kind: our theories tend to be useful and even robust despite
the lack of firm foundations. Philosophers may demonstrate that engineers
are building houses of cards on shifting sands, but the engineers will build
as long as their houses continue to stand.
-- Ken Laws
------------------------------
Date: Wed, 5 Nov 1986 16:00 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest V4 #248
With all due respect, I wonder if the digital-analog discussion could
be tabled soon. I myself do not consider it useful to catalog the
dispositions of many different persons' use of a word; in any case the
thing has simply gone past the bounds of 1200 baud communication.
Please. On to some substance.
------------------------------
End of AIList Digest
********************
∂07-Nov-86 1940 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #251
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 7 Nov 86 19:40:27 PST
Date: Wed 5 Nov 1986 21:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #251
To: AIList@SRI-STRIPE
AIList Digest Thursday, 6 Nov 1986 Volume 4 : Issue 251
Today's Topics:
Queries - Franz Object-Oriented Packages &
Sentient-Computer Novels &
Simulating a Neural Network,
Application - Snooker-Playing Robots,
Ethics - Moral Responsibility,
Seminars - Planning Simultaneous Actions (UPenn) &
Scientific Discovery (CMU) &
Machine Inductive Inference (CMU) &
Case-Based Learning System (Rutgers)
----------------------------------------------------------------------
Date: Wed, 5 Nov 86 13:08:28 EST
From: weltyc%cieunix@CSV.RPI.EDU (Christopher A. Welty)
Subject: Looking for Franz OO packages
I am looking for information on Object Oriented extensions to
Franz Lisp. I know that someone (U of Maryland?) came out with a flavors
package for Franz; if someone can point me in the right direction
it would be appreciated, as well as any info on other packages...
------------------------------
Date: 5 Nov 86 23:45:05 GMT
From: gknight@ngp.utexas.edu (Gary Knight)
Subject: Canonical list of sentient computer novels
I am trying to compile a canonical list of SF *novels* dealing with (1)
sentient computers, and (2) human mental access to computers or computer
networks. Examples of the two categories (and my particular favorites as well)
are:
A) SENTIENT COMPUTERS
The Adolescence of P-1, by Thomas J. Ryan
Valentina: Soul in Sapphire, by Joseph H. Delaney and Marc Stiegler
Cybernetic Samurai, by (I forget)
Coils, by Roger Zelazny
B) HUMAN ACCESS
True Names, by Vernor Vinge
Neuromancer and Count Zero, by William Gibson
I'm not sure how this is done, but my thought is for all of you sf-fans
out there to send me e-mail lists of such novels (separate, by category A and
B), and I'll compile and post the ultimate canonical version. I've heard that
this exercise was undertaken a year or so ago, but I don't have access to that
list and besides I'd like to get fresh input anyway (and recent qualifying
books).
So let me hear from you . . . .
Gary
--
Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480).
Biopsychology Program, Univ. of Texas at Austin. "There is nothing better
in life than to have a goal and be working toward it." -- Goethe.
------------------------------
Date: 30 Oct 86 15:20:24 GMT
From: ihnp4!inuxc!iuvax!cdaf@ucbvax.Berkeley.EDU (Charles Daffinger)
Subject: Re: simulating a neural network
In article <151@uwslh.UUCP> lishka@uwslh.UUCP [Chris Lishka] writes:
>
>...
> Apparently Bell Labs (I think) has been experimenting with neural
>network-like chips, with resistors replacing bytes (I guess). They started
>out with about 22 'neurons' and have gotten up to 256 or 512 (can't
>remember which) 'neurons' on one chip now. Apparently these 'neurons' are
>supposed to run much faster than human neurons...
↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
What bothers me is that the performance is rated on speed. Unlike the
typical synchronous digital computer, neuronal networks are asynchronous,
communicating via a temporal discharge of 'spikes' through axons which vary
considerably in length as well as conduction speed, and they exploit SLOW
signals just as they do FAST ones. (Look at the neural mechanism for a
reflex, or for focusing the eye, as examples.)
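The point about temporal coding can be made concrete with a toy model
(entirely my own construction; the posts contain no code, and all constants
are invented): a leaky integrate-and-fire neuron driven by spikes arriving
over an axon with a fixed conduction delay. The computation lives in arrival
times, so a slower axon shifts the response rather than degrading it:

```python
def simulate(delay_steps, n_steps=500, dt=1e-3, tau=0.02, threshold=1.0):
    """Leaky integrate-and-fire neuron; returns spike times in steps."""
    v = 0.0                                    # membrane potential
    inbox = [0.0] * (n_steps + delay_steps)    # spikes in transit on the axon
    spikes = []
    for t in range(n_steps):
        if t % 50 == 0:                        # presynaptic cell fires periodically
            inbox[t + delay_steps] += 1.5      # its spike arrives delay_steps later
        v += dt * (-v / tau) + inbox[t]        # leak plus any arriving input
        if v >= threshold:
            spikes.append(t)
            v = 0.0                            # reset after firing
    return spikes

# A longer (slower) axon shifts every output spike by exactly the extra delay:
fast, slow = simulate(delay_steps=2), simulate(delay_steps=40)
assert [t + 38 for t in fast] == slow
```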
I am curious how much of the essence of their namesakes was really
captured in these 'neurons'.
-charles
--
... You raise the blade, you make the change, you re-arrange me til I'm sane...
Pink Floyd
------------------------------
Date: 3 Nov 1986 13:49:29 GMT
From: Icarus Sparry <cc←is%ux63.bath.ac.uk@Cs.Ucl.AC.UK>
Subject: Snooker playing robots
This is being posted on behalf of another member of staff, who is
not able to get through the UCL gateway
------
Newsgroups: mod.ai
Subject: Re: Robot Snooker-player
References: <861020-061334-1337@Xerox>
Reply-To: cc←dgdc@ux63.bath.ac.uk (Clark)
Organization: University of Bath, England
I believe you will find the robot snooker player at Bristol University,
England. I too saw a local tv news program about it last year.
I think the AI group is in one of the Engineering Departments.
Doug Clark
Bath University
----------
Icarus
Mr I. W. J. Sparry Phone 0225 826826 x 5983
Head of Microcomputer Unit Telex 449097
University of Bath e-mail:
Claverton Down cc←is@UK.AC.BATH.UX63
Bath BA2 7AY !mcvax!ukc!hlh!bath63!cc←is
England cc←is%ux63.bath.ac.uk@ucl-cs.arpa
------------------------------
Date: Wed, 5 Nov 86 12:25:26 est
From: Randy Goebel LPAIG
<rggoebel%watdragon.waterloo.edu@CSNET-RELAY.ARPA>
Subject: Re: moral responsibility
Patrick Hayes writes
> ...Weizenbaum has made a successful career by
> systematically attacking AI research on the grounds that it is somehow
> immoral, and finding a large and willing audience.
Weizenbaum does, indeed and unfortunately, attract a large, willing and
naive audience. For some reason, there seems to be a large not-quite-
computer-literate population that wants to believe that AI is potentially
dangerous to ``real'' intelligence. But it is not completely fair to
conclude that Weizenbaum believes AI to be immoral; it is correct for
Patrick to qualify his conclusion as ``somehow'' immoral. Weizenbaum
acknowledges the general concept of intelligence, with both human and artificial
kinds as manifestations. He even prefers the methodology of the artificial
kind, especially when it relieves us from experiments on, say, the visual
cortex of cats.
Weizenbaum does claim that certain aspects of AI are immoral but, as the
helicopter example illustrates, his judgment is not exclusive to AI. As AI
encroaches on those things Weizenbaum values most (e.g., human
dignity, human life, human emotions), it is natural for him to speak about
the potential dangers that AI poses. I suspect that, if Weizenbaum were
a nuclear physicist instead of a computer scientist, he would focus more
attention on the immorality of fission and fusion.
It is Weizenbaum's own principles of morality that determine these judgments.
He acknowledges that, and places his principles in the public forum every
time he speaks.
------------------------------
Date: Mon, 3 Nov 86 14:27 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Planning Simultaneous Actions (UPenn)
Computer and Information Science Colloquium
University of Pennsylvania
3-4:30 pm Thursday, November 6, 1986
Room 216 - Moore School
PLANNING SIMULTANEOUS ACTIONS IN TEMPORALLY RICH WORLDS
Professor James Allen
Department of Computer Science
University of Rochester
This talk describes work done with Richard Pelavin over the last few years.
We have developed a formal logic of action that allows us to represent
knowledge and reason about the interactions between events that occur
simultaneously or overlap in time. This includes interactions between two
(or more) actions that a single agent might perform simultaneously, as well
as interactions between an agent's actions and events occurring in the
external world. The logic is built upon an interval-based temporal logic
extended with modal operators similar to temporal necessity and a
counterfactual operator. Using this formalism, we can represent a wide
range of possible ways in which actions may interact.
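As a concrete aside (my own sketch, not part of the announcement): the
interval-based temporal logic referred to rests on Allen's thirteen basic
relations between time intervals, each decidable from endpoint comparisons:

```python
def allen_relation(a, b):
    """Classify intervals a=(a1,a2), b=(b1,b2) into one of Allen's 13 relations."""
    (a1, a2), (b1, b2) = a, b
    assert a1 < a2 and b1 < b2, "intervals must have positive extent"
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    # remaining cases are proper overlaps, distinguished by who starts first
    return "overlaps" if a1 < b1 else "overlapped-by"

assert allen_relation((0, 2), (2, 5)) == "meets"
assert allen_relation((1, 3), (2, 5)) == "overlaps"
assert allen_relation((2, 3), (1, 5)) == "during"
```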
------------------------------
Date: 4 Nov 86 15:44:08 EST
From: Steven.Minton@k.cs.cmu.edu
Subject: Seminar - Scientific Discovery (CMU)
As usual, 3:15 in 7220. This week's speaker is Deepak Kulkarni.
Title: Processes of scientific discovery: Strategy of Experimentation
KEKADA is a program that models some strategies of experimentation
which scientists use in their research. When augmented with
appropriate background knowledge, it can simulate in detail Krebs' course of
discovery of urea synthesis. Williamson's discovery of alcohol-structure is
another discovery it can simulate.
I would like to discuss the general mechanisms used in the system and some
half-baked ideas about further work on the system.
-----
Deepak told me that he's very interested in getting feedback on some
of his ideas for further work. I'm hoping that we'll have a lively
feedback session.
- Steve
------------------------------
Date: 27 Oct 86 14:20:41 EST
From: Lydia.Defilippo@cad.cs.cmu.edu
Subject: Seminar - Machine Inductive Inference (CMU)
Dates: 3-Nov-86
Time: 4:00
Cboards: general
Place: 223d Porter Hall
Type: Philosophy Colloquium
Duration: one hour
Who: Scott Weinstein, University of Pennsylvania
Topic: Some Recent Results in the Theory of Machine Inductive Inference
Host: Dan Hausman
The talk will describe recent research by Dan Osherson, Mike Stob and
myself on a variety of topics of epistemological interest in the
theory of machine inductive inference. The topics covered will
include limitations on mechanical realizations of Bayesian inference
methods, the synthesis of inference machines from descriptions of the
problem domains for which they are intended and the identification of
relational structures.
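The identification problems mentioned can be illustrated with a toy
Gold-style learner (entirely my own sketch; the class of languages and all
names are invented, not from the talk). The learner sees an ever-growing
sample and conjectures the most specific language in a fixed finite class
consistent with it; on any text of a language in the class, its conjectures
converge and never change again:

```python
LANGUAGES = {                       # hypothetical class of languages over integers
    "multiples-of-4": lambda x: x % 4 == 0,
    "evens": lambda x: x % 2 == 0,
    "all": lambda x: True,
}
ORDER = ["multiples-of-4", "evens", "all"]   # most to least specific

def conjecture(sample):
    """Return the most specific language consistent with the sample."""
    for name in ORDER:
        if all(LANGUAGES[name](x) for x in sample):
            return name
    raise ValueError("no consistent language")

# A text (enumeration) of the even numbers: the guesses stabilize on "evens"
# after finitely many examples -- identification in the limit.
text = [0, 4, 8, 2, 6, 10]
guesses = [conjecture(text[:i + 1]) for i in range(len(text))]
assert guesses == ["multiples-of-4"] * 3 + ["evens"] * 3
```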
------------------------------
Date: 29 Oct 86 22:57:40 EST
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Case-Based Learning System (Rutgers)
TITLE: Memory Access Techniques for a Case-based
Learning System
SPEAKER: Wendy Lehnert
DATE: Monday, November 3
LOCATION: Princeton University, Green Hall, Langfeld Lounge
TIME: 12:00 - 1:00 p.m.
Abstract
Traditionally, symbolic processing techniques in artificial
intelligence have addressed "high-level" cognitive tasks
like expert reasoning, natural language processing,
and knowledge acquisition. At the same time, a separate
paradigm of connectionist techniques has addressed
"low-level" perceptual problems like word recognition,
stereoscopic vision and speech recognition. While
symbolic computation models are frequently characterized as
brittle, difficult to extend, and exceedingly fragile, many
connectionist models exhibit graceful degradation and natural
methodologies for system expansion.
In this talk, we will look at how connectionist techniques
might be useful as a strategy for indexing symbolic memory.
Our discussion will focus on two seemingly unrelated tasks:
word pronunciation and narrative summarization. We will
endeavor to show how both problems can be approached with
similar strategies for indexing memory and resolving
competing indices.
------------------------------
End of AIList Digest
********************
∂07-Nov-86 2215 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #252
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 7 Nov 86 22:15:24 PST
Date: Wed 5 Nov 1986 21:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #252
To: AIList@SRI-STRIPE
AIList Digest Thursday, 6 Nov 1986 Volume 4 : Issue 252
Today's Topics:
Funding - NSF Knowledge and Database Systems Awards
----------------------------------------------------------------------
Date: Fri 31 Oct 86 11:31:36-CST
From: ICS.DEKEN@R20.UTEXAS.EDU
Subject: Knowledge and Database Systems Awards - NSF
Fiscal Year 1986 Research Projects
Funded by the Information Science Program
(now Knowledge and Database Systems Program)
A complete listing of these awards, including
short descriptive abstracts of the research
is available by writing to:
Joseph Deken, Director
Knowledge and Database Systems Program
National Science Foundation
1800 G Street NW
Washington, DC 20550
IST-8504726
$42,196 - 12 mos.
James F. Allen
University of Rochester
Plan-Based Approaches to Extended Dialogues
- - -
BNS-8518675
$40,000 - 12 mos.
James A. Anderson
Brown University
Cognitive Applications of Matrix Memory Models
- - -
IST-8511531
$19,097 - 12 mos.
Robert Berwick
Massachusetts Institute of Technology
Learnability and Parsability
- - -
DCR-8603231
$27,000 - 12 mos.
Alan W. Biermann
Duke University
Dialog Processing for Voice Interactive Problem Solving
- - -
IST-8612177
$9,747 - 12 mos.
Jeffrey Bonar
University of Pittsburgh
Partial Support for Third International Conference on Artificial
Intelligence and Education, Pittsburgh, PA, May 1987
- - -
IST-8604923
$64,660 - 12 mos.
Alan H. Borning
University of Washington
Automatic Generation of Interactive Displays
- - -
IST-8643739
$121,074 - 12 mos.
Bruce C. Buchanan
Stanford University
Information Structure and Use in Knowledge-Based Expert
Systems
- - -
IST-8607303
$10,000 - 12 mos.
Kathleen M. Carley
Carnegie-Mellon University
Knowledge Acquisition as a Social Phenomenon
- - -
IST-8515005
$75,900 - 12 mos.
Eugene Charniak
Brown University
A Single-Semantic-Process Theory of Parsing
- - -
IST-8644629
$60,449 - 12 mos.
Eugene Charniak
Brown University
An Approach to Abductive Inference in Artificial Intelligence Systems
- - -
IST-8608362
$172,9212 - 12 mos.
Richard Cullingford
Georgia Institute of Technology
Robust Interaction and Natural Problem-Solving in Advice-Giving Systems
- - -
IST-8506706
$60,942 - 12 mos.
Donald Dearholt
New Mexico State University
Properties of Networks Derived from Proximities
- - -
IST-8518706
$37,243 - 12 mos.
Andre de Korvin
Indiana University Foundation
Modeling Goal Uncertainty and Goal Shaping in a Generalized
Information System
- - -
IST-8609441
$20,205 - 6 mos.
Michael L. Dertouzos
Massachusetts Inst. Tech.
Conference on Cellular Automata: Parallel Information Processing for
Mathematics and Science
- - -
IST-8519926
$73,460 - 12 mos.
Thomas G. Dietterich
Oregon State University
Learning by Experimentation
- - -
IST-8519924
$43,489 - 12 mos.
John W. DuBois
University of California at Los Angeles
Information Transfer Constraints and Strategies in Natural
Language Communication
- - -
DCR-8602385
$25,000 - 12 mos.
Wayne Dyksen and Mikhail Atallah
Purdue University
High Level Systems for Scientific Computing
- - -
IST-8609123
$93,156 - 24 mos.
Andrew U. Frank
University of Maine at Orono
A Formal Model for Representation and Manipulation of Spatial
Subdivisions in Information Systems
- - -
IST-8611673
$18,000 - 3 mos.
Thomas Gay
University of Connecticut Health Center
Travel Grant: U.S. - U.S.S.R. Symposium on Information Coding and
Transmission in Biological Systems, October 3-13, 1986
- - -
IST-8512419
$70,867 - 12 mos.
Richard Granger
University of California, Irvine
Unification of Lexical, Syntactic, and Pragmatic Inference in
Understanding
- - -
IST-8509860
$73,479 - 12 mos.
Robert M. Gray
Stanford University
The Application of Information Theory to Pattern Recognition and the
Design of Decision Tree Classifiers
- - -
IST-8603943
$134,694 - 18 mos.
Max Henrion
Carnegie Mellon University
A Comparison of Methods for Representing Uncertainty in Expert Systems
- - -
DCR-8608311
$45,000 - 12 mos.
Lawrence J. Henschen
Northwestern University
Logic and Databases
- - -
IST-8645349
$153,289 - 12 mos.
Richard J. Herrnstein
Harvard University
A Comparative Approach to Natural and Artificial Visual Information
Processing
- - -
IST-8520359
$70,735 - 12 mos.
Geoffrey Hinton
Carnegie-Mellon University
Search Methods for Massively Parallel Networks
- - -
IST-8511541
$69,815 - 12 mos.
Richard B. Hull
University of Southern California
Investigation of Practical and Theoretical Aspects of Semantic Database
Models
- - -
IST-8643740
$98,507 - 12 mos.
Ray Jackendoff and Jane Grimshaw
Brandeis University
Syntactic and Semantic Information in a Natural Language Lexicon
- - -
IST-8512108
$99,394 - 12 mos.
Hans Kamp
University of Texas at Austin
Logic Representation of Attitudes for Computer Natural Language
Understanding
- - -
IST-8644864
$35,721 - 12 mos.
Abraham Kandel
Florida State University
Analysis and Modeling of Imprecise Information in Uncertain Environments
- - -
IST-8542811
$65,414 - 12 mos.
R.L. Kashyap
Purdue University
Research on Inference Procedures with Uncertainty
- - -
IST-8644676
$74,752 - 12 mos.
George J. Klir
State University of New York at Binghamton
Possibilistic Information: Theory and Applicability
- - -
IST-8552925
$54,250 - 12 mos.
Richard E. Korf
University of California at Los Angeles
Presidential Young Investigator Award : Machine Learning
- - -
IST-8518307
$15,750 - 12 mos.
Donald H. Kraft
Louisiana State University
Travel to the ACM Conference on Research and Development in
Information Retrieval: Pisa, Italy; September 8-10, 1986
- - -
DCR-8602665
$45,720 - 12 mos.
Benjamin J. Kuipers
University of Texas at Austin
Knowledge Representations for Expert Causal Models
- - -
RII-8600412
$10,000 - 12 mos.
Jill H. Larkin
University of California at Berkeley
Developing the Instructional Power of Modern Personal Computing
- - -
IST-8600412
$10,000 - 12 mos.
Wendy G. Lehnert
University of Massachusetts at Amherst
Presidential Young Investigator Award: Natural Language Computing
Systems
- - -
IST-8603697
$5,000 - 12 mos.
Michael E. Lesk
Bell Communications Research
Workshop on Document Generation Principles
- - -
IST-8602765
$76,078 - 12 mos.
R. Duncan Luce
Harvard University
Measurement: Axiomatic and Meaningfulness Studies
- - -
IST-8444028
$62,500 - 12 mos.
David Maier
Oregon Graduate Center
Presidential Young Investigator Award: Foundations of Knowledge
Management Systems
- - -
IST-8604977
$46,956 - 12 mos.
David Maier
Oregon Graduate Center
Automatic Generation of Interactive Displays
- - -
IST-8642813
$25,500 - 12 mos.
Gerald S. Malecki
Office of Naval Research
Committee on Human Factors
- - -
IST-8606187
$19,650 - 12 mos.
James L. McClelland
Carnegie-Mellon University
Workshop on Parallel Distributed Processing in Information and
Cognitive Research (Washington, D.C. ; February 28 - March 1, 1986)
- - -
IST-8451438
$37,500 - 12 mos.
Kathleen R. McKeown
Columbia University
Presidential Young Investigator Award: Natural Language Interfaces
- - -
IST-8520217
$115,220 - 12 mos.
Douglas P. Metzler
University of Pittsburgh
An Expert System Approach to Syntactic Parsing and Information Retrieval
- - -
IST-8512736
$137,503 - 24 mos.
David Mumford
Harvard University
The Parsing of Images
- - -
IST-8604282
$2,447 - 12 mos.
Kent Norman
University of Maryland at College Park
Developing an Effective User Evaluation Questionnaire for Interactive
Systems
- - -
IST-8645347
$80,492 - 12 mos.
Donald E. Nute
University of Georgia
Discourse Representation for Natural Language Processing
- - -
IST-8645348
$49,211 - 12 mos.
Donald E. Nute
University of Georgia
Hypothetical Reasoning and Logic Programming
- - -
IST-8642477
$86,969 - 12 mos.
Robert N. Oddy
Syracuse University
Representations for Anomalous States of Knowledge in Information
Retrieval
- - -
IST-8609201
$44,905 - 12 mos.
Daniel Osherson
Syracuse University
A Computational Approach to Decision-Making
- - -
IST-8544976
$197,055 - 12 mos.
Charles Parsons and Isaac Levi
Columbia University
The Structure of Information in Science: Fact Formulas and Discussion
Structures in Related Subsciences
- - -
IST-8642841
$12,000 - 12 mos.
William J. Rapaport
State University of New York - System Office
Logical Foundations for Belief Representation
- - -
IST-8644984
$62,500 - 12 mos.
James A. Reggia
University of Maryland at College Park
Presidential Young Investigator Award: Abductive Inference Models in
Artificial Intelligence
- - -
IST-8644983
$123,221 - 12 mos.
Whitman A. Richards
Massachusetts Institute of Technology
Natural Computation: A Computational Approach to Visual Information
Processing
- - -
IST-8604530
$49,953 - 12 mos.
Fred S. Roberts
Rutgers University
Scales of Measurement and the Limitations they Place on Information
Processing
- - -
IST-8603407
$27,000 - 12 mos.
Robert D. Rodman
North Carolina State University
Dialog Processing for Voice Interactive Problem Solving
- - -
IST-8640925
$198,800 - 12 mos.
Naomi Sager
New York University
Language As a Database Structure
- - -
IST-8640053
$59,178 - 12 mos.
Sharon C. Salveter
Boston University
Transportable Natural Language Database Update
- - -
IST-8610293
$80,630 - 12 mos.
Glenn R. Shafer
University of Kansas Main Campus
Belief Functions in Artificial Intelligence
- - -
IST-8603214
$85,583 - 12 mos.
William Shaw
University of North Carolina
An Evaluation and Comparison of Term and Citation Indexing
- - -
DMS-8606178
$20,000 - 12 mos.
Paul C. Shields
University of Toledo
Mathematical Sciences: Entropy in Ergodic Theory, Graph Theory and
Statistics
- - -
IST-8607849
$101,839 - 12 mos.
Edward Smith
BBN Laboratories, Inc.
A Computational Approach to Decision-Making
- - -
IST-8609599
$80,963 - 12 mos.
Paul Smolensky
University of Colorado at Boulder
Inference in Massively Parallel Artificial Intelligence Systems
- - -
IST-8644907
$15,634 - 12 mos.
Frederik Springsteel
University of Missouri
Formalization of Entity-Relationship Diagrams
- - -
IST-8640120
$66,004 - 12 mos.
Robert E. Stepp
University of Illinois at Urbana
Discovering Underlying Concepts in Data Through Conceptual Clustering
- - -
IST-8516313
$60,907 - 12 mos.
Richmond H. Thomason
Mellon-Pitt-Carnegie Corp.
Nonmonotonic Reasoning
- - -
IST-8516330
$63,854 - 12 mos.
David S. Touretzky
Carnegie-Mellon University
Distributed Representations for Symbolic Data Structures
- - -
IST-8517289
$164,786 - 12 mos.
Joseph F. Traub
Columbia University
The Information Level: Effective Computing with Partial, Contaminated,
and Costly Information
- - -
IST-8544806
$121,222 - 12 mos.
Jeffrey D. Ullman
Stanford University
Implementation of Logical Query Languages for Databases
- - -
IST-8511348
$30,383 - 12 mos.
Kenneth Wexler
University of California at Irvine
Learnability and Parsability
- - -
IST-8514890
$80,000 - 12 mos.
R. Wilensky and R. Alterman
University of California at Berkeley
Adaptive Planning
- - -
IST-8600788
$81,660 - 12 mos.
Robert T. Winkler
Duke University
Combining Dependent Information: Models and Issues
- - -
IST-8644767
$38,474 - 12 mos.
Ronald R. Yager
Iona College
Specificity Measures of Information in Possibility Distributions
- - -
IST-8644435
$50,878 - 12 mos.
Po-Lung Yu
University of Kansas
Habitual Domain Analysis for Effective Information Interface and
Decision Support
- - -
IST-8642900
$108,182 - 12 mos.
Lotfi A. Zadeh
University of California at Berkeley
Management of Uncertainty in Expert Systems
- - -
IST-8605163
$19,776 - 12 mos.
Maria Zemankova
University of Tennessee at Knoxville
Travel to the International Conference on Information Processing and
Management of Uncertainty in Knowledge-Based Systems
- - -
IST-8600616
$97,727 - 12 mos.
Pranas Zunde
Georgia Institute of Technology
A Study of Word Association Aids in Information Retrieval
------------------------------
End of AIList Digest
********************
∂08-Nov-86 0130 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #253
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 8 Nov 86 01:30:17 PST
Date: Wed 5 Nov 1986 22:11-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #253
To: AIList@SRI-STRIPE
AIList Digest Thursday, 6 Nov 1986 Volume 4 : Issue 253
Today's Topics:
Funding - NSF Robotics and Machine Intelligence Awards
----------------------------------------------------------------------
Date: Fri 31 Oct 86 13:03:11-CST
From: ICS.DEKEN@R20.UTEXAS.EDU
Subject: Active Awards in the Robotics and Machine Intelligence Program (NSF)
Fiscal Year 1986 Research Projects
Funded by the Intelligent Systems Program
(now Robotics and Machine Intelligence Program)
A complete listing of these awards, including
short descriptive abstracts of the research
is available by writing to:
Y.T. Chien, Director
Robotics and Machine Intelligence Program
National Science Foundation
1800 G Street NW
Washington, DC 20550
----- Computer Vision and Image Processing -----
SRI International; Alex P. Pentland; {Perceptual Organization and the
Representation of Natural Scenes}; (DCR-8519283); $78060; 12 months.
Stanford University; Paul Switzer; {Statistical Theory and Methods for
Processing Spatial Imagery (Mathematical Sciences and Computer
Research)}; (DMS-8411300 A02); $15000; 12 months; (Joint support with
the Statistics and Probability Program - Total Grant $48200).
University of California - Berkeley; Alberto Grunbaum; {Reconstruction
with Limited and Noisy Data (Mathematical Sciences and Computer
Research)}; (DMS-8403232 A02); $8000; 12 months; (Joint support with
the Applied Mathematics Program - Total Grant $66850).
University of Miami; Tzay Y. Young; {Three-Dimensional Motion Analysis
Using Shape Change Information (Computer Research)}; (DCR-8509737 A01);
$39871; 12 months.
University of Illinois - Urbana; Thomas S. Huang;
{Acquisition, Representation and Manipulation of Time-Varying
Spatial Information (Computer Research)}; (DCR-8415325 A02); $14768;
6 months.
University of Maryland - College Park; Azriel Rosenfeld; {Perceptual
Organization in Computer Vision: Pyramid-based Approaches};
(DCR-8603723); $120376; 12 months.
University of Maryland - College Park; Azriel Rosenfeld; {Workshop on
Graph Grammars and their Application to Computer Science, Leesburg,
Virginia, October 1986}; (DCR-8504840); $26235; 18 months.
Massachusetts Institute of Technology; Whitman Richards; {Natural
Computation: A Computational Approach to Visual Information Processing
(Information Science and Computer Research)}; (IST-8312240 A02);
$61500; 12 months; (Joint support with the Information Science Program
- Total Grant $184721).
Michigan State University; George C. Stockman and Anil K. Jain;
{Feature Extraction and Evaluation in Recognition of 3D Objects};
(DCR-8600371); $55522; 12 months.
University of Michigan - Ann Arbor; Ramesh Jain; {Ego-Motion Complex
Logarithmic Mapping}; (DCR-8517251); $48039; 12 months.
University of Minnesota; William B. Thompson; {Determining Spatial
Organization from Visual Motion (Computer Research)};(DCR-8500899
A01); $40750; 12 months.
University of Rochester; Dana H. Ballard; {Parameter Networks and
Spatial Cognition}; (DCR-8602958); $80464; 12 months.
Carnegie-Mellon University; Steven A. Shafer and Takeo Kanade;
{Optical Modeling in Image Understanding: Color, Gloss and Shadows
(Computer Research)}; (DCR-8419990 A01); $40475; 12 months.
University of Wisconsin - Madison; Charles R. Dyer; {Parallel Vision
Algorithms for Shared-Memory and Pipeline Multiprocessors};
(DCR-8520870); $134857; 24 months.
----- Natural Language and Signal Understanding -----
SRI International; Douglas Appelt; {Natural Language Utterance
Planning (Computer Research and Information Science)}; (DCR-8641243);
$95210; 12 months.
Indiana University - Bloomington; Robert F. Port, Stan C. Kwasny, and
Daniel P. Maki; {Data-Driven Speech Recognition Using Prosody};
(DCR-8518725); $101055; 12 months.
Massachusetts Institute of Technology; Robert C. Berwick;
{PYI:(Computer Research)}; (DCR-8552543); $25000; 12 months.
New York University; Ralph Grishman (in collaboration with Lynette
Hirshman, Burroughs Corporation); {Industry/University Cooperative
Research: Acquisition and Use of Semantic Information for Natural
Language Processing (Computer Research)}; (DCR-8501843 A01); $78400;
12 months; (Joint support with the Industry/University Cooperative
Research Program - Total Grant $93400).
Duke University; Alan W. Biermann; {Dialog Processing for Voice
Interactive Problem Solving}; (DCR-8603231); $27676; 12 months; (Joint
support with the Special Projects Program, the Information Science
Program, and the Information Technology Program - Total Grant $69676).
North Carolina State University - Raleigh; Robert Rodman;
{Dialogue Processing for Voice Interactive Problem Solving};
(DCR-8603407); $4000; 24 months; (Joint support with the Special
Projects Program, the Information Science Program, and the Information
Technology Program - Total Grant $65276).
Carnegie-Mellon University; Ronald A. Cole and Richard M. Stern;
{Phonetic Classification [...] Speech}; (DCR-8512695); $127523; 24
months.
University of Pennsylvania; Aravind K. Joshi; {Research In Natural
Language Processing (Computer Research)}; (DCR-8545755 A01); $160000;
12 months.
Burroughs Corporation; Lynette Hirshman (in collaboration with
Ralph Grishman New York University; {Industry/University Cooperative
Research: Acquisition and Use of Semantic Information for Natural
Language Processing(Computer Research)}; (DCR-8502205 A01); $58364; 12
months; (Joint support with the Industry/University Cooperative
Research Program - Total Grant$73364).
----- Concept Learning and Inference -----
Yale University; Dana C. Angluin; {Algorithms for Inductive Inference
(Computer Research)}; (DCR-8404226 A01); $88807; 12 months.
Northwestern University; Lawrence J. Henschen; {Logic and Databases};
(DCR-8608311); $44845; 12 months; (Joint support with the Information
Science Program - Total Grant $89845).
University of Chicago; James Royer; {Theory of Machine Learning};
(DCR-8602991); $17000; 24 months; (Joint support with the Theoretical
Computer Science Program - Total Grant $51045).
University of Illinois - Urbana; R. S. Michalski; {Studies in Computer
Inductive Learning and Plausible Inference}; (DCR-8645223 A02);
$135000; 12 months.
University of Southwestern Louisiana; Rasiah
Loganantharaj; {Theoretical and Implementational Aspects of Parallel
Theorem Proving}; (DCR-8603039); $34994; 12 months.
University of Maryland - College Park; Jack Minker; {Workshop
on Foundations of Deductive Databases and Logic
Programming, College Park, Maryland, August 1986}; (DCR-8602676); $25390; 12
months.
Rutgers University - Busch Campus; Tom M. Mitchell; {PYI: (Computer
Research)}; (DCR-8351523 A03); $25000; 12 months.
State University of New York - Albany; Neil V. Murray; {Automated
Reasoning with Path Resolution and Semantic Graphs}; (DCR-8600848);
$34981; 12 months.
Carnegie-Mellon University; Peter B. Andrews; {Automated
Theorem Proving in Type Theory (Computer Research)}; (DCR-8402532
A02); $87152; 12 months.
Carnegie-Mellon University; Elaine Kant and Allen Newell; {Algorithm
Design and Discovery (Computer Research)}; (DCR-8412139 A01); $57328;
12 months.
University of Texas - Austin; Michael P. Starbird and Woodrow
W. Bledsoe; {Automatic Theorem Proving and Applications
(Computer Research)}; (DCR-8313499 A02); $150930; 12 months.
University of Wyoming; Michael J. Magee; {A Theorem Proving Based
System for Recognizing Three-Dimensional Objects}; (DCR-8602555);
$37530; 12 months.
----- Knowledge Representation and Problem Solving -----
Stanford University; John McCarthy; {Artificial Intelligence
(Computer Research)}; (DCR-8414393 A01); $134328; 12 months.
Stanford University; Edward A. Feigenbaum and Charles
Yanofsky; {MOLGEN - Applications of Artificial Intelligence to Molecular
Biology Research in Theory Formation, Testing and Modification (Computer
Research)}; (DCR-8310236 A02); $135000; 12 months.
University of California - Berkeley; Lotfi A. Zadeh; {Fuzzy Logic as a
Basis for Commonsense Reasoning and Inference in Expert Systems
(Computer Research)}; (DCR-8513139 A01); $100685; 12 months.
University of California - Los Angeles; Judea Pearl; {Studies in
Heuristics (Computer Research)}; (DCR-8501234 A01); $7895571; 12
months.
University of California - Los Angeles; Judea Pearl and Moshe
Ben-Bassat; {Toward a Computational Model of Evidential Reasoning
(Computer Research)}; (DCR-8313875 A02); $95291; 12 months.
University of Southern California; Peter Waksman; {Grid Analysis - A
Theory of Form Perception (Mathematical Sciences and Computer
Research)}; (DMS-8602025); $5000; 12 months; (Joint support with the
Applied Mathematics Program - Total Grant $15600).
Yale University; Paul Hudak; {DAPS: Systems Support For AI
(Computer Research)}; (DCR-8403304 A01); $62363; 12 months.
University of Maryland - College Park; Dana S. Nau; {PYI:
(Computer Research)}; (DCR-8351463 A02); $62500; 12 months.
University of Maryland - College Park; Laveen N. Kanal; {Parallel
Problem Solving and Applications in Artificial Intelligence};
(DCR-8504011 A01); $74922; 12 months.
University of Maryland - College Park; Hanan Samet; {Hierarchical
Data Structures}; (DCR-8605557); $45473; 12 months.
University of Massachusetts - Amherst; Victor R.
Lesser, Krithivasan Ramamritham, and Edward M. Riseman; {A Research
Facility for Cooperative Distributed Computing (Computer Research)};
(DCR-8644692); $590977; 12 months; (Joint support with the Coordinated
Experimental Research Program - Total Grant $984962).
University of Michigan - Ann Arbor; Arthur W. Burks; {Languages and
Architectures for Parallel Computing with Classifier Systems
(Computer Research)}; (DCR-8305830 A03); $6381.
University of Minnesota; James R. Slagle; {Expert Systems
Questioning Procedures Based on Merit (Computer Research)};
(DCR-8512857 A01); $82235; 12 months.
University of New Hampshire; Eugene C. Freuder and Michael J. Quinn;
{Coping with Complexity in Constraint Satisfaction Problems};
(DCR-8601209); $32708; 12 months; (Joint support with the Theoretical
Computer Science Program - Total Grant $42708).
Rutgers University - Busch Campus; Saul Amarel and Charles
Schmidt; {Exploration of Problem Reformulation and Strategy
Acquisition}; (DCR-8318075 A03); $73547; 12 months; (Joint support with
the Information Technology Program - Total Grant $102706).
Rutgers University; Tomasz Imielinski; {Processing Incomplete
Knowledge - A Database Approach (Computer Research)}; (DCR-8504140 A01);
$57411; 12 months.
New Mexico State University; Derek P. Partridge; {Workshop on the
Foundations of Artificial Intelligence, Las Cruces, New Mexico, February
1986}; (DCR-8514964); $15000; 12 months.
Cornell University; Robert L. Constable; {Experiments with a
Program Refinement System (Computer Research)}; (DCR-8303327 A03);
$60000; 12 months; (Joint support with the Software Engineering Program
and the Software Systems Science Program - Total Grant $180247).
Iona College; Ronald R. Yager; {Methods of Evidential Reasoning
(Computer Research)}; (DCR-8513044); $38600; 12 months.
University of Rochester; James F. Allen; {PYI: (Computer
Research)}; (DCR-8351665 A02); $25000; 12 months.
University of Rochester; James F. Allen; {Temporal World Models for
Problem Solving (Computer Research)}; (DCR-8502481 A01); $38375; 12
months.
University of Texas - Austin; Benjamin J. Kuipers; {Deep and Shallow
Models in the Knowledge Base}; (DCR-8602665); $45720; 12 months; (Joint
support with the Information Science Program - Total Grant $91440).
----- Automation and Robotics -----
Arizona State University; Kathleen M. Mutch; {Robotic Navigation
Using Dynamic Imagery}; (DCR-8601798); $69955; 12 months.
University of Illinois - Urbana; Thomas S. Huang;
{Acquisition, Representation, and Manipulation of Time-Varying Spatial
Information (Computer Research)}; (DCR-8640776 A01); $70752; 12 months.
University of Massachusetts - Amherst; Edward M. Riseman and Arthur
S. Gaylord; {A Group Research Facility for Artificial
Intelligence, Distributed Computing, and Software Systems (Computer
Research)}; (DCR-8318776 A03); $15000; 12 months; (Joint support with
the Special Projects Program and the Software Engineering Program -
Total Grant $80000).
Cornell University - Endowed; John E. Hopcroft; {An International
Workshop on Geometric Reasoning, to be held June 30 - July 2, 1986, at
Keble College, Oxford University, U.K.}; (DCR-8605077); $22423; 12 months.
Cornell University; John Hopcroft and Alan Demers; {A Program of
Research in Robotics}; (DMC-8640765 A02); $91072; 12 months.
Cornell University; John E. Hopcroft and Kuo K. Wang; {A Program of
Research in Representing Physical Objects}; (DCR-8644262 A01); $104238;
12 months.
New York University; Ernest Davis; {Physical and Spatial Reasoning
with Solid Objects}; (DCR-8603758 & A01); $82600; 24 months.
New York University; David Lowe; {Model Based Recognition
of Three-Dimensional Objects (Computer Research)}; (DCR-8502009
A01);$49700; 12 months.
New York University; Colm O'Dunlaing and Chee-Keng Yap; {Motion
Planning Problems in Robotics: Algorithmic Issues (Computer Research)};
(DCR-8401898 A02); $97300; 12 months.
Carnegie Mellon University; Takeo Kanade and Charles Thorpe;
{Understanding 3-D Dynamic Natural Scenes with Range Data};
(DCR-8604199); $75817; 12 months.
University of Pennsylvania; Ruzena K. Bajcsy; {Tactile Information
Processing (Computer Research)}; (DCR-8545795); $45359; 12 months.
University of Texas - Austin; J. K. Aggarwal; {Space Perception from
Multiple Sensing}; (DCR-8517583); $125000; 24 months.
University of Utah; Bir Bhanu and Thomas C. Henderson; {Computer
Aided Geometric Design Based Computer Vision (Computer Research)};
(DCR-8644518); $74997; 12 months.
------------------------------
End of AIList Digest
********************
∂08-Nov-86 0306 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #254
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 8 Nov 86 03:06:29 PST
Date: Thu 6 Nov 1986 21:38-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #254
To: AIList@SRI-STRIPE
AIList Digest Friday, 7 Nov 1986 Volume 4 : Issue 254
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 28 Oct 86 19:54:22 GMT
From: fluke!ssc-vax!bcsaic!michaelm@beaver.cs.washington.edu
(michael maxwell)
Subject: Re: Searle, Turing, Symbols, Categories
In article <10@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>michaelm@bcsaic.UUCP (me) wrote:
>
>> As an interesting thought experiment, suppose a Turing test were done
>> with a robot made to look like a human, and a human being who didn't
>> speak English-- both over a CCTV, say, so you couldn't touch them to
>> see which one was soft, etc. What would the robot have to do in order
>> to pass itself off as human?
>
>...We certainly have no problem in principle with
>foreign speakers (the remarkable linguist, polyglot and bible-translator
>Kenneth Pike has a "magic show" in which, after less than an hour of "turing"
>interactions with a speaker of any of the [shrinking] number of languages he
>doesn't yet know, they are babbling mutually intelligibly before your very
>eyes), although most of us may have some problems in practice with such a
>feat, at least, without practice.
Yes, you can do (I have done) such "magic shows" in which you begin to learn a
language using just gestures + what you pick up of the language as you go
along. It helps to have some training in linguistics, particularly field
methods. The Summer Institute of Linguistics (of which Pike is President
Emeritus) gives such classes. After one semester you too can give a magic
show!
I guess what I had in mind for the revised Turing test was not using language
at all--maybe I should have eliminated the sound link (and writing). What
in the way people behave (facial expressions, body language etc.) would cue
us to the idea the one is a human and the other a robot? What if you showed
pictures to the examinees--perhaps beautiful scenes, and revolting ones? This
is more a test for emotions than for mind (Mr. Spock would probably fail).
But I think that a lot of what we think of as human is tied up in this
nonverbal/emotional level.
BTW, I doubt whether the number of languages Pike knows is shrinking because
of these monolingual demonstrations (aka "magic shows") he's doing. After the
tenth language, you tend to forget what the second or third language was--
much less what you learned!
--
Mike Maxwell
Boeing Advanced Technology Center
...uw-beaver!uw-june!bcsaic!michaelm
------------------------------
Date: 30 Oct 86 00:11:29 GMT
From: mnetor!utzoo!utcsri!utegc!utai!me@seismo.css.gov
Subject: Re: Searle, Turing, Symbols, Categories
In article <1@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>In reply to a prior iteration D. Simon writes:
>
>> I fail to see what [your "Total Turing Test"] has to do with
>> the Turing test as originally conceived, which involved measuring
>> up AI systems against observers' impressions, rather than against
>> objective standards... Moreover, you haven't said anything concrete
>> about what this test might look like.
>
>How about this for a first approximation: We already know, roughly
>speaking, what human beings are able to "do" -- their total cognitive
>performance capacity: They can recognize, manipulate, sort, identify and
>describe the objects in their environment and they can respond and reply
>appropriately to descriptions. Get a robot to do that. When you think
>he can do everything you know people can do formally, see whether
>people can tell him apart from people informally.
>
"respond and reply appropriately to descriptions". Very nice. Should be a
piece of cake to formalize--especially once you've formalized recognition,
manipulation, identification, and description (and, let's face it, any dumb
old computer can sort). This is precisely what I was wondering when I asked
you what this total Turing test looks like. Apparently, you haven't the
foggiest idea, except that it would test roughly the same things that the
old-fashioned, informal, does-it-look-smart-or-doesn't-it Turing test checks.
In fact, none of the criteria you have described above seems definable in any
sense other than by reference to standard Turing test results ("gee, it sure
classified THAT element the way I would've!"). And if you WERE to define the
entire spectrum of human behaviour in an objective fashion ("rule 1:
answering, 'splunge!' to any question is hereby defined as an 'appropriate
reply'"), how would you determine whether the objective definition is useful?
Why, build a robot embodying it, and see if people consider it intelligent, of
course! The illusion of a "total" Turing test, distinct from the
old-fashioned, subjective variety, thus vanishes in a puff of empiricism.
And forget the well-that's-the-way-Science-does-it argument. It won't wash
--see below.
>> I believe that people in general dodge the "other minds" problem
>> simply by accepting as a convention that human beings are by
>> definition intelligent.
>
>That's an artful dodge indeed. And do you think animals also accept such
>conventions about one another? Philosophers, at least, seem to
>have noticed that there's a bit of a problem there. Looking human
>certainly gives us the prima facie benefit of the doubt in many cases,
>but so far nature has spared us having to contend with any really
>artful imposters. Wait till the robots begin giving our lax informal
>turing-testing a run for its money.
>
I haven't a clue whether animals think, or whether you think, for that matter.
This is precisely my point. I don't believe we humans have EVER solved the
"other minds" problem, or have EVER used the Turing test, even to try to
resolve the question of whether there exist "other minds". The fact that you
would like us to have done so, thus giving you a justification for the use of
the (informal part of) the Turing test (and the subsequent implicit basing of
the formal part on the informal part--see above), doesn't make it so.
This is where your scientific-empirical model for developing the "total"
Turing test out of the original falls down. Let's examine the development of
a typical scientific concept: You have some rough, intuitive observations of
phenomena (gravity, stars, skin). You take some objects whose properties
you believe you understand (rocks, telescopes, microscopes), let them interact
with your vaguely observed phenomenon, and draw more rigorous conclusions based
on the recorded results of these experimental interactions.
Now, let's examine the Turing test in that light: we take possibly-intelligent
robot R, whose properties are fairly well understood, and sit it in front of
person P, whose properties are something of a cipher to us. We then have them
interact, and get a reading off person P (such as, "yup, shore is smart", or,
"nope, dumb as a tree"). Now, what properties are being scientifically
investigated here? They can't have anything to do with robot R--we assume that
R's designer, Dr. Rstein, already has a fairly good idea what R is about.
Rather, it appears as though you are discerning those attributes of people
which relate to their judgment of intelligence in other objects. Of course, it
might well turn out that something productive comes out of this, but it's also
quite possible (and I conjecture that it's actually quite likely) that what you
get out of this is some scientific law such as, "anything which is physically
indistinguishable from a human being and can mutter something that sounds like
person P's language is intelligent; anything else is generally dumb, but
possibly intelligent, depending on the decoration of the room and the drug
content of P's bloodstream at the time of the test". In short, my worries
about the context-dependence and subjective quality of the results have not
disappeared in a puff of empiricism; they loom as large as ever.
>
>> WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?... Please
>> forgive my impertinent questions, but I haven't read your
>> articles, and I'm not exactly clear about what this "total"
>> Turing test entails.
>
>Try reading the articles.
>
Well, not only did I consider this pretty snide, but when I sent you mail
privately, asking politely where I can find the articles in question, I didn't
even get an answer, snide or otherwise. So starting with this posting, I
refuse to apologize for being impertinent. Nyah, nyah, nyah.
>
>
>Stevan Harnad
>princeton!mind!harnad
Daniel R. Simon
"sorry, no more quotations"
-D. Simon
------------------------------
Date: Thu, 30 Oct 86 16:09:20 EST
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: extended Turing test
In article <8610271728.AA12616@ucbvax.Berkeley.EDU>, harnad@mind.UUCP writes:
> > [I]t's misleading to propose that a veridical model of ←our← behavior
> > ought to have our "performance capacities"...I do not (yet) quarrel
> > with the principle that the model ought to have our abilities. But to
> > speak of "performance capacities" is to subtly distort the fundamental
> > problem. We are not performers!
>
> "Behavioral ability"/"performance capacity" -- such fuss over
> black-box synonyms, instead of facing the substantive problem of
> modeling the functional substrate that will generate them.
You seem to be looking at the problem as a scientist. Let me give an
example of what I mean:
Suppose you have a robot slave. (That's the practical goal of A.I.,
isn't it?) It cooks for you, makes the beds, changes the oil in your
car, puts the dog out, performs sexual favors, ... you name it. BUT--
it will not open the front door for you!
Maddened with frustration, you order an electric-eye door opener,
1950s design. It works flawlessly. Now you have everything you want.
Does the combination of robot + door-opener pass the Total Turing Test?
Is the combination a valid subject for the Test?
------------------------------
Date: Thu, 30 Oct 86 15:56:27 EST
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: how we decide whether it has a mind
In article <8610271726.AA12550@ucbvax.Berkeley.EDU>, harnad@mind.UUCP writes:
> > One (rationally) believes other people are conscious BOTH because
> > of their performance and because their internal stuff is a lot like
> > one's own.
>
> ... I am not denying that
> there exist some objective data that correlate with having a mind
> (consciousness) over and above performance data. In particular,
> there's (1) the way we look and (2) the fact that we have brains. What
> I am denying is that this is relevant to our intuitions about who has a
> mind and why. I claim that our intuitive sense of who has a mind is
> COMPLETELY based on performance, and our reason can do no better. ...
There's a complication here: Our intuitions about things in our environment
change with the environment. The first time you use a telephone, you hear
an electronic reproduction of somebody's voice; you KNOW that you're talking
to a machine, not to the other person. Soon this knowledge evaporates, and
you come to think, "I talked with Smith today on the phone." You may even
have seen his face before you!
It's the same with thinking. When only living things could perceive and
adapt accordingly, people did not think of artifacts as having minds.
This wasn't stubborn of them, just honest intuition. When ELIZA came
along, it became useful for her users to think of her as having a mind.
Just like thinking you talked with Smith ...
I'd like to see less treatment of "X has a mind" as a formal proposition,
and more discussion of how we use our intuition about it. After all,
is having a mind the most important thing about somebody to you? Is
it even important at all?
------------------------------
End of AIList Digest
********************
∂08-Nov-86 0433 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #255
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 8 Nov 86 04:33:45 PST
Date: Thu 6 Nov 1986 21:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #255
To: AIList@SRI-STRIPE
AIList Digest Friday, 7 Nov 1986 Volume 4 : Issue 255
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 29 Oct 86 18:31:18 GMT
From: ubc-vision!ubc-cs!andrews@BEAVER.CS.WASHINGTON.EDU
Subject: Turing Test ad infinitum
This endless discussion about the Turing Test makes the
"eliminative materialist" viewpoint very appealing: by the
time we have achieved something that most people today would
call intelligent, we will have done it through disposing of
concepts such as "intelligence", "consciousness", etc.
Perhaps the reason we're having so much trouble defining
a workable Turing Test is that we're essentially trying to
fit a square peg into a round hole, belabouring some point
which has less relevance than we realize. I wonder what old
Alan himself would say about the whole mess.
--Jamie.
...!seismo!ubc-vision!ubc-cs!andrews
"At the sound of the falling tree... it's 9:30"
------------------------------
Date: 1 Nov 86 20:34:02 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan
Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
In his second net.ai comment on the abstracts of the two articles under
discussion, me@utai.UUCP (Daniel Simon) wrote:
>> WHAT DOES THE "TOTAL TURING TEST" LOOK LIKE?... Please
>> forgive my impertinent questions, but I haven't read your
>> articles, and I'm not exactly clear about what this "total"
>> Turing test entails.
I replied (after longish attempts to explain in two separate iterations):
>"Try reading the articles."
Daniel Simon rejoined:
> Well, not only did I consider this pretty snide, but when I sent you
> mail privately, asking politely where I can find the articles in
> question, I didn't even get an answer, snide or otherwise. So starting
> with this posting, I refuse to apologize for being impertinent.
> Nyah, nyah, nyah.
The same day, the following email came from Daniel Simon:
> Subject: Hoo, boy, did I put my foot in it:
> Ooops....Thank you very much for sending me the articles, and I'm sorry
> I called you snide in my last posting. If you see a bright scarlet glow
> in the distance, looking west from Princeton, it's my face. Serves me
> right for being impertinent in the first place... As soon as I finish
> reading the papers, I'll respond in full--assuming you still care what
> I have to say... Thanks again. Yours shamefacedly, Daniel R. Simon.
This is a very new form of communication for all of us. We're just going to
have to work out a new code of Nettiquette. With time, it'll come. I
continue to care what anyone says with courtesy and restraint, and
intend to respond to everything of which I succeed in making sense.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 1 Nov 86 18:21:12 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan
Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
Jay Freeman (freeman@spar.UUCP) had, I thought, joined the
ongoing discussion about the robotic version of the Total Turing Test
to address the questions that were raised in the papers under
discussion, namely: (1) Do we have any basis for contending with the
"other minds problem" -- whether in other people, animals or machines
-- other than turing-indistinguishable performance capacity? (2) Is
the teletype version of the turing test -- which allows only
linguistic (i.e., symbolic) interactions -- a strong enough test? (3)
Could even the linguistic version alone be successfully passed by
any device whose symbolic functions were not "grounded" in
nonsymbolic (i.e., robotic) function? (4) Are transduction, analog
representations, A/D conversion, and effectors really trivial in this
context, or is there a nontrivial hybrid function, grounding symbolic
representation in nonsymbolic representation, that no one has yet
worked out?
When Freeman made his original suggestion that the symbolic processor
could have access to the robotic transducer's bit-map, I thought he
was making the sophisticated (but familiar) point that once the
transducer representation is digitized, it's symbolic all the way.
(This is a variant of the "transduction-is-trivial" argument.) My
prior reply to Freeman (about simulated models of the world, modularity,
etc.) was addressed to this construal of his point. But now I see that
he was not making this point at all, for he replies:
> ... let's equip the robot with an active RF emitter so
> it can jam the camera's electronics and impose whatever bit map it
> wishes... design a robot in the shape of a back projector, and let it
> create internally whatever representation of a human being it wishes
> the camera to see, and project it on its screen for the camera to
> pick up. Such a robot might do a tolerable job of interacting with
> other parts of the "objective" world, using robot arms and whatnot
> of more conventional design, so long as it kept them out of the
> way of the camera... let's create a vaguely anthropomorphic robot and
> equip its external surfaces with a complete covering of smaller video
> displays, so that it can achieve the minor details of human appearance
> by projection rather than by mechanical motion. Well, maybe our model
> shop is good enough to do most of the details of the robot convincingly,
> so we'll only have to project subtle details of facial expression.
> Maybe just the eyes.
> ... if you are going to admit the presence of electronic or mechanical
> devices between the subject under test and the human to be fooled,
> you must accept the possibility that the test subject will be smart
> enough to detect their presence and exploit their weaknesses...
> consider a robot that looks no more anthropomorphic than your vacuum
> cleaner, but that is possessed of moderate manipulative abilities and
> a good visual perceptive apparatus.
> Before the test commences, the robot sneakily rolls up to the
> camera and removes the cover. It locates the connections for the
> external video output, and splices in a substitute connection to
> an external video source which it generates. Then it replaces the
> camera cover, so that everything looks normal. And at test time,
> the robot provides whatever image it wants the testers to see.
> A dumb robot might have no choice but to look like a human being
> in order to pass the test. Why should a smart one be so constrained?
From this reply I infer that Freeman is largely concerned with the
question of appearance: Can a robot that doesn't really look like a
person SIMULATE looking like a person by essentially symbolic means,
plus add-on modular peripherals? In the papers under discussion (and in some
other iterations of this discussion on the net) I explicitly rejected appearance
as a criterion. (The reasons are given elsewhere.) What is important in
the robotic version is that it should be a human DO-alike, not a human
LOOK-alike. I am claiming that the (Total) object-manipulative (etc.)
performance of humans cannot be generated by a basically symbolic
module that is merely connected with peripheral modules. I am
hypothesizing (a) that symbolic representations must be NONMODULARLY
(i.e., not independently) grounded in nonsymbolic representations, (b)
that the Total Turing Test requires the candidate to display all of
our robotic capacities as well as our linguistic ones, and (c) that
even the linguistic ones could not be accomplished unless grounded in
the robotic ones. In none of this do the particulars of what the robot
(or its grey matter!) LOOK like matter.
Two last observations. First, what the "proximal stimulus" -- i.e.,
the physical energy pattern on the transducer surface -- PRESERVES
whereas the next (A/D) step -- the digital representation -- LOSES, is
everything about the full PHYSICAL configuration of the energy pattern
that cannot be recovered by inversion (D/A). (That's what the ongoing
concurrent discussion about the A/D distinction is in part concerned
with.) Second, I think there is a tendency to overcomplicate the
issues involved in the turing test by adding various arbitrary
elaborations to it. The basic questions are fairly simply stated
(though not so simple to answer). Focusing instead on ornamented
variants often seems to lead to begging the question or changing the
subject.
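[Editor's note: the first observation above -- that the A/D step loses
information the D/A inversion cannot recover -- can be sketched
numerically. The snippet below is a hypothetical illustration, not part
of the digest; the `quantize` helper and the 0.25 step size are
arbitrary choices for the example.]

```python
def quantize(x, step=0.25):
    """The A/D step: map a continuous value onto a discrete grid."""
    return round(x / step) * step

signal = [0.10, 0.12, 0.30, 0.49]          # distinct "proximal" values
digital = [quantize(v) for v in signal]    # digital representation

# Two distinct analog values collapse to the same digital code, so no
# D/A inversion can distinguish them afterward -- the physical
# configuration of the energy pattern is not fully recoverable.
assert quantize(0.10) == quantize(0.12)
print(digital)
```

The point is independent of the step size: for any finite quantization
grid there exist distinct inputs mapped to the same code.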
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 1 Nov 86 20:02:08 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
michaelm@bcsaic.UUCP (michael maxwell) writes:
> I guess what I had in mind for the revised Turing test was not using
> language at all--maybe I should have eliminated the sound link (and
> writing). What in the way people behave (facial expressions, body
> language etc.) would cue us to the idea the one is a human and the other
> a robot? What if you showed pictures to the examinees--perhaps
> beautiful scenes, and revolting ones? This is more a test for emotions
> than for mind (Mr. Spock would probably fail). But I think that a lot of
> what we think of as human is tied up in this nonverbal/emotional level.
The modularity issue looms large again. I don't believe there's an
independent module for affective expression in human beings. It's all
-- to use a trendy though inadequate expression -- "cognitively
penetrable." There's also the issue of the TOTALITY of the Total
Turing Test, which was intended to remedy the underdetermination of
toy models/modules: It's not enough just to get a model to mimic our
facial expressions. That could all be LITERALLY done with mirrors
(and, say, some delayed feedback and some scrambling and
recombining), and I'm sure it could fool people, at least for a while.
I simply conjecture that this could not be done for the TOTALITY of
our performance capacity using only more of the same kinds of tricks
(analog OR symbolic).
The capacity to manipulate objects in the world in all the ways
we can and do do it (which happens to include naming and describing
them, i.e., linguistic acts) is a lot taller order than mimicking exclusively
our nonverbal expressive behavior. There may be (in an unfortunate mixed
metaphor) many more ways to skin (toy) parts of the theoretical cat than
all of it.
Three final points. (1) Your proposal seems to equivocate between the (more
important) formal functional component of the Total Turing Test (i.e., how do
we get a model to exhibit all of our performance capacities, be they
verbal or nonverbal?) and the informal, intuitive component (i.e., will it
be indistinguishable in all relevant respects from a person, TO a
person?). The motto would be: If you use something short of the Total
Turing Test, you may be able to fool some people some of the time, but not
all of the time. (2) There's nothing wrong in principle with a
nonverbal, even a nonhuman turing test; I think (higher) animals pass this
easily all the time, with virtually the same validity as humans, as
far as I'm concerned. But this version can't rely exclusively on
affective expression modules either. (3) Finally, as I've argued earlier,
all attempts to "capture" qualitative experience -- not just emotion,
but any conscious experience, such as what it's LIKE to see red or
to believe X -- amounts to an unprofitable red herring in this
enterprise. The whole point of the Total Turing Test is that
performance-indistinguishability IS your only basis for inferring that
anyone but you has a mind (i.e., has emotions, etc.). In the paper I
dubbed this "methodological epiphenomenalism as a research strategy in
cognitive science."
By the way, you prejudged the question the way you put it. A perfectly
noncommittal but monistic way of putting it would be: "What in the way
ROBOTS behave would cue us to the idea that one robot had a mind and
another did not?" This leaves it appropriately open for continuing
research just exactly which causal physical devices (= "robots"), whether
natural or artificial, do or do not have minds.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂08-Nov-86 0624 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #256
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 8 Nov 86 06:24:17 PST
Date: Thu 6 Nov 1986 21:59-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #256
To: AIList@SRI-STRIPE
AIList Digest Friday, 7 Nov 1986 Volume 4 : Issue 256
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 2 Nov 86 23:22:23 GMT
From: allegra!princeton!mind!harnad@ucbvax.Berkeley.EDU (Stevan
Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
The following is a response on net.ai to a comment on mod.ai.
Because of problems with posting to mod.ai, I am temporarily replying to net.ai.
On mod.ai cugini@NBS-VMS.ARPA ("CUGINI, JOHN") writes:
> You seem to want to pretend that we know absolutely nothing about the
> basis of thought in humans, and to "suppress" all evidence based on
> such knowledge. But that's just wrong. Brains *are* evidence for mind,
> in light of our present knowledge.
What I said was that we knew absolutely nothing about the FUNCTIONAL
basis of thought in humans, i.e., about how brains or relevantly
similar devices WORK. Hence we wouldn't have the vaguest idea if a
given lump of grey matter was in fact the right stuff, or just a
gelatinous look-alike -- except by examining its performance (i.e., turing)
capacity. [The same is true, by the way, mutatis mutandis, for a
better structural look-alike -- with cells, synapses, etc. We have no
functional idea of what differentiates a mind-supporting look-alike
from a comatose one or from a nonviable fetus. Without the
performance criterion the brain cue could lead us astray as often as
not regarding whether there was indeed a mind there. And that's not to
mention that we knew perfectly well (perhaps better, even) how to judge
whether somebody had a mind ere we ope'd a skull or knew what we
had chanced upon there.]
If you want a trivial concession though, I'll make one: If you saw an
inert body totally incapable of behavior, then or in the future, and
you entertained some prior subjective probability that it had a mind, say,
p, then, if you opened its skull and found something anatomically and
physiologically brain-like in there, then the probability p that it
had, or had had, a mind would correspondingly rise. Ditto for an inert
alien species. And I agree that that would be rational. However, I don't
think that any of that has much to do with the problem of modeling the mind, or
with the relative strengths or weaknesses of the Total Turing Test.
> People in, say, 1500 AD were perfectly rational in predicting
> tides based on the position of the moon (and vice-versa)
> even though they hadn't a clue as to the mechanism of interaction.
> If you keep asking "why" long enough, *all* science is grounded on
> such brute-fact correlation (why do like charges repel, etc.) - as
> Hume pointed out a while back.
Yes, but people then and even earlier were just as good at "predicting" the
presence of mind WITHOUT any reference to the brain. And in ambiguous
cases, behavior was and is the only rational arbiter. Consider, for
example, which way you'd go if (1) an alien body persisted in behaving like a
clock-like automaton in every respect -- no affect, no social interaction,
just rote repetition -- but it DID have something that was indistinguishable
(on the minute and superficial information we have) from a biological-like
nervous system -- versus (2) if a life-long close friend of yours had
to undergo his first operation, and when they opened him up, he turned
out to be all transistors on the inside. I don't set much store by
this hypothetical sci-fi stuff, especially because it's not clear
whether the "possibilities" we are contemplating are indeed possible. But
the exercise does remind us that, after all, performance capacity is
our primary criterion, both logically and intuitively, and its
black-box correlates have whatever predictive power they may have
only as a secondary, derivative matter. They depend for their
validation on the behavioral criterion, and in cases of conflict,
behavior continues to be the final arbiter.
I agree that scientific inference is grounded in observed correlations. But
the primary correlation in this special case is, I am arguing, between
mental states and performance. That's what both our inferences and our
intuitions are grounded in. The brain correlate is an additional cue, but only
inasmuch as it agrees with performance. As to CAUSATION -- well, I'm
sceptical that anyone will ever provide a completely satisfying account
of the objective causes of subjective effects. Remember that, except for
the special case of the mind, all other scientific inferences have
only had to account for objective/objective correlations (and [or,
more aptly, via] their subjective/subjective experiential counterparts).
The case under discussion is the first (and I think only) case of
objective/subjective correlation and causation. Hence all prior bets,
generalizations or analogies are off or moot.
> other brains... are, by definition, relevantly brain-like
I'd be interested in knowing what current definition will distinguish
a mind-supporting brain from a non-mind-supporting brain, or even a
pseudobrain. (That IS the issue, after all, in claiming that the brain
is an INDEPENDENT predictor of mindedness.)
> Let me re-cast Harnad's argument (perhaps in a form unacceptable to
> him): We can never know any mind directly, other than our own, if we
> take the concept of mind to be something like "conscious intelligence" -
> ie the intuitive (and correct, I believe) concept, rather than
> some operational definition, which has been deliberately formulated
> to circumvent the epistemological problems. (Harnad, to his credit,
> does not stoop to such positivist ploys.) But the only external
> evidence we are ever likely to get for "conscious intelligence"
> is some kind of performance. Moreover, the physical basis for
> such performance will be known only contingently, ie we do not
> know, a priori, that it is brains, rather than automatic dishwashers,
> which generate mind, but rather only as an a posteriori correlation.
> Therefore, in the search for mind, we should rely on the primary
> criterion (performance), rather than on such derivative criteria
> as brains. I pretty much agree with the above account except for the
> last sentence which prohibits us from making use of derivative
> criteria. Why should we limit ourselves so? Since when is that part
> of rationality?
I accept the form in which you've recast my argument. The reasons that
brainedness is not a good criterion are the following (I suppose I
should stop saying it is not a "rational" criterion having made the
minor concession I did above): Let's call being able to pass the Total
Turing Test the "T" correlate of having a mind, and let's call having a brain
the "B" correlate. (1) The validity of B depends completely on T. We
have intuitions about the way we and others behave, and what it feels
like; we have none about having brains. (2) In case of conflict
between T and B, our intuitions (rationally, I suggest) go with T rather
than B. (3) The subjective/objective issue (i.e., the mind/body
problem) mentioned above puts these "correlations" in a rather
different category from other empirical correlations, which are
uniformly objective/objective. (4) Looked at sufficiently minutely and
functionally, we don't know what the functionally relevant as opposed to the
superficial properties of a brain are, insofar as mind-supportingness
is concerned; in other words, we don't even know what's a B and what's
just naively indistinguishable from a B (this is like a caricature of
the turing test). Only T will allow us to pick them out.
I think those are good enough reasons for saying that B is not a good
independent criterion. That having been said, let me concede that for a
radical sceptic, neither is T, for pretty much the same
reasons! This is why I am a methodological epiphenomenalist.
> No, the fact is we do have more reason to suppose mind of other
> humans than of robots, in virtue of an admittedly derivative (but
> massively confirmed) criterion. And we are, in this regard, in an
> epistemological position *superior* to those who don't/didn't know
> about such things as the role of the brain, ie we have *more* reason
> to believe in the mindedness of others than they do. That's why
> primitive tribes (I guess) make the *mistake* of attributing
> mind to trees, weather, etc. Since raw performance is all they
> have to go on, seemingly meaningful activity on the part of any
> old thing can be taken as evidence of consciousness. But we
> sophisticates have indeed learned a thing or two, in particular, that
> brains support consciousness, and therefore we (rationally) give the
> benefit of the doubt to any brained entity, and the anti-benefit to
> un-brained entities. Again, not to say that we might not learn about
> other bases for mind - but that hardly disparages brainedness as a
> rational criterion for mindedness.
A trivially superior position, as I've suggested. Besides, the
primitive's mistake (like the toy AI-modelers') is in settling for
anything less than the Total Turing Test; the mistake is decidedly NOT
the failure to hold out for the possession of a brain. I agree that it's
rational to take brainedness as an additional corroborative cue, if you
ever need one, but since it's completely useless when it fails to corroborate
or conflicts with the Total Turing criterion, of what independent use is it?
Perhaps I should repeat that I take the context for this discussion to
be science rather than science fiction, exobiology or futurology. The problem
we are presumably concerned with is that of providing an explanatory
model of the mind along the lines of, say, physics's explanatory model
of the universe. Where we will need "cues" and "correlates" is in
determining whether the devices we build have succeeded in capturing
the relevant functional properties of minds. Here the (ill-understood)
properties of brains will, I suggest, be useless "correlates." (In
fact, I conjecture that theoretical neuroscience will be led by, rather
than itself leading, theoretical "mind-science" [= cognitive
science?].) In sci-fi contexts, where we are guessing about aliens'
minds or those of comatose creatures, having a blob of grey matter in
the right place may indeed be predictive, but in the cog-sci lab it is
not.
> there's really not much difference between relying on one contingent
> correlate (performance) rather than another (brains) as evidence for
> the presence of mind.
To a radical sceptic, as I've agreed above. But there is to a working
cognitive scientist (whose best methodological stance, I suggest, is
epiphenomenalism).
> I know consciousness (my own, at least) exists, not as
> some derived theoretical construct which explains low-level data
> (like magnetism explains pointer readings), but as the absolutely
> lowest rock-bottom datum there is. Consciousness is the data,
> not the theory - it is the explicandum, not the explicans (hope
> I got that right). It's true that I can't directly observe the
> consciousness of others, but so what? That's an epistemological
> inconvenience, but it doesn't make consciousness a red herring.
I agree with most of this, and it's why I'm not, for example, an
"eliminative materialist." But agreeing that consciousness is data
rather than theory does not entail that it's the USUAL kind of data of
empirical science. I KNOW I have a mind. Every other instance is
radically different from this unique one: I can only guess, infer. Do
you know of any similar case in normal scientific inference? This is
not just an "epistemological inconvenience," it's a whole 'nother ball
game. If we stick to the standard rules of objective science (which I
recommend), then turing-indistinguishable performance modeling is indeed
the best we can aspire to. And that does make consciousness a red
herring.
> ...being-composed-of-protein might not be as practically incidental
> as many assume. Frinstance, at some level of difficulty, one can
> get energy from sunlight "as plants do." But the issues are:
> do we get energy from sunlight in the same way? How similar do
> we demand that the processes are?...if we're interested in simulation at
> a lower level of abstraction, eg, photosynthesis, then, maybe, a
> non-biological approach will be impractical. The point is we know we
> can simulate human chess-playing abilities with non-biological
> technology. Should we just therefore declare the battle for mind won,
> and go home? Or ask the harder question: what would it take to get a
> machine to play a game of chess like a person does, ie, consciously.
This sort of objection to a toy problem like chess (an objection I take to
be valid) cannot be successfully redirected at the Total Turing Test, and
that was one of the central points of the paper under discussion. Nor
are the biological minutiae of modeling plant photosynthesis analogous to the
biological minutiae of modeling the mind: The OBJECTIVE data in the
mind case are what you can observe the organism to DO. Photosynthesis
is something a plant does. In both cases one might reasonably demand
that a veridical model should mimic the data as closely as possible.
Hence the TOTAL Turing Test.
But now what happens when you start bringing in physiological data, in the
mind case, to be included with the performance data? There's no
duality in the case of photosynthesis, nor is there any dichotomy of
levels. Aspiring to model TOTAL photosynthesis is aspiring to get
every chemical and temporal detail right. But what about the mind
case? On the one hand, we both agree with the radical sceptic that
NEITHER mimicking the behavior NOR mimicking the brain can furnish
"direct" evidence that you've captured mind. So whereas getting every
(observable) photosynthetic detail right "guarantees" that you've
captured photosynthesis, there's no such guarantee with consciousness.
So there's half of the disanalogy. Now consider again the hypothetical
possibilities we were considering earlier: What if brain data and
behavioral data compete? Which way should a nonsceptic vote? I'd go
with behavior. Besides, it's an empirical question, as I said in the
papers under discussion, whether or not brain constraints turn out to
be relevant on the way to Total Turing Utopia. Way down the road,
after all, the difference between mind-performance and
brain-performance may well become blurred. Or it may not. I think the
Total Turing Test is the right provisional methodology for getting you
there, or at least getting you close enough. The rest may very well
amount to only the "fine tuning."
> BTW, I quite agree with your more general thesis on the likely
> inadequacy of symbols (alone) to capture mind.
I'm glad of that. But I have to point out that a lot of what you
appear to disagree about went into the reasons supporting that very
thesis, and vice versa.
-----
May I append here a reply to andrews@ubc-cs.UUCP (Jamie Andrews) who
wrote:
> This endless discussion about the Turing Test makes the
> "eliminative materialist" viewpoint very appealing: by the
> time we have achieved something that most people today would
> call intelligent, we will have done it through disposing of
> concepts such as "intelligence", "consciousness", etc.
> Perhaps the reason we're having so much trouble defining
> a workable Turing Test is that we're essentially trying to
> fit a square peg into a round hole, belabouring some point
> which has less relevance than we realize. I wonder what old
> Alan himself would say about the whole mess.
On the contrary, rather than disposing of them, we will finally have
some empirical and theoretical idea of what their functional basis
might be, rather than simply knowing what it's like to have them. And
if we don't first sort out our methodological constraints, we're not
headed anywhere but in hermeneutic circles.
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
Date: 3 Nov 86 17:41:54 GMT
From: mcvax!ukc!warwick!rlvd!kgd@seismo.css.gov (Keith Dancey)
Subject: Re: Searle, Turing, Symbols, Categories
In article <5@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>
>What do you think "having intelligence" is? Turing's criterion
>effectively made it: having performance capacity that is indistinguishable
>from human performance capacity. And that's all "having a mind"
>amounts to (by this objective criterion). ...
At the risk of sidetracking this discussion, I don't think it wise to try
and equate 'mind' and 'intelligence'. A 'mind' is an absolute thing, but
'intelligence' is relative.
For instance, most people would, I believe, accept that a monkey has a
'mind'. However, they would not necessarily so easily accept that a
monkey has 'performance capacity that is indistinguishable from human
performance capacity'.
On the other hand, many people would accept that certain robotic
processes had 'intelligence', but would be very reluctant to attribute
them with 'minds'.
I think there is something organic about 'minds', but 'intelligence' can
be codified, within limits, of course.
I apologise if this appears as a red-herring in the argument.
--
Keith Dancey, UUCP: ..!mcvax!ukc!rlvd!kgd
Rutherford Appleton Laboratory,
Chilton, Didcot, Oxon OX11 0QX
JANET: K.DANCEY@uk.ac.rl
Tel: (0235) 21900 ext 5716
------------------------------
End of AIList Digest
********************
∂08-Nov-86 0803 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #257
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 8 Nov 86 08:03:08 PST
Date: Fri 7 Nov 1986 22:25-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #257
To: AIList@SRI-STRIPE
AIList Digest Saturday, 8 Nov 1986 Volume 4 : Issue 257
Today's Topics:
Administrivia - Net.ai is Being Renamed Comp.ai
Queries - TCPIP Between Symbolics, Xerox 1100s and VAXs &
Connectionism & Neural Modeling,
Learning - Boltzmann Machines and Simulated Annealing,
Expert Systems - Performance Analysis/Tuning Query,
Education - Cognitive Science Grad Schools,
Logic - Nonmonotonic Reasoning,
Literature - Sentient-Computer Novels & The AI Gang
----------------------------------------------------------------------
Date: 7 Nov 86 20:01:25 GMT
From: cbosgd!mark@ucbvax.Berkeley.EDU (Mark Horton)
Subject: net.ai is being renamed comp.ai
This newsgroup is being renamed from net.ai to comp.ai.
This renaming will gradually take place over the next few weeks.
More and more messages posted to this newsgroup will be aliased
into the new newsgroup as they pass through the net, and people
will begin to post to the new group. After a few weeks, the
old name will be removed.
This note is to inform you of the renaming so you can begin to
read the new group as well as the old group.
Mark Horton
Director, the UUCP Project
------------------------------
Date: 4 Nov 86 19:29:27 GMT
From: ihnp4!ihwpt!clarisse@ucbvax.Berkeley.EDU (olivier clarisse)
Subject: Do YOU TCPIP between: Symbolics, Xerox 1100's and VAX's?
Does anyone of you AI WORKSTATION USERS work on a local network
(ethernet) running TCPIP and have used SYMBOLICS 3600s and (or)
VAXes running UNIX (system 5 for example) as hosts for FTP to
XEROX 1186 (or 110X)?
IF YOU HAVE experienced such things, please let me know how it goes:
great or terrible? Is the communication smooth (or like a dotted
line?) Do you use the SYMBOLICS (VAX) as a file server? Have you
purchased software to be able to use the 110X as a host too?
Which one? (The 110X as a host is not supported on TCPIP by XEROX,
I just heard, while FTP is supported if someone else is the host.)
Please let me know about your exciting experiences with TCPIP/FTP
and be as specific as possible with respect to the system/software
used. THANKS AN 1186 TIMES!
Olivier Clarisse
clarisse@ihesa@ihnp4.uucp
(312) 979-3558
------------------------------
Date: 7 Nov 86 22:06:21 GMT
From: mcvax!cernvax!ethz!wyle@seismo.css.gov (Mitchell Wyle)
Subject: Connectionism, neural networks: new mail list or group?
Is anyone interested in a net.connectionism group? What about a mailing
list? If anyone is interested in contributing to or receiving a
tentative bibliography of connectionism/neural nets, let me know.
~~~~~~~~~~~~~~~~~~~~~~
Mitch Wyle ...!decvax!seismo!mcvax!cernvax!ethz!Wyle
Institut fuer Informatik Wyle%ifi.ethz.cernvax.<network of your choice>
ETH / SOT
8092 Zuerich, Switzerland "Ignore alien orders."
------------------------------
Date: 4 Nov 86 11:25:18 GMT
From: mcvax!ukc!dcl-cs!strath-cs!jrm@seismo.css.gov (Jon R Malone)
Subject: Request for information (Brain/Parallel fibers)
<<<<Lion eaters beware>>>>
Nice guy, into brains, would like to meet similarly minded people.
Seriously : considering some simulation of neural circuits. Would
like pointers to any REAL work that is going on (PS I have read the
literature).
Keen to run into somebody that is interested in simulation at a low-level.
Specifically:
* mossy fibers/basket cells/purkyne cells
* need to find out parallel fiber details:
* length of
* source of/destination of
Any pointers or info would be appreciated.
------------------------------
Date: 4 Nov 86 18:40:15 GMT
From: mcvax!ukc!stc!datlog!torch!paul@seismo.css.gov (paul)
Subject: Re: THINKING COMPUTERS ARE A REALITY (?)
People who read the original posting in net.general (and the posting about
neural networks in this newsgroup) may be interested in the following papers:
Boltzmann Machines: Constraint Satisfaction Networks that Learn.
by Geoffrey E. Hinton, Terrence J. Sejnowski and David H. Ackley
Technical Report CMU-CS-84-119
(Carnegie-Mellon University May 1984)
Optimisation by Simulated Annealing
by S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi
Science Vol. 220 No. 4598 (13th May 1983).
...in addition to those recommended by Jonathan Marshall.
Personally I regard this type of machine learning as something of a holy grail.
In my opinion (and I stress that it IS my own opinion) this is THE way to
get machines that are both massively parallel and capable of complex tasks
without having a programmer who understands the ins and outs of the task
to be accomplished and who is prepared to spend the time to hand code (or
design) the machine necessary to do it. The only reservation I have is whether
or not the basic theory behind Boltzmann machines is good enough.
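[The Kirkpatrick et al. procedure cited above is compact enough to sketch.
A minimal, hypothetical Python rendering -- the function names, cooling
schedule parameters, and toy objective are illustrative choices, not taken
from the paper:]

```python
import math
import random

def simulated_annealing(energy, neighbor, state,
                        t_start=10.0, t_end=0.01, alpha=0.95):
    """Minimize `energy`: always accept downhill moves, accept uphill
    moves with probability exp(-dE/T), and cool T geometrically."""
    t = t_start
    best = state
    while t > t_end:
        candidate = neighbor(state)
        d_e = energy(candidate) - energy(state)
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            state = candidate
            if energy(state) < energy(best):
                best = state
        t *= alpha
    return best

# Toy instance: a 1-D quartic with two basins; the global minimum
# lies in the basin near x = -1, which greedy descent from x = 2
# would miss.
random.seed(0)
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
step = lambda x: x + random.uniform(-0.5, 0.5)
result = simulated_annealing(f, step, state=2.0)
```

[The occasional uphill acceptance is what distinguishes this from plain
hill-climbing, and it is the same mechanism that drives the stochastic
units of a Boltzmann machine.]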
Paul.
------------------------------
Date: Fri 7 Nov 86 12:30:49-MST
From: Sue Tabron <TABRON@SIMTEL20.ARPA>
Subject: expert systems for performance analysis/tuning?
I am interested in finding public domain or commercial expert systems
that can be used to analyze and tune the performance of computer
systems. I would like to hear from anyone with experience in this
area or who is developing these applications.
Mott Given <tabron@simtel20>
(614)238-9431
------------------------------
Date: 6 Nov 86 17:51:11 GMT
From: tektronix!orca!tekecs!mikes@ucbvax.Berkeley.EDU (Michael
Sellers)
Subject: choosing grad schools
[Note: This is a new subject. The words "Turing", "Searle", "ducks",
and "categories" do not appear in this posting...okay, so I lied :-)]
There was a little discussion some time ago regarding grad programs
in cognitive science. Well, its that time of year when I begin to
dream about selling the house and going for the old P, h, & D. So: For
those of you who are in doctrate programs (or master's programs, too)
in cognitive science, how did you choose the program you're in? What
do you like/dislike about it? What are your employment prospects when
you're done? What sorts of things drove your decision of what school
to go to? What is your personal situation (single/married x number of
kids, x years work experience, etc)? What I'm hoping to get is an idea
of what the various programs are like from the inside; I can get all the
propaganda I can stomach from various admissions offices.
Thanks for your help. Post or e-mail as you want; if there is a lot
of mail I'll summarize and post it.
--
Mike Sellers
UUCP: {...your spinal column here...}!tektronix!tekecs!mikes
"In a quiet moment, you can just hear them brain cells a-dyin'"
------------------------------
Date: 7 Nov 86 15:34:47 GMT
From: sdics!norman@sdcsvax.ucsd.edu (Donald A. Norman)
Subject: Re: choosing grad schools
(Weird that so many Cognitive Science issues end up in the Cognitive
Engineering and AI newsgroups. Cog-Eng was originally human-computer
interaction (the engineering, applied side of studies of cognition).
As for AI, well, the part that deals with the understanding and
simulation of thought is a subset of Cognitive Science, so it
belongs.)
Grad schools in Cognitive Science. I would like to hear a summary
(from knowledgeable folks) of what exists. Here is what I know.
There are NO departments of Cognitive Science.
I know of only three places that offer degrees that have the phrase
"Cognitive Science" in them (and 3 more that might, but I am not
sure). The three I know of are Brown, MIT, and UC San Diego (UCSD).
The three I am not sure about are Rochester, SUNY Buffalo, and UC,
Berkeley (UCB).
Brown has a department of Linguistics and Cognitive Science. MIT has
a department of Brain and Cognitive Science. UCSD has a "program in
Cognitive Science" that offers a degree that reads "PhD in X and
Cogntive Science", where X is one of the particpating departments
(Anthropology, Computer Science, Linguistics, Music, Neuroscience,
Philosophy, Psychology, Sociology)
Rochester, SUNY Buffalo, and UCB have programs that might also offer
some sort of degree, but I am not certain. Many other places have
research programs in Cognitive Science, but as far as I know, no
degree program.
The UCSD program, for example, does not admit students directly into
the program (we can't: we are not a department). Students apply to
and are admitted into one of the cooperating departments. At the end
of the first year of study, they apply to and enter the joint program
with Cog Sci. At the end, the degree reads "PhD in X and Cognitive
Science."
There is a major debate in the Cognitive Science community over
whether or not it is premature to offer PhDs in Cognitive Science.
There are no departments yet (so no jobs in academia) and most
industry has not heard of the concept. (There are some exceptions in
industry, mostly the major research labs (Xerox PARC, IBM, MCC, Bell
Labs, Bellcore).
UCSD is considering starting a department. The Dean is establishing a
high powered committee to look over the program and make
recommendations. It would take from 2 to 5 years to get a department.
(Establishing a new department in a new field is a major undertaking.
And it requires approval of umpteen campus committees, umpteen
state-wide committees, a state overseeing body for higher education in
general, the regents, and probably the US senate.)
I would appreciate hearing updates from people associated with other
programs/departments/groups in Cognitive Science.
Donald A. Norman
Institute for Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093
norman@nprdc.arpa norman@ics.ucsd.EDU
------------------------------
Date: Thu 6 Nov 1986 10:17:14
From: ether.allegra%btl.csnet@RELAY.CS.NET
Subject: non-monotonic reasoning
John Nagle, in a recent posting, writes:
> Non-monotonic reasoning is an attempt to make reasoning systems
> less brittle, by containing the damage that can be caused by
> contradiction in the axioms. The rules of inference of non-monotonic
> reasoning systems are weaker than those of traditional logic.
Most nonmonotonic reasoning formalisms I know of (default logic,
autoepistemic logic, circumscription, NML I and II, ...) incorporate
a first-order logic as a subset. Their rules of inference are thus
*stronger* than traditional logics'. I think Nagle is thinking of
Relevance Logic (see Anderson & Belnap), which does make an effort
to contain the effects of contradiction by weakening the inference
rules (avoiding the paradoxes of implication).
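[A toy illustration of why such inference is called nonmonotonic: a
conclusion licensed by a default can be withdrawn when the fact base
grows. The rule and the "Tweety" example are the standard textbook
sketch (here via naive negation-as-failure), not a full default-logic
prover:]

```python
def flies(facts):
    """Default rule: bird(x) and not provably penguin(x) => flies(x).
    Adding facts can retract conclusions -- the set of theorems does
    not grow monotonically with the set of premises."""
    return {x for (p, x) in facts if p == "bird"
            and ("penguin", x) not in facts}

kb = {("bird", "tweety")}
assert "tweety" in flies(kb)        # drawn by default
kb.add(("penguin", "tweety"))
assert "tweety" not in flies(kb)    # withdrawn on new evidence
```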
As for truth-maintenance systems, contrary to Nagle and popular
mythology, these systems typically do *not* avoid contradictions
per se. What they *do* do is prevent one from 'believing' all
of a set of facts explicitly marked as contradictory by the
system using the TMS. These systems don't usually have any
deductive power at all, they are merely constraint satisfaction
devices.
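[A minimal sketch of that constraint-satisfaction behavior. The class and
method names are invented for illustration, and real TMSs also record
justifications; this only shows the "nogood" bookkeeping Etherington
describes -- no deduction, just rejection of belief sets that include a
recorded contradiction:]

```python
class SimpleTMS:
    """Toy TMS fragment: deduces nothing; merely refuses belief sets
    that contain any set of beliefs marked contradictory (a nogood)."""
    def __init__(self):
        self.nogoods = []            # each entry: a frozenset of beliefs

    def mark_contradictory(self, *beliefs):
        self.nogoods.append(frozenset(beliefs))

    def admissible(self, beliefs):
        bs = set(beliefs)
        return not any(ng <= bs for ng in self.nogoods)

tms = SimpleTMS()
tms.mark_contradictory("door-open", "door-locked")
assert tms.admissible({"door-open", "light-on"})
assert not tms.admissible({"door-open", "door-locked"})
```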
David W. Etherington
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 07974-2070
ether%allegra@btl.csnet
------------------------------
Date: 7 Nov 86 23:06:28 GMT
From: voder!lewey!evp@ucbvax.Berkeley.EDU (Ed Post)
Subject: Re: Canonical list of sentient computer novels
> Xref: lewey net.sf-lovers:5135 net.ai:549
>
>
>
> I am trying to compile a canonical list of SF *novels* dealing with (1)
> sentient computers, and (2) human mental access to computers or computer
> networks.....
Some of the classics:
RUR (Rossum's Universal Robots), Karel Capek
Asimov's entire robot series
When Harlie was One, David Gerrold
The Moon is a Harsh Mistress, Robert Heinlein
Colossus (filmed as The Forbin Project), D. F. Jones
--
Ed Post {hplabs,voder,pyramid}!lewey!evp
American Information Technology
10201 Torre Ave. Cupertino CA 95014
(408)252-8713
------------------------------
Date: Thu, 6 Nov 86 09:40 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: AI in Literature
AIList used to include frequent notes about how AI was being presented in
literature, movies and TV shows. I just ran across a new wrinkle.
My daughter recently bought several paperbacks published by New American
Library (Signet) in a series called "The AI Gang". Here is the text from
the jacket of the first book in the series, "Operation Sherlock":
"Five whiz kids who call themselves the AI gang -- for Artificial
Intelligence -- accompany their scientist parents to a small secluded
island. While their parents are teaching a secret computer to think for
itself, the kids try their hand at programming a sleuthing computer named
Sherlock. They soon discover that there is an evil spy out to destroy their
parents' project. When three of the gang are almost killed in an explosion,
the kids and their specially developed crime computer must race against time
to reveal the spy's identity ... before all of them are blown to smithereens
..."
My daughter thought all of the books in the series were pretty good, btw.
Tim
------------------------------
End of AIList Digest
********************
∂12-Nov-86 0126 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #258
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 12 Nov 86 01:26:17 PST
Date: Tue 11 Nov 1986 23:18-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #258
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 12 Nov 1986 Volume 4 : Issue 258
Today's Topics:
Query - Uncertainty and Belief in Expert Systems,
Seminars - Is Probability Adequate? (MIT) &
Uncertain Data Management (IBM) &
Qualitative Reasoning about Mechanisms (SMU) &
Diagnostic Systems (SMU) &
Programming Descriptive Analogies by Example (Rutgers) &
Explicit Contextual Knowledge in Learning (Rutgers) &
Verification in Model-Based Recognition (MIT) &
Formalizing the Notion of Context (SU),
Conference - 2nd Knowledge Acquisition Workshop
----------------------------------------------------------------------
Date: 5 NOV 86 20:37-EST
From: PJURKAT%SITVXA.BITNET@WISCVM.WISC.EDU
Subject: SPRING RESEARCH SEMINAR AT STEVENS INSTITUTE OF TECHNOLOGY
In association with two other faculty members of the Department of Management,
I plan to offer a semester long research seminar, in the Spring 1987 semester,
entitled
REPRESENTATION OF UNCERTAINTY AND BELIEF IN EXPERT SYSTEMS
To be covered are representations based on Bayesian theory, statistical
inference and sampling distributions, discriminant functions, Shafer's theory
of evidence, and fuzzy set theory. Participants will be asked to concentrate
on finding and testing evidence that supports (or refutes) any of these theories
as descriptions of how experts actually deal with uncertainty and belief.
The other faculty will review the representation work of cognitive science and
experimental psychology.
This note is to ask readers to pass on any recent work in these areas,
particularly any experimental evidence on the actual workings of experts.
We have the Kahneman, Slovic and Tversky book "Judgment under uncertainty:
Heuristics and biases", published in 1982.
I will post any interesting ideas and work that comes out of the seminar.
Thank you for your consideration. Peter Jurkat (pjurkat@sitvxa)
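As a baseline for comparing the representations the seminar will cover, the Bayesian scheme updates a degree of belief by Bayes' rule. A minimal sketch, with illustrative numbers invented here:

```python
# Bayes' rule update: P(H|E) = P(E|H) P(H) / P(E).  Numbers are invented
# for illustration only.

def bayes_update(prior, likelihood, likelihood_not):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence

# An expert's prior belief of 0.1 in a hypothesis, and a test with a 90%
# true-positive rate and a 20% false-positive rate:
posterior = bayes_update(0.1, 0.9, 0.2)
assert abs(posterior - 1 / 3) < 1e-9
```

The competing schemes (Shafer's belief functions, fuzzy sets) differ precisely in rejecting the requirement that all the probabilities in such an update be available.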
------------------------------
Date: Wed 5 Nov 86 15:50:40-EST
From: Rosemary B. Hegg <ROSIE@XX.LCS.MIT.EDU>
Subject: Seminar - Is Probability Adequate? (MIT)
UNCERTAINTY SEMINAR ON MONDAY
Date: Monday, November 10, 1986
Time: 3.45 pm...Refreshments
4.00 pm...Lecture
Place: NE43-512A
UNCERTAINTY IN AI:
IS PROBABILITY EPISTEMOLOGICALLY AND HEURISTICALLY ADEQUATE?
MAX HENRION
Carnegie Mellon
New schemes for representing uncertainty continue to
proliferate, and the debate about their relative merits seems to
be heating up. I shall examine several criteria for comparing
probabilistic representations to the alternatives. I shall
argue that criticisms of the epistemological adequacy of
probability have been misplaced. Indeed there are several
important kinds of inference under uncertainty which are
produced naturally from coherent probabilistic schemes, but are
hard or impossible for alternatives. These include combining
dependent evidence, integrating diagnostic and predictive
reasoning, and "explaining away" symptoms. Encoding uncertain
knowledge in predictive or causal form, as in Bayes' Networks,
has important advantages over the currently more popular
diagnostic rules, as used in Mycin-like systems, which confound
knowledge about the domain and about inference methods.
Suggestions that artificial systems should try to simulate human
inference strategies, with all their documented biases and
errors, seem ill-advised. There is increasing evidence that
popular non-probabilistic schemes, including Mycin Certainty
Factors and Fuzzy Set Theory, perform quite poorly under some
circumstances. Even if one accepts the superiority of
probability on epistemological grounds, the question of its
heuristic adequacy remains. Recent work by Judea Pearl and
myself uses stochastic simulation and probabilistic logic for
propagating uncertainties through multiply connected Bayes'
networks. This aims to produce probabilistic schemes that are
both general and computationally tractable.
HOST: PROF. PETER SZOLOVITS
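The "explaining away" pattern Henrion mentions falls out of ordinary joint-probability bookkeeping. A minimal sketch with two independent causes of one symptom (all numbers invented for illustration):

```python
from itertools import product

# Two independent causes A and B of a symptom S (probabilities invented).
P_A, P_B = 0.1, 0.1

def p_s(a, b):
    """P(S=1 | A=a, B=b): either cause suffices to produce the symptom."""
    return 0.9 if (a or b) else 0.05

def joint(a, b, s):
    pa = P_A if a else 1 - P_A
    pb = P_B if b else 1 - P_B
    ps = p_s(a, b) if s else 1 - p_s(a, b)
    return pa * pb * ps

def cond(query, given):
    """P(query | given) by brute-force enumeration over worlds (a, b, s)."""
    num = sum(joint(a, b, s) for a, b, s in product([0, 1], repeat=3)
              if query((a, b, s)) and given((a, b, s)))
    den = sum(joint(a, b, s) for a, b, s in product([0, 1], repeat=3)
              if given((a, b, s)))
    return num / den

p_a_given_s = cond(lambda w: w[0] == 1, lambda w: w[2] == 1)
p_a_given_sb = cond(lambda w: w[0] == 1, lambda w: w[2] == 1 and w[1] == 1)
# Learning that B is present "explains away" the symptom, lowering belief in A:
assert p_a_given_sb < p_a_given_s
```

This inference runs counter to simple diagnostic-rule schemes, in which evidence for B would never decrease belief in A.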
------------------------------
Date: Thu, 06 Nov 86 10:19:35 PST
From: IBM Almaden Research Center Calendar <CALENDAR@IBM.COM>
Subject: Seminar - Uncertain Data Management (IBM)
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120-6099
UNCERTAIN DATA MANAGEMENT
L. A. Zadeh, Computer Science Division, University of California, Berkeley
Computer Science Sem. Wed., Nov. 12 10:00 A.M. Room: Rear Audit.
The issue of data uncertainty has not received much attention in the
literature of database management even though the information resident
in a database is frequently incomplete, imprecise or not totally
reliable. Classical probability-based methods are of limited
effectiveness in dealing with data uncertainty, largely because the
needed joint probabilities are not known. Among the approaches which
are more effective are (a) support logic programming which is
Prolog-based, and (b) probabilistic logic. In our approach,
uncertainty is modeled by (a) allowing the entries in a table to be
set-valued or, more generally, to be characterized as possibility
distributions, and (b) interpreting a column as a source of evidence
which may be fused with other columns. This model is closely related
to the Dempster-Shafer theory of evidence and provides a conceptually
simple method for dealing with some of the important types of
uncertainty. In its full generality, the problem of uncertain data
management is quite complex and far from solution at this juncture.
Host: S. P. Ghosh
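The Dempster-Shafer combination to which the abstract alludes can be sketched for two columns treated as independent evidence sources. A toy two-element frame with invented mass assignments:

```python
# Dempster's rule of combination over a tiny frame {a, b}.
# Mass functions are dicts keyed by frozenset focal elements; the masses
# below are invented for illustration.

def combine(m1, m2):
    """Combine two mass functions by Dempster's rule."""
    raw = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                raw[inter] = raw.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2      # mass committed to the empty set
    k = 1.0 - conflict                   # normalize away conflicting mass
    return {s: v / k for s, v in raw.items()}

A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.6, AB: 0.4}    # source 1: substantial support for a
m2 = {B: 0.3, AB: 0.7}    # source 2: weak support for b
m = combine(m1, m2)
assert abs(sum(m.values()) - 1.0) < 1e-9
```

Note that the mass a source leaves on the whole frame (AB) represents ignorance, which ordinary probability cannot express without committing to a prior.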
------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Qualitative Reasoning about Mechanisms (SMU)
Dr. Benjamin Kuipers, Qualitative Reasoning About Mechanisms
10:00 AM, Friday, 7 November 1986
The first generation of diagnostic expert systems is based on a simple
model of knowledge: weighted links between observations and diagnoses.
Experience with these systems has revealed a number of limitations in
their performance due to the fact that they do not understand the
mechanism by which a particular fault causes the associated
observations. Recently developed methods for qualitative reasoning
about these underlying mechanisms show promise of being able to extend
the understanding, and hence the power, of diagnostic systems. The
fundamental inference in qualitative reasoning derives the behavior of
a mechanism from a description of its structure. Since both structure
and behavior are represented in qualitative terms, this is essentially
a qualitative abstraction of differential equations. I will derive in
detail the QSIM approach to qualitative reasoning, and demonstrate a
medical example in which QSIM predicts the behavior of a healthy
mechanism, the "broken" mechanism corresponding to a particular
disease, and the response of that broken mechanism to therapy.
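The qualitative abstraction Kuipers describes replaces numeric values with signs. A minimal sketch of the sign algebra underlying such reasoning (my illustration of the general idea, not QSIM itself):

```python
# Sign algebra over {'-', '0', '+'} with '?' for "indeterminate" -- the
# qualitative abstraction of arithmetic used in qualitative simulation.
# (Toy illustration, not the QSIM system described above.)

def qadd(x, y):
    """Qualitative sum of two signed quantities."""
    if x == '0':
        return y
    if y == '0':
        return x
    return x if x == y else '?'   # opposite signs: sign is indeterminate

def qmul(x, y):
    """Qualitative product of two signed quantities."""
    if '0' in (x, y):
        return '0'
    return '+' if x == y else '-'

# Example: d(level)/dt has the sign of (inflow - outflow).  With both
# flows positive, the sign is indeterminate without magnitudes:
assert qadd('+', qmul('-', '+')) == '?'
# With no outflow, the level must rise:
assert qadd('+', '0') == '+'
```

Exactly this loss of information (the '?' cases) is why qualitative simulation must branch over possible behaviors rather than compute a single trajectory.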
------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Seminar - Diagnostic Systems (SMU)
Dr. William P. C. Ho
Department of Computer Science and Engineering
Southern Methodist University
IEEE Computer Society Meeting, October 23, 1986
Diagnosis is the process of determining the cause (a set of one or more
physical component faults - the "hypothesis") given the effect (a set of
one or more behavior deviations - the "signature") for a given mechanism.
Ambiguity in interpreting fault signatures is the diagnosis problem.
I am developing an approach for functional diagnosis of multiple
component faults in mechanisms based on the "constraint satisfaction"
paradigm (as opposed to the "heuristic search" of "hypothesize and test").
Component faults and behavior deviations are both represented
qualitatively by a set of 5 possible state values. Diagnostic
reasoning is performed with these representations based on an effect
calculus which combines multiple single-fault effects into a single
multiple-fault effect quickly, without simulation. Diagnostic
reasoning, encapsulated in a set of logical inference rules, is used
to generate constraints, as implications of observed effects, which
prune away subspaces of inconsistent hypotheses. The result is a
complete set of consistent hypotheses which can explain all of the
observed effects.
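The pruning step Ho describes can be sketched as plain constraint filtering over a multiple-fault hypothesis space. This toy stands in for his effect calculus, with invented components and constraints:

```python
from itertools import combinations

# Toy constraint-based diagnosis: a hypothesis is a subset of components
# assumed faulty; constraints derived from observed effects prune away
# subspaces of inconsistent hypotheses.  (Invented example, not Ho's system.)
components = ['c1', 'c2', 'c3']

def hypotheses(comps):
    """All subsets of the component set, from no faults to all faulty."""
    return [set(s) for r in range(len(comps) + 1)
                   for s in combinations(comps, r)]

# Suppose the observed effects imply: c1 or c2 is faulty, and c3 is normal.
constraints = [lambda h: 'c1' in h or 'c2' in h,
               lambda h: 'c3' not in h]

consistent = [h for h in hypotheses(components)
              if all(c(h) for c in constraints)]
# The complete set of consistent hypotheses: {c1}, {c2}, {c1, c2}
assert consistent == [{'c1'}, {'c2'}, {'c1', 'c2'}]
```

The point of the abstract's effect calculus is to generate such constraints directly from observations, so the exponential hypothesis space never has to be searched by simulation.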
------------------------------
Date: 9 Nov 86 14:00:17 EST
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Programming Descriptive Analogies by Example
(Rutgers)
On Tuesday November 25th, Henry Lieberman of MIT will speak on
"Programming Descriptive Analogies by Example". The abstract follows.
(The exact time will be decided later - it will probably be
10 AM in Hill-250.)
Programming Descriptive Analogies By Example
Henry Lieberman
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
This paper describes a system for "programming by analogy", called Likewise.
Using this new approach to interactive knowledge acquisition, a programmer
presents specific examples and points out which aspects of the examples are
"slippable" to more general situations. The system constructs a general
rule which can then be applied to "analogous" examples. Given a new
example, the system can then construct an analogy with the old example by
trying to instantiate new descriptions which correspond to the descriptions
constructed for the first example. If a new example doesn't fit an old
concept exactly, a concept can be generalized or specialized incrementally
to make the analogy go through. Midway between "programming by example" and
inductive inference programs, Likewise attacks the more modest goal of being
able to communicate to the computer an analogy which is already understood
by a person. Its operation on a typical concept learning task is presented
in detail.
------------------------------
Date: 9 Nov 86 15:44:30 EST
From: Smadar <KEDAR-CABELLI@RED.RUTGERS.EDU>
Subject: Seminar - Explicit Contextual Knowledge in Learning (Rutgers)
Reminder: Dissertation Defense for Rich Keller
Time and Place: Thursday, Nov. 13, 1:30 p.m., Hill 423
Committee: Tom Mitchell (chair)
Thorne McCarty
Lou Steinberg
Jack Mostow
Abstract:
The Role of Explicit Contextual Knowledge in
Learning Concepts to Improve Performance
Richard M. Keller
(KELLER@RED.RUTGERS.EDU)
This dissertation addresses some of the difficulties encountered
when using artificial intelligence-based, inductive concept learning
methods to improve an existing system's performance. The underlying
problem is that inductive methods are insensitive to changes in the
system being improved by learning. This insensitivity is due to the
manner in which contextual knowledge is represented in an inductive
system. Contextual knowledge consists of knowledge about the context
in which concept learning takes place, including knowledge about the
desired form and content of concept descriptions to be learned (target
concept knowledge), and knowledge about the system to be improved by
learning and the type of improvement desired (performance system
knowledge). A considerable amount of contextual knowledge is
"compiled" by an inductive system's designers into its data structures
and procedures. Unfortunately, in this compiled form, it is difficult
for the learning system to modify its contextual knowledge to
accommodate changes in the learning context over time.
This research investigates the advantages of making contextual
knowledge explicit in a concept learning system by representing that
knowledge directly, in terms of express declarative structures. The
thesis of this research is that aside from facilitating adaptation to
change, explicit contextual knowledge is useful in addressing two
additional problems with inductive systems. First, most inductive
systems are unable to learn approximate concept descriptions, even
when approximation is necessary or desirable to improve performance.
Second, the capability of a learning system to generate its own
concept learning tasks appears to be outside the scope of current
inductive systems.
To investigate the thesis, this study introduces an alternative
concept learning framework -- the concept operationalization framework
-- that requires various types of contextual knowledge as explicit
inputs. To test this new framework, an existing inductive concept
learning system (the LEX system [Mitchell et al. 81]) was rewritten as
a concept operationalization system (the MetaLEX system). This
dissertation describes the design of MetaLEX and reports results of
several experiments performed to test the system. Results confirm the
utility of explicit contextual knowledge, and suggest possible
improvements in the representations and methods used by the system.
------------------------------
Date: Mon, 10 Nov 1986 21:08 EST
From: JHC%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Seminar - Verification in Model-Based Recognition (MIT)
THE USE OF VERIFICATION IN MODEL-BASED RECOGNITION
David Clemens, MIT AI Lab
The recognition of objects in images involves a gigantic and complex
search through a library of models. Even for a single model, the
correspondence between parts of the model and parts of the image can
be difficult, especially if parts of the object may be occluded in the
image. Verification is a general search strategy which can reduce the
amount of processing required to find the best image/model match, but
it cannot guarantee that the best match has been found. Verification
is the Test phase of the familiar Hypothesize and Test paradigm, and
is commonly used in the last stages of recognition to weed out final
hypotheses. However, the concept can be applied more generally and
used to drive the recognition process at much earlier stages. Also
called "hypothesis-driven" recognition, this approach allows a more
focused search for evidence to support, invalidate, or modify a
hypothesis, thus decreasing the amount of data processed and improving
the accuracy of the interpretation. Unfortunately, it requires a
commitment to a finite set of initial hypotheses which must include an
early version of correct hypotheses. Thus, there are trade-offs
between hypothesis-driven modules and "data-driven" modules, which
simply process all data uniformly without committing to early
hypotheses. Several recognition systems will be discussed in this
context, demonstrating the strengths and weaknesses of the two basic
approaches applied to visual object recognition.
Thursday, November 13, 4pm
NE43 8th floor playroom
------------------------------
Date: 10 Nov 86 1108 PST
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - Formalizing the Notion of Context (SU)
Commonsense and Non-Monotonic Reasoning Seminar
FORMALIZING THE NOTION OF CONTEXT
John McCarthy
Thursday, November 13, 4pm
MJH 252
Getting a general database of common sense knowledge and
expressing it in logic requires formalizing the notion of context.
Since no context is absolutely general, any context must be elaboration
tolerant and we discuss this notion. Another formalism that seems
useful involves entering and leaving contexts; this is a generalization
of natural deduction.
------------------------------
Date: Mon, 10 Nov 86 10:57:39 pst
From: bcsaic!john@june.cs.washington.edu
Subject: Conference - 2nd Knowledge Acquisition Workshop
Call for Participation:
2ND KNOWLEDGE ACQUISITION FOR KNOWLEDGE-BASED SYSTEMS WORKSHOP
Sponsored by the:
AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE (AAAI)
Banff, Canada
October 19-23, 1987
A problem in the process of building knowledge-based systems is acquiring
appropriate problem-solving knowledge. The objective of this workshop is to
assemble theoreticians and practitioners of AI who recognize the need for
developing systems that assist the knowledge acquisition process.
To encourage vigorous interaction and exchange of ideas the workshop will be
kept small - about 40 participants. There will be individual presentations
and ample time for technical discussions. An attempt will be made to define
the state-of-the-art and the future research needs. Attendance will be
limited to those presenting their work, one author per paper.
Papers are invited for consideration in all aspects of knowledge acquisition
for knowledge-based systems, including (but not restricted to)
o Transfer of expertise - systems that obtain knowledge from experts.
o Transfer of expertise - manual knowledge acquisition methods and
techniques.
o Apprenticeship learning systems.
o Issues in cognition and expertise that affect the knowledge
acquisition process.
o Induction of knowledge from examples.
o Knowledge acquisition methodology and training.
Five copies of an abstract (up to 8 pages) or a full-length paper (up to 20
pages) should be sent to John Boose before April 15, 1987. Acceptance notices
will be mailed by June 15. Full papers (20 pages) should be returned to the
chairman by September 15, 1987, so that they may be bound together for
distribution at the workshop.
Ideal abstracts and papers will make pragmatic or theoretical contributions
supported by a computer implementation, and explain them clearly in the
context of existing knowledge acquisition literature. Variations will be
considered if they make a clear contribution to the field (for example,
comparative analyses, major implementations or extensions, or other analyses
of existing techniques).
Workshop Co-chairmen:
Send papers via US mail to:
John Boose Brian Gaines
Advanced Technology Center Department of Computer Science
Boeing Computer Services University of Calgary
PO Box 24346 2500 University Dr. NW
Seattle, Washington, USA 98124 Calgary, Alberta, Canada T2N 1N4
Send papers via express mail to:
John Boose
Advanced Technology Center
Boeing Computer Services, Bldg. 33.07
2760 160th Ave. SE
Bellevue, Washington, USA 98008
Program and Local Arrangements Committee:
Jeffrey Bradshaw, Boeing Computer Services
B. Chandrasekaran, Ohio State University
Catherine Kitto, Boeing Computer Services
Sandra Marcus, Boeing Computer Services
John McDermott, Carnegie-Mellon University
Ryszard Michalski, University of Illinois
Mildred Shaw, University of Calgary
------------------------------
End of AIList Digest
********************
∂12-Nov-86 0350 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #259
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 12 Nov 86 03:49:57 PST
Date: Tue 11 Nov 1986 23:26-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE>
Reply-to: AIList@SRI-STRIPE
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #259
To: AIList@SRI-STRIPE
AIList Digest Wednesday, 12 Nov 1986 Volume 4 : Issue 259
Today's Topics:
Administrivia - Splitting the List,
Literature - Sentient-Computer Novels,
Query - Knowledge-Base Portability,
Logic Programming - Non-Monotonic Reasoning and Truth Maintenance,
Application - Robotic Snooker,
AI Tools - Franz Object-Oriented Packages &
TCP from Xerox to UNIX System V,
Ethics - Mathematics and Humanity & Why Train Machines &
AI and the Arms Race
----------------------------------------------------------------------
Date: Tue 11 Nov 86 09:18:24-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Reply-to: AIList-Request@SRI-AI.ARPA
Subject: Splitting the List
I recently received this request from an AIList reader:
If there is a way to -just- get the seminar announcements periodically
distributed to AIList, then I would like to be placed in that category.
If this is not possible, then I wish to be removed from AIList completely.
I have previously suggested that seminar and conference notices should be
split out as a separate list (at least as long as other traffic remains
so high), but no one has stepped forward to do the remailing. I haven't
the energy to maintain two distribution lists. Volunteers are welcome.
I'm sure there is still plenty of interest in other list topics. The
NL-KR@Rochester list is doing fine, forwarding a great many natural-language
messages that would not have appeared in AIList. IRList%VPI.CSNet has
likewise been successful with information-retrieval topics. AI-Ed@SUMEX
is alive and well. So is the Prolog Digest, which predates AIList.
One reason for splitting the AIList is to reduce Arpanet traffic, which has
been rather high lately, and to reduce costs for those who have to pay
for the transmissions. Another is to reduce the difficulty for the next
AIList moderator if I have to drop out. The best reason, though, is to
boost discussion of the topics that most interest you.
-- Ken Laws
------------------------------
Date: 7 Nov 86 20:57:17 GMT
From: gknight@ngp.utexas.edu (Gary Knight)
Subject: Canonical list of sentient computer novels
Clarification of earlier posting, which is repeated below:
1) No robot novels, please; just non-ambulatory computers; and
2) No short works, just novels.
---
I am trying to compile a canonical list of SF *novels* dealing with (1)
sentient computers, and (2) human mental access to computers or computer
networks. Examples of the two categories (and my particular favorites as well)
are:
A) SENTIENT COMPUTERS
The Adolescence of P-1, by Thomas J. Ryan
Valentina: Soul in Sapphire, by Joseph H. Delaney and Marc Stiegler
Cybernetic Samurai, by (I forget)
Coils, by Roger Zelazny
B) HUMAN ACCESS
True Names, by Vernor Vinge
Neuromancer and Count Zero, by William Gibson
Please send your lists to me by e-mail
and I'll compile and post the ultimate canonical version.
--
Gary Knight, 3604 Pinnacle Road, Austin, TX 78746 (512/328-2480).
Biopsychology Program, Univ. of Texas at Austin. "There is nothing better
in life than to have a goal and be working toward it." -- Goethe.
------------------------------
Date: 11 Nov 86 00:38:00 GMT
From: u1100a!toh@bellcore.com (Tom O. Huleatt)
Subject: Request for knowledge base portability info
[Sorry if you see this twice -- postnews gagged on comp.ai, so I resubmitted.]
Does anyone out there have experience with (or knowledge of)
Knowledge Base portability issues?
We have been using a home-grown rule-based system, and we
are concerned about protecting our knowledge engineering
investment as we move to other (more versatile) expert
system shells. (These new systems will probably be rule-based,
too.)
I only have experience with our current system, so I'm not sure
how much work is required to port one of our knowledge bases.
I'd also be interested to hear any tips about what we could be
doing with our knowledge bases now to increase their portability.
Please send me email with suggestions (or pointers to ref. material).
Thank you, Tom Huleatt [bellcore, ihnp4, pyuxww, allegra]!u1100a!toh
Bell Communications Research
Piscataway, NJ 08854 (201) 699-4506
------------------------------
Date: Mon, 10 Nov 86 18:41:39 PST
From: Tom Dietterich <tgd%oregon-state.csnet@RELAY.CS.NET>
Subject: Non-monotonic reasoning and truth maintenance systems
"These systems don't usually have any deductive power at all,
they are merely constraint satisfaction devices."
--David Etherington
I am confused by this last sentence. Isn't constraint satisfaction
a kind of inference? deKleer's ATMS and McAllester's RUP handle
large portions (maybe all?) of propositional logic.
--Tom Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331
tgd%oregon-state.csnet
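Dietterich's point can be made concrete: even simple Boolean constraint propagation performs propositional deduction. A minimal unit-propagation sketch over clauses (my illustration; not ATMS or RUP, which handle much more):

```python
# Unit propagation over propositional clauses.  A clause is a list of
# signed literals (sign, variable); e.g. (False, 'p') means ~p.
# Constraint satisfaction of this kind *is* a limited form of deduction.

def unit_propagate(clauses):
    """Derive forced assignments by repeatedly applying the unit rule."""
    assignment = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # Skip clauses already satisfied by the current assignment.
            if any(assignment.get(var) == val for val, var in clause):
                continue
            open_lits = [(val, var) for val, var in clause
                         if var not in assignment]
            if len(open_lits) == 1:      # all other literals falsified: forced
                val, var = open_lits[0]
                assignment[var] = val
                changed = True
    return assignment

# From p and (p -> q), written as the clause (~p v q), propagation
# derives q -- i.e. modus ponens as constraint satisfaction:
clauses = [[(True, 'p')], [(False, 'p'), (True, 'q')]]
assert unit_propagate(clauses) == {'p': True, 'q': True}
```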
------------------------------
Date: Mon, 10 Nov 86 09:36:46 GMT
From: Tony Conway <tc%vax-d.rutherford.ac.uk@Cs.Ucl.AC.UK>
Subject: Robotic Snooker
In article <861020-061334-1337@Xerox> MJackson.Wbst@XEROX.COM writes:
>
>Over the weekend I caught part of a brief report on this on Cable News
>Headlines. They showed a large robot arm making a number of impressive
>shots, and indicated that the software did shot selection as well.
>Apparently this work was done somewhere in Great Britain. Can someone
>provide more detail?
>
>Mark
I think that work was probably a project by Richard Gregory (Brain & Perception
Laboratory, Medical School, University of Bristol, Bristol, England)
in conjunction with people in the School of Engineering, Information Technology
Research Centre, University of Bristol.
Not sure if it has been written up anywhere yet.
Richard Gregory is also active in starting up an interactive science
centre (Bristol Exploratory): loosely based on the San Francisco
Exploratorium.
Cheers - 'Tony Conway ( @ucl-cs.arpa:tc@vd.rl.ac.uk )
Informatics, SERC Rutherford Appleton Laboratory,
Chilton, Didcot, Oxon. OX11 0QX, England.
------------------------------
Date: Mon, 10 Nov 86 13:03:12 PST
From: franz!fray!cox@ucbarpa.Berkeley.EDU (Charles A. Cox)
Subject: Franz Object-Oriented Packages
> Date: Wed, 5 Nov 86 13:08:28 EST
> From: weltyc%cieunix@CSV.RPI.EDU (Christopher A. Welty)
> Subject: Looking for Franz OO packages
>
> I am looking for information on Object Oriented extensions to
> Franz Lisp. I know that someone (U of Maryland?) came out with a flavors
> package for Franz, if someone can point me in the right direction there
> it would be appreciated, as well as any info on other packages...
Franz Inc. has a symbolics-compatible flavors package included in its
versions of Franz Lisp (after Opus 42.0).
I don't know much about the U of Maryland's system, but I believe they
ship an entire Franz Lisp system (Opus 38) which includes their flavors
package. The contact used to be Liz Allen (liz@tove.umd.edu).
Other extensions to UC Berkeley's Franz Lisp put in by Franz Inc.
include a common lisp compatible package system, multiple value returns,
keywords, and hash tables.
------------------------------
Date: Mon 10 Nov 86 14:57:20-PST
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: TCP from Xerox to UNIX System V
The TCP/IP package for Interlisp-D works for the most part, but
usually requires a bit of fiddling to make work with any particular partner.
Telnet generally works quite well with almost any host. I've used it
to talk to unix 4.2, 4.3, System V, TOPS-20, and LispM telnet servers.
FTP is a bit trickier and I usually have to run with the
FTPDEBUG window on to figure out what to do. Logical pathname
transformations are sometimes non-obvious and not all servers support
the same set of commands. Since you ask about System V, I'll note
that I've tested FTP against our Silicon Graphics Iris (System V,
Excellan ethernet board) and found it to work OK.
I don't use any TCPFTP server regularly, so I'm not the ideal
reviewer. For nitty-gritty workstation questions, I recommend
querying one of the workstation mailing lists rather than AIList. E.g.,
Bug-1100@SUMEX-AIM.Stanford.edu for Xerox d-machines (of which I am the
moderator), SLUG@R20.UTexas.edu for Symbolics machines, or WorkS@Rutgers
for workstations without their own mailing lists.
--Christopher
------------------------------
Date: Fri, 7 Nov 86 14:54:26 EST
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@RELAY.CS.NET>
Subject: Re: Mathematics and humanity
In <8611050753.AA24198@ucbvax.Berkeley.EDU>, WADLISP7@CARLETON.BITNET writes:
> The inhumanity of *most* mathematics? I would think that from the rest of
> your message, what you would really claim is the inhumanity of *all*
> mathematics -- for *all* of mathematics is entirely devoid of the questions
> of what is morally right or morally wrong, entirely missing all matters of
> human relationships. Mathematical theorems start by listing the assumptions,
> and then indicating how those assumptions imply a result.
This is the specialized mathematician's view of mathematics. The point is
obviously sound, because mathematicians study mathematics as a thing apart.
On the other hand, the mathematics that a herdsman uses to count sheep
belongs to the herdsman's life. It's not formally axiomatized, but it
is human, because it is bound up with the natural human activity of
growing food.
To reinforce the point, many unlettered herdsmen have special numbers
that they use *only* for counting sheep. One can feel that to use
those numbers for counting other things would be to endow those things
with an inappropriate character of sheepliness. Modern mathematics
rests on ignoring such "human" distinctions. The equals sign is the
sine qua non of abstract mathematics--but it does not exist in human
lives.
The cry of "art for art's sake" produced generations of starving artists.
What can we foresee from "math for math's sake?"
------------------------------
Date: Thu, 6 Nov 86 21:35:59 EST
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@RELAY.CS.NET>
Subject: Re: Why train machines
In article <861027-093832-2927@Xerox>, Ghenis.pasa@XEROX.COM writes:
>
> Why do we record music instead of teaching everyone how to sing? To
> preserve what we consider top performance and make it easily available
> for others to enjoy, even if the performer himself cannot be present and
> others are not inclined to or capable of duplicating his work, but
> simply wish to benefit from it.
While I appreciate the point, it raises more questions....
1. Why do we preserve top "performance?" The process of recording music
redefines it as something perfectly repeatable--an effect that began
with the invention of musical notation; jazz, and afterwards composers
like Cage, tried to overturn this definition. But "performance" is also a
social phenomenon, as it distinguishes between producers and consumers.
The consequence of specialization is to retard progress by leaving the
production of music to relatively few people.
2. Has not recorded music become a separate medium in its own right?
Even a "faithful" recording involves a lot of electronic klugery.
Most popular recordings no longer sound like, or can be performed as,
live music.
The second point has implications for A.I.! If you had a robot slave,
how would you treat it? What would you become?
------------------------------
Date: Sat, 8 Nov 1986 13:38 EST
From: LIN@XX.LCS.MIT.EDU
Subject: AI and the Arms Race
[I posted a message from AILIST on ARMS-D, and got back this reply.]
Date: Saturday, 8 November 1986 12:55-EST
From: ihnp4!utzoo!henry at ucbvax.Berkeley.EDU
To: Arms-Discussion
Re: Professionals and Social Responsibility for the Arms Race
> ... This year, Dr. Weizenbaum of MIT was the chosen speaker...
> The important points of the second talk can be summarized as :
> 1) not all problems can be reduced to computation, for
> example how could you conceive of coding the human
> emotion loneliness.
I don't want to get into an argument about it, but it should be pointed
out that this is debatable. Coding the emotion of loneliness is difficult
to conceive of at least in part because we don't have a precise definition
of what the "emotion of loneliness" is. Define it in terms of observable
behavior, and the observable behavior can most certainly be coded.
> 2) AI will never duplicate or replace human intelligence
> since every organism is a function of its history.
This just says that we can't exactly duplicate (say) human intelligence
without duplicating the history as well. The impossibility of exact
duplication has nothing to do with inability to duplicate the important
characteristics. It's impossible to duplicate Dr. Weizenbaum too, but
if he were to die, I presume MIT *would* replace him. I think Dr. W. is
on very thin ice here.
> 5) technical education that neglects language, culture,
> and history, may need to be rethought.
Just to play devil's advocate, it would also be worthwhile to rethink
non-technical education that covers language, culture, and history while
completely neglecting the technological basis of our civilization.
> 8) every researcher should assess the possible end use of
> their own research, and if they are not morally comfortable
> with this end use, they should stop their research...
> He specifically referred to research in machine vision, which he
> felt would be used directly and immediately by the military for
> improving their killing machines...
I'm afraid this is muddy thinking again. *All* technology has military
applications. Mass-production of penicillin, a development of massive
humanitarian significance, came about because of massive military funding
in World War II, funding justified by the tremendous military significance
of effective antibiotics. (WW2 was the first major war in which casualties
from disease were fewer in number than those from bullets etc.) It's hard
to conceive of a field of research which doesn't have some kind of military
application.
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,decvax,pyramid}!utzoo!henry
------------------------------
End of AIList Digest
********************
∂19-Nov-86 0039 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #260
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Nov 86 00:38:49 PST
Date: Tue 18 Nov 1986 22:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #260
To: AIList@SRI-STRIPE.ARPA
AIList Digest Wednesday, 19 Nov 1986 Volume 4 : Issue 260
Today's Topics:
Seminars - A Robust Approach to Plan Recognition (CMU) &
Object-Oriented DBMSs (UPenn) &
The Capacity of Neural Networks (UPenn) &
BoltzCONS: Recursive Objects in a Neural Network (CMU) &
Insight in Human Problem Solving (CMU) &
Analogical and Deductive Reasoning (UCB) &
Planning and Plan Recognition in Office Systems (Rutgers) &
Logic Programming and Circumscription (SU)
----------------------------------------------------------------------
Date: 11 Nov 86 17:51:22 EST
From: Steven.Minton@k.cs.cmu.edu
Subject: Seminar - A Robust Approach to Plan Recognition (CMU)
This week's speaker is Craig Knoblock. Usual time and place, 3:15 in
7220.
Title: A Robust Approach to Plan Recognition
Abstract:
Plan recognition is the process of inferring an agent's plans and goals from
his actions. Most of the previous work on plan recognition has approached
this problem by first hypothesizing a single goal and then attempting to
match the actions with a plan for achieving that goal. Unfortunately, there
are some types of problems where focusing on a single hypothesis will
mislead the system. I will present an architecture for plan recognition
that does not require the system to choose a single goal, but allows several
hypotheses to be considered simultaneously. This architecture uses an
assumption-based truth maintenance system to maintain both the observed
actions and the predictions about the agent's plans and goals.
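The multiple-hypothesis idea can be sketched very roughly as follows. This is an illustrative toy, not Knoblock's ATMS-based architecture: the plan library, goal names, and the pruning rule are all invented for the example.

```python
# Toy sketch of plan recognition that keeps several goal hypotheses
# alive instead of committing to one.  Plan library maps each
# (hypothetical) goal to the set of actions its plan may contain.
PLANS = {
    "make-tea":    {"boil-water", "get-cup", "add-teabag"},
    "make-coffee": {"boil-water", "get-cup", "grind-beans"},
    "wash-dishes": {"boil-water", "get-sponge"},
}

def recognize(observations):
    """Return every goal still consistent with all observed actions."""
    candidates = set(PLANS)
    for action in observations:
        candidates = {g for g in candidates if action in PLANS[g]}
    return candidates

# After "boil-water" alone, all three goals remain plausible; a system
# forced to pick one hypothesis here could easily be misled.  Observing
# "get-cup" then narrows the set to the two drink-making goals.
print(recognize(["boil-water"]))
print(recognize(["boil-water", "get-cup"]))
```

The point of the abstract is precisely that the first observation should leave all three candidates open rather than forcing a premature choice.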
------------------------------
Date: Thu, 13 Nov 86 00:16 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Object-Oriented DBMSs (UPenn)
DBIG Meeting
10:30 Friday November 14th
554 Moore School
University of Pennsylvania
DEVELOPMENT OF AN OBJECT-ORIENTED DBMS
David Maier
Oregon Graduate Center
and
Servio Logic Development Corp
GemStone is an object-oriented database server developed by Servio Logic
that supports a model of objects similar to that of Smalltalk. GemStone
provides complex objects with sharing and identity, specification of
behavioral aspects of objects, and an extensible data model. Those features
came with the choice of Smalltalk as a starting point for the data model and
its programming language, OPAL. However, Smalltalk is a single-user,
memory-based system, and required significant modifications to provide a
multi-user, disk-based system with support for associative queries and
objects of arbitrary size.
This presentation begins with a summary of the requirements for a database
system to support applications such as CAD, office automation and knowledge
bases. I next introduce the Smalltalk language and its data model, showing
how they satisfy some of the requirements, and indicating which remain to be
satisfied. I will outline the approach Servio took on the remaining
requirements, describing the techniques used for storage management,
concurrency, recovery, name spaces and associative access, as time permits.
------------------------------
Date: Thu, 13 Nov 86 23:12 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - The Capacity of Neural Networks (UPenn)
CIS Colloquium
University of Pennsylvania
3pm Tuesday November 18
216 Moore School
THE CAPACITY OF NEURAL NETWORKS
Santosh S. Venkatesh
University of Pennsylvania
Analogies with biological models of brain functioning have led to fruitful
mathematical models of neural networks for information processing. Models of
learning and associative recall based on such networks illustrate how
powerful distributed computational properties become evident as collective
consequence of the interaction of a large number of simple processing
elements (the neurons). A particularly simple neural network model,
composed of densely interconnected McCulloch-Pitts neurons, is utilized in
this presentation to illustrate the capabilities of such structures. It is
demonstrated that while these simple constructs form a complete base for
Boolean functions, the most cost-efficient utilization of these networks
lies in their subversion to a class of problems of high algorithmic
complexity. Specializing to the particular case of associative memory,
efficient algorithms are demonstrated for the storage of memories as stable
entities, or gestalts, and their retrieval from any significant subpart.
Formal estimates of the essential capacities of these schemes are shown. The
ultimate capability of such structures, independent of algorithmic
approaches, is characterized in a rigorous result. Extensions to more
powerful computational neural network structures are indicated.
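The storage-and-retrieval scheme described above can be illustrated with a Hopfield-style associative memory, which is one standard instance of such McCulloch-Pitts networks (this sketch is illustrative and is not claimed to be Venkatesh's particular construction; the 0.14n capacity figure is Hopfield's empirical estimate, not the formal bounds of the talk):

```python
import numpy as np

# Hebbian outer-product storage in a McCulloch-Pitts network, with
# retrieval of a stored +/-1 pattern from a corrupted cue.
def store(patterns):
    """Weight matrix W = sum of outer products, zero diagonal."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, steps=10):
    """Iterate the threshold dynamics s <- sign(W s)."""
    s = np.array(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1          # break ties toward +1
    return s

mem = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store([mem])
noisy = mem.copy()
noisy[0], noisy[3] = -1, 1      # corrupt two bits of the cue
print(np.array_equal(recall(W, noisy), mem))   # the memory is recovered
```

The stored pattern acts as a stable entity (a "gestalt"): the dynamics pull the corrupted subpart back to the whole.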
------------------------------
Date: 12 November 1986 1257-EST
From: Masaru Tomita@A.CS.CMU.EDU
Subject: Seminar - BoltzCONS: Recursive Objects in a Neural Network
(CMU)
Time: 3:30pm
Place: WeH 5409
Date: 11/18, Tuesday
BoltzCONS: Representing and Transforming Recursive
Objects in a Neural Network
David S. Touretzky, CMU CSD
BoltzCONS is a neural network in which stacks and trees are implemented as
distributed activity patterns. The name reflects the system's mixed
representational levels: it is a Boltzmann Machine in which Lisp cons cell-like
structures appear as an emergent property of a massively parallel distributed
representation. The architecture employs three ideas from connectionist symbol
processing -- coarse coded distributed memories, pullout networks, and variable
binding spaces, that first appeared together in Touretzky and Hinton's neural
network production system interpreter. The distributed memory is used to store
triples of symbols that encode cons cells, the building blocks of linked lists.
Stacks and trees can then be represented as list structures, and they can be
manipulated via associative retrieval. BoltzCONS' ability to recognize shallow
energy minima as failed retrievals makes it possible to traverse binary trees
of unbounded depth nondestructively without using a control stack. Its two
most significant features as a connectionist model are its ability to represent
structured objects, and its generative capacity, which allows it to create new
symbol structures on the fly.
A toy application for BoltzCONS is the transformation of parse trees from
active to passive voice. An attached neural network production system contains
a set of rules for performing the transformation by issuing control signals to
BoltzCONS and exchanging symbols with it. Working together, the two networks
are able to cooperatively transform ``John kissed Mary'' into ``Mary was kissed
by John.''
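The representational trick, stripped of its connectionist substrate, can be sketched symbolically. BoltzCONS stores these triples as coarse-coded activity patterns; the Python set and the tag names below are purely illustrative stand-ins:

```python
# Cons cells as (tag, car, cdr) triples in an associative store;
# list traversal is done by associative retrieval on the tag.
memory = set()

def cons(tag, car, cdr):
    memory.add((tag, car, cdr))
    return tag

def retrieve(tag):
    """Associative lookup; an empty result plays the role of a
    'failed retrieval' (a shallow energy minimum in the real model)."""
    matches = [t for t in memory if t[0] == tag]
    return matches[0] if matches else None

# Build the list (John kissed Mary) out of three cells.
cons("c3", "Mary", "nil")
cons("c2", "kissed", "c3")
cons("c1", "John", "c2")

def to_list(tag):
    out = []
    while tag != "nil":
        _, car, cdr = retrieve(tag)
        out.append(car)
        tag = cdr
    return out

print(to_list("c1"))   # ['John', 'kissed', 'Mary']
```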
------------------------------
Date: 14 Nov 86 10:16:55 EST
From: Jeffrey.Bonar@isl1.ri.cmu.edu
Subject: Seminar - Insight in Human Problem Solving (CMU)
An Interdisciplinary Seminar of the Computer Science Department
and the Learning Research and Development Center
UNIVERSITY OF PITTSBURGH
AN INFORMATION PROCESSING ARCHITECTURE
TO EXPLAIN INSIGHT IN HUMAN PROBLEM SOLVING
STELLAN OHLSSON
10:00 AM TO 11:00, FRIDAY, JANUARY 9TH, 1987
LRDC AUDITORIUM, SECOND FLOOR
REFRESHMENTS FOLLOWING
There are currently four models of symbolic computation which are in
frequent use in Cognitive Science work: applicative programming, logic
programming, rule-based programming, and object oriented (frame based)
programming. Each of these exhibits some general properties of human
information processing, but neglects others. For example, LISP contains a
model for the hierarchical structure of action, which Production Systems do not.
What is needed for the simulation of human cognition is a new architecture
which exhibits all of the properties which we know are characteristic of human
cognition, and which "has" them in a natural way. An attempt at defining such
an architecture will be presented. It has grown within a specific simulation
attempt, namely to understand formally what happens in so-called
"Aha"-experiences, moments of insight during problem solving. A theory has
been constructed which explains such events within the information processing
theory of problem solving as heuristic search. The theory is then implemented
within the architecture described. An example of a run of the system will be
described.
For more information, call Cathy Rupp 624-3950
------------------------------
Date: Mon, 17 Nov 86 13:55:23 PST
From: admin%cogsci.Berkeley.EDU@berkeley.edu (Cognitive Science
Program)
Subject: Seminar - Analogical and Deductive Reasoning (UCB)
BERKELEY COGNITIVE SCIENCE PROGRAM
Cognitive Science Seminar - IDS 237A
Tuesday, November 25, 11:00 - 12:30
2515 Tolman Hall
Discussion: 12:30 - 1:30
2515 Tolman Hall
``Analogical and Deductive Reasoning"
Stuart Russell
Computer Science
UC Berkeley
The first problem I will discuss is that of analogical reasoning, the
inference of further similarities from known similarities. Analogy has
been widely advertised as a method for applying past experience in new
situations, but the traditional approach based on similarity metrics has
proved difficult to operationalize. The reason for this seems to be that
it neglects the importance of relevance between known and inferred
similarities. The need for a logical semantics for relevance motivates
the definition of determinations, first-order expressions capturing the
idea of relevance between generalized properties. Determinations are
shown to justify analogical inferences and single-instance
generalizations, and to express an apparently common form of knowledge
hitherto neglected in knowledge-based systems. Essentially, the ability
to acquire and use determinations increases the set of inferences a
system can make from given data. When specific determinations are
unavailable, a simple statistical argument can relate similarity to the
probability that an analogical solution is correct, in a manner closely
connected to Shepard's stimulus generalization results.

The second problem, suggested by and subsuming the first, is to identify
the ways in which existing knowledge can be used to help a system to
learn from experience. I describe a simple method for enumerating the
types of knowledge (of which determinations are but one) that contribute
to learning, so that the monolithic notion of confirmation can be teased
apart. The results find strong echoes in Goodman's work on induction.
The application of a logical, knowledge-based approach to the problems
of analogy and induction indicates the need for a system to be able to
detect as many forms of regularity as possible in order to maximize its
inferential capability. The possibility that important aspects of common
sense are captured by complex, abstract regularities suggests further
empirical research to identify this knowledge.
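A determination can be sketched as follows. This is an informal illustration, not Russell's first-order formalism; the "nationality determines native language" example and all the names are invented for the sketch:

```python
# If P determines Q, then two cases agreeing on P must agree on Q, so a
# single known case justifies an analogical (single-instance) inference.
known = {"Paulo": {"nationality": "Brazilian", "language": "Portuguese"}}

def infer_by_analogy(target, determiner, determined):
    """Find a known case matching the target on the determining
    attribute, and copy over its determined attribute."""
    for case in known.values():
        if case[determiner] == target.get(determiner):
            return case[determined]
    return None   # no analogous case: the determination licenses nothing

# One Brazilian speaker of Portuguese licenses the generalization
# to any other Brazilian, without further instances.
print(infer_by_analogy({"nationality": "Brazilian"},
                       "nationality", "language"))
```

Note that without the determination, nothing in the single case justifies the inference; the determination is exactly the relevance knowledge the abstract argues is missing from similarity metrics.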
------------------------------
Date: 17 Nov 86 12:59:16 EST
From: BORGIDA@RED.RUTGERS.EDU
Subject: Seminar - Planning and Plan Recognition in Office Systems
(Rutgers)
Computer Science Department Colloquium
Date: Thursday November 20
Speaker: Professor Bruce Croft
Title: Planning and Plan Recognition in Office Systems
Affiliation: Department of Computer and Information Science,
University of Massachusetts, Amherst
Time: 10:00 a.m. [NOTE UNUSUAL TIME!!!]
Place: Hill 705
Note: Refreshments will be served at 9:50 a.m.
The office environment provides an ideal testbed for systems
that attempt to represent and support complex, semi-structured
and cooperative activities. It is typical to find a variety of
constraints at different levels of abstraction on activities,
objects manipulated by activities, and people that carry out the
activities. In this talk, we will discuss the use of planning and
plan recognition techniques to support an intelligent interface
for an office system. In particular, we emphasise the use of
object-based models, and the relationship between planning and
plan execution. The types of exceptions that can occur with
underconstrained plans will be described and some suggestions
made about techniques for handling them.
------------------------------
Date: 17 Nov 86 1037 PST
From: Vladimir Lifschitz <VAL@SAIL.STANFORD.EDU>
Subject: Seminar - Logic Programming and Circumscription (SU)
Commonsense and Non-Monotonic Reasoning Seminar
LOGIC PROGRAMMING AND CIRCUMSCRIPTION
Vladimir Lifschitz
Thursday, November 20, 4pm
MJH 252
The talk will be based on my paper "On the declarative semantics of
logic programs with negation". A few copies of the paper are available
in my office, MJH 362.
ABSTRACT. A logic program can be viewed as a predicate formula, and its
declarative meaning can be defined by specifying a certain Herbrand
model of that formula. For programs without negation, this model is
defined either as the Herbrand model with the minimal set of positive
ground atoms, or, equivalently, as the minimal fixed point of a certain
operator associated with the formula (Van Emden and Kowalski). These
solutions do not apply to general logic programs, because a program
with negation may have many minimal Herbrand models, and the corresponding
operator may have many minimal fixed points. Apt, Blair and Walker and,
independently, Van Gelder, introduced a class of general logic programs
which disallow certain combinations of recursion and negation, and showed
how to use the fixed point approach to define a declarative semantics for
such programs. Using the concept of circumscription, we extend the minimal
model approach to stratified programs and show that it leads to the same
semantics.
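For the negation-free case mentioned in the abstract, the Van Emden-Kowalski construction is easy to sketch concretely (ground programs only; the edge/path program below is a standard textbook example, not taken from the paper):

```python
# Minimal Herbrand model of a negation-free program as the least fixed
# point of the immediate-consequence operator T_P.
# Each rule is (head, [body atoms]); facts have an empty body.
program = [
    ("edge(a,b)", []),
    ("edge(b,c)", []),
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["edge(a,b)", "path(b,c)"]),
]

def tp(interpretation):
    """One application of T_P: every head whose body holds."""
    return {h for h, body in program
            if all(b in interpretation for b in body)}

def least_fixed_point():
    """Iterate T_P from the empty set until nothing new is derived."""
    model = set()
    while True:
        nxt = tp(model)
        if nxt == model:
            return model
        model = nxt

print(sorted(least_fixed_point()))   # path(a,c) is derived
```

With negation in rule bodies this iteration is no longer monotone and may have many minimal fixed points, which is exactly the difficulty the stratification restriction and the circumscriptive treatment address.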
------------------------------
End of AIList Digest
********************
∂19-Nov-86 0234 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #261
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 19 Nov 86 02:34:27 PST
Date: Tue 18 Nov 1986 22:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #261
To: AIList@SRI-STRIPE.ARPA
AIList Digest Wednesday, 19 Nov 1986 Volume 4 : Issue 261
Today's Topics:
Queries - PEARL AI Package & Contextual Knowledge and Multilayer Learning &
Logic Programming in APL & Cornell Synthesizer/Generator,
Literature - Books Available for Review,
Logic Programming - Nonmonotonic Reasoning and Truth Maintenance Systems,
Science Fiction - Sentient Computers,
Education - Cognitive Science Degree Programs,
Ethics - AI and the Arms Race
----------------------------------------------------------------------
Date: Thu, 13 Nov 86 14:25:45 est
From: rochester!tropix!dls@seismo.CSS.GOV (David L. Snyder )
Reply-to: tropix!dls@seismo.CSS.GOV (David L. Snyder )
Subject: pearl AI package
A few questions about pearl (Package for Efficient Access to
Representations in Lisp):
Can anyone tell me what, if any, activity is going on with pearl these
days? (Is the pearl-bugs mailing list still active?) Has anyone used
it for non-toy problems? Any chance it'll be ported to Common
Lisp? Is there something better that supersedes it (and is in the
public domain)?
Thanks!
P.S. Try tropix!dls@rochester as an arpa address if other alternatives fail.
------------------------------
Date: Wed, 12 Nov 86 13:50 ???
From: MUKHOP%RCSJJ%gmr.com@RELAY.CS.NET
Subject: Contextual Knowledge and Multi-layer Learning
I read with interest the abstract for Richard M. Keller's talk, "The Role
of Explicit Contextual Knowledge in Learning Concepts to Improve Performance"
(V4 #258), part of which is reproduced below:
> This dissertation addresses some of the difficulties encountered
> when using artificial intelligence-based, inductive concept learning
> methods to improve an existing system's performance. The underlying
> problem is that inductive methods are insensitive to changes in the
> system being improved by learning. This insensitivity is due to the
> manner in which contextual knowledge is represented in an inductive
> system. Contextual knowledge consists of knowledge about the context
> in which concept learning takes place, including knowledge about the
> desired form and content of concept descriptions to be learned (target
> concept knowledge), and knowledge about the system to be improved by
> learning and the type of improvement desired (performance system
> knowledge).
> ...
> To investigate the thesis, this study introduces an alternative
> concept learning framework -- the concept operationalization framework
> -- that requires various types of contextual knowledge as explicit
> inputs.
>...
Isn't this described in the literature as a two-layer learning system
(multi-layer in the general case) of which Samuel's checkers player is
one of the earliest examples? What are the differences, if any?
Uttam Mukhopadhyay
GM Research Labs
------------------------------
Date: Mon, 17 Nov 86 14:45 EST
From: McHale@RADC-MULTICS.ARPA
Subject: Logic programming in APL
A while ago I heard of a system (from Johns Hopkins, I think) that combined
logic programming with APL, called APLLog. I would appreciate any pointers
anyone could give me concerning this language. (User comments, software
availability, underlying hardware, point of contact, etc.)
Michael L. Mc Hale
RADC/COES
Griffiss AFB, NY 13441-5700
arpa% McHale RADC-Multics
------------------------------
Date: Tue, 18 Nov 86 16:00:56 pst
From: Neil O'Neill <oneill@lll-tis-b.ARPA>
Subject: Cornell Synthesizer/Generator (Need help)
Does anyone have experience in running the Cornell Synthesizer/Generator?
We could use some help with it if you know how to use it.
Neil J. O'Neill
ARPA: oneill@lll-tis-b.ARPA
UUCP: {ihnp4,dual,sun}!lll-lcc!styx!oneill
------------------------------
Date: Tue, 18 Nov 86 08:01:20 -0500
From: sriram@ATHENA.MIT.EDU
Subject: Books available for review
The following books are available for review for the International
Journal of AI in Engineering. If you are interested in acquiring
a copy (and reviewing it too), send mail to sriram@athena.mit.edu with
your US mailing address. Please note that I have only single copies,
and books will be handed out on a first-come basis.
Machine Interpretation of Line Drawings
Kokichi Sugihara
MIT Press
Introduction to Robotics: Mechanics and Control
J.J. Craig
Addison-Wesley
Parallel Distributed Processing, Vol. 1
D.E. Rumelhart, J.L. McClelland and the PDP Research Group
MIT Press
Parallel Distributed Processing, Vol. 2
J.L. McClelland, D.E. Rumelhart and the PDP Research Group
MIT Press
The Acquisition of Syntactic Knowledge
R.C. Berwick
MIT Press
Computational Models of Discourse
Edited by M. Brady and R. Berwick
MIT Press
Artificial Intelligence: The Very Idea
J. Haugeland
MIT Press
Systems That Learn
D.N. Osherson, M. Stob and S. Weinstein
MIT Press
Robot Motion: Planning and Control
Edited by M. Brady, J.M. Hollerbach, T.L. Johnson and T. Lozano-Perez
MIT Press
The Measurement of Visual Motion
Ellen Catherine Hildreth
MIT Press
A Geometric Investigation of Reach
J.V. Korein
MIT Press
Robot Hands and the Mechanics of Manipulation
M.T. Mason and J.K. Salisbury, Jr.
MIT Press
Theory and Practice of Robots and Manipulators
Edited by A. Morecki, G. Bianchi and K. Kedzior
MIT Press
Robot Manipulators: Mathematics, Programming, and Control
Richard P. Paul
MIT Press
Expert Systems: Techniques, Tools and Applications
P. Klahr and D. Waterman
Addison-Wesley
------------------------------
Date: Thu, 13 Nov 86 10:06:11 est
From: Randy Goebel LPAIG
<rggoebel%watdragon.waterloo.edu@RELAY.CS.NET>
Subject: Re: Non-monotonic reasoning and truth maintenance systems
> "These systems don't usually have any deductive power at all,
> they are merely constraint satisfaction devices."
> --David Etherington
>
> I am confused by this last sentence. Isn't constraint satisfaction
> a kind of inference? deKleer's ATMS and McAllester's RUP handle
> large portions (maybe all?) of propositional logic.
>
> --Tom Dietterich
If one views constraint satisfaction as incremental model elimination,
then there is a kind of inference going on, e.g., the number of models
for p(X) & q(X) is reduced by adding the new constraint r(X), to get
p(X) & q(X) & r(X). One can further see constraint satisfaction as
inference by looking at Prolog puzzle solutions, where a list of
constraints is posed as a goal, and the resolution prover must find
a satisfying substitution; there is search involved, but satisfying
substitutions are consequences of the axioms. Perhaps the best
intuition about ``truth maintenance''-like systems is that they provide
what is necessary for efficiently locating derivation steps that relied
on assumptions. It's probably natural that any actual implementation
blurs the distinction between the derivation maintenance and retrieval
subsystem, and the prover that actually applies the inference rules to
build derivations.
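The model-elimination view is easy to make concrete. The predicates and the small domain below are invented for illustration; the point is only that each added constraint shrinks the set of satisfying models, which is a kind of inference:

```python
# Constraint satisfaction as incremental model elimination: candidate
# bindings for X over a finite domain, pruned by each added goal.
domain = range(10)
p = lambda x: x % 2 == 0     # illustrative predicates, not from the post
q = lambda x: x < 8
r = lambda x: x > 2

def models(*constraints):
    """All values of X satisfying every constraint (the 'models')."""
    return [x for x in domain if all(c(x) for c in constraints)]

print(models(p, q))       # models of p(X) & q(X)
print(models(p, q, r))    # adding r(X) eliminates some of them
```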
Randy Goebel
------------------------------
Date: 12 Nov 86 15:43:00 GMT
From: husc6!necntc!mirror!gabriel!inmet!sebes@eddie.mit.edu
Subject: Re: Canonical list of sentient computer
In the "sentient computer" class, there is Frank
Herbert's ←Destination Void←, which I recall as
being notable not only for being a pretty good
novel, but also for not appearing at all ridiculous
or dated even 20-30 years after writing. In fact,
some of the ideas mentioned are more in style now than
then, or even a few years ago.
--John Sebes
------------------------------
Date: 12 Nov 86 16:03:00 GMT
From: husc6!necntc!mirror!gabriel!inmet!sebes@eddie.mit.edu
Subject: Re: choosing grad schools
Stanford has a program along the lines of that described
at UCSD. The participating departments are CS, linguistics,
philosophy, and psychology. There is a list of courses
offered in those departments that count toward a course
requirement for a phd in 'X and Cognitive Science' (I am
not sure that that is the wording, but it is the gist).
In addition to whatever course work you need to do in
your department, you must take some number of those
approved courses, with a certain distribution between
your dept and the other three. Depending on your dept
and how much course work you need to do there, it could
be quite an undertaking. Also, it is a relatively recent
thing, and I'm not sure how many people are actually
involved in it.
I found out about it simply by calling one of the depts
and asking if they had any cogsci organization.
Stanford also has a well-funded research center, the
Center for the Study of Language and Intelligence
(or something similar that spells CSLI ("Cicely")).
--John Sebes
------------------------------
Date: Sun, 16 Nov 86 23:22:58 PST
From: talmy%cogsci.Berkeley.EDU@berkeley.edu (Len Talmy)
Subject: Cognitive Science degree programs at UC Berkeley
In response to Don Norman's call for information, no, UC Berkeley does
not have any degree-granting program in Cognitive Science either at the
undergraduate or at the graduate level. So far, the most a student has
been able to do is to make use of the special institutional apparatus
for setting up a personally tailored degree program. However, we are
now actively working on setting up a degree program at the undergraduate
level. Even such a modest goal should take from one to two years,
after all the committees have been formed and have analyzed the proposal.
It was felt that a graduate degree program ought to be established only
after an undergraduate one was in place and after some demand for
Cognitive Science Ph.D.'s had developed. But the "Doctorate in X and
Cognitive Science" formula is an interesting intermediate possibility,
and we'll look into it.
Len Talmy (coordinator, cognitive science program)
talmy@cogsci.berkeley.edu
------------------------------
Date: Tue, 18 Nov 86 12:19:58 est
From: "B. Lindsay Patten" <shen5%watdcsu.waterloo.edu@RELAY.CS.NET>
Reply-to: "B. Lindsay Patten"
<shen5%watdcsu.waterloo.edu@RELAY.CS.NET>
Subject: Re: AI and the Arms Race
In article <LIN.12253338541.BABYL@XX.LCS.MIT.EDU> LIN@XX.LCS.MIT.EDU writes:
>[I posted a message from AILIST on ARMS-D, and got back this reply.]
>From: ihnp4!utzoo!henry at ucbvax.Berkeley.EDU
>Re: Professionals and Social Responsibility for the Arms Race
[some valid objections to arguments made by Dr. Weizenbaum on problems with AI]
>> 8) every researcher should assess the possible end use of
>> their own research, and if they are not morally comfortable
>> with this end use, they should stop their research...
↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
>> He specifically referred to research in machine vision, which he
>> felt would be used directly and immediately by the military for
>> improving their killing machines...
>
>I'm afraid this is muddy thinking again. *All* technology has military
>applications.
[examples of good things that came out of military research]
>It's hard
>to conceive of a field of research which doesn't have some kind of military
>application.
>
> Henry Spencer @ U of Toronto Zoology
> {allegra,ihnp4,decvax,pyramid}!utzoo!henry
This is by far the most common objection I've heard since Dr. Weizenbaum's
lecture and one which I think avoids the point. Read the first three lines
of point 8 above. The real point Dr. Weizenbaum was trying to make (in my
opinion) was that we should weigh the good and bad applications of our work
and decide which outweighs the other. The examples that he gave were just
areas in which he personally believed the bad applications outweighed the
good. He was very explicit that he was just presenting HIS personal opinions
on the merits of these applications. Basically he said that if you feel
your work will do more harm than good you should find another area to work in.
My objection to his talk is that he seemed to want to weigh entire applications
against one another. It seems to me that we should be examining the relative
impact of our research in the applications which we approve of and in those we
object to.
Lindsay Patten
|Cognitive Engineering Group (519) 746-1299|
|Pattern Analysis and Machine Intelligence Lab lindsay@watsup|
|University of Waterloo {decvax|ihnp4}!watmath!watvlsi!watsup!lindsay|
------------------------------
End of AIList Digest
********************
∂20-Nov-86 0143 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #262
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 20 Nov 86 01:42:59 PST
Date: Wed 19 Nov 1986 21:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #262
To: AIList@SRI-STRIPE.ARPA
AIList Digest Thursday, 20 Nov 1986 Volume 4 : Issue 262
Today's Topics:
Conferences - NCAI Exhibit Program &
ACM Principles of Database Systems Exhibit Program &
AI Papers in Upcoming Simulation Conferences,
Journals - IEEE Expert Call For Financial Expert Systems &
BBS Call For Commentators in Vision Modeling
----------------------------------------------------------------------
Date: Fri 14 Nov 86 08:42:25-PST
From: AAAI <AAAI-OFFICE@SUMEX-AIM.ARPA>
Subject: Special Invitation to Universities and Research Institutes
The AAAI would like to extend a special invitation to academic
institutions and non-profit research laboratories to participate in
the Exhibit Program at the Sixth National Conference on Artificial
Intelligence, July 14-16, 1987 in Seattle, Washington. It is
important to communicate what universities and laboratories are doing
and demonstrate your research efforts at the conference.
Last year we initiated this new addition, and it was considered one
of the highlights of the 1986 conference.
AAAI will provide each institution with one 10'x10' booth free, room
to describe your demonstration in the Exhibit Guide, and assist with
your logistical arrangements. Some direct costs are involved which
the AAAI cannot provide assistance with. Those costs include shipping
equipment to the site, telephone lines (communication (required) or
computer), housing, and others. We can direct interested groups to
vendors who may be able to assist with equipment needs. Last year,
many hardware vendors donated equipment for the university demonstrations
and will continue with this practice next year.
We hope you can join us in Seattle and help disseminate the latest
research results to our conference attendees.
If you or your department are interested in participating, please
contact:
Steven Taglio
AAAI
445 Burgess Drive
Menlo Park, CA 94025
(415) 328-3123
AAAI-Office@sumex-aim.arpa
------------------------------
Date: Thu, 13 Nov 86 22:39:02 PST
From: Moshe Vardi <vardi@navajo.stanford.edu>
Subject: ACM Symp. on Principles of Database Systems
THE SIXTH ACM SYMPOSIUM ON PRINCIPLES OF DATABASE SYSTEMS
Call for Exhibits
The Sixth ACM Symposium on Principles of Database Systems will
take place between March 23 and March 25, 1987, at the Bahia
Resort Hotel in San Diego. The symposium will cover new develop-
ments in both theoretical and practical aspects of database and
knowledge-based systems. Previous symposia have been attended by
researchers from both industry and academia. For the first time,
this year the symposium will include exhibits of state-of-the-art
products from industry. If you have a product you would like to
exhibit, please send a brief description by December 15, 1986,
to:
Victor Vianu
Local Arrangements Chairman, PODS '87
EECS Department, MC-014
Univ. of California at San Diego
La Jolla, California 92093
(619) 534-6227
vianu@sdcsvax.ucsd.edu
Since space is limited, exhibits will be selected based on the
proposals received. Your contribution would be greatly appreciated.
------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: AI at upcoming conferences (simulation)
1987 Society for Computer Simulation Multiconference
Modeling and Simulation on Microcomputers
Individual Face Classification by Computer Vision
Robert A. Campbell, Scott Cannon, Greg Jones, Neil Morgan, Utah State
University
AI and Simulation
Preliminary Screening of Wastewater Treatment Alternatives Using Personal
Consultant Plus
Giles G. Patry, Bruce Gall, McMaster University
The impact of embedding AI tools in a control system simulator
Norman R. Nielson SRI International
An Expert System for the Controller
James A. Sena, L. Murphy Smith, Texas A&M University
Application of Artificial Intelligence Techniques to Simulation
Pauline A. Langen, Carrier Corporation
The Expert System Applicability Question
Louis R. Gieszi
An Intelligent Interface for Continuous System Simulation
Wanda M. Austin, The Aerospace Corporation; Behrokh Khoshnevis, University of
Southern California
Logic Programming and Discrete Event Simulation
Robert G. Sargent, Ashvin Radiya, Syracuse University
Expert Systems for Interactive Simulation of Computer System Dynamics
Axel Lehmann, University of Karlsruhe
An Automated Simulation Modeling System Based on AI Techniques
Behrokh Khoshnevis, An-Pin Chen, University of Southern California
Design of a Flexible Extendible Modeling Environment
Robert J. Pooley University of Edinburgh
Prolog for Simulation
Expert System Shell with System Simulation Capabilities
Ivan Futo, Computer Research Institute
Languages for Distributed Simulation
Brian Unger, Xining Li, University of Calgary
Process Oriented Simulation in Prolog
Jean Vaucher, University of Montreal
Application of Artificial Intelligence Techniques to Simulation
Pauline A. Langen, Carrier Corporation
Computer Integrated Manufacturing Systems and Robotics
A Data Modeling Approach to Improve System's Intelligence in Automated
Manufacturing
Lee-Eng Shirley Lin, Yun-Baw Lin, Tamkang University
KARMA - A Knowledge-Based Robot Manipulation Graphics Simulation
Richard H. Kirschbrown, Consultant
Development of questions-answers simulator for real-time scheduling and control
in flexible manufacturing system using Prolog
Lee-Eng Shirley Lin, Tamkang University, Chang Yung Lui, National Sun Yat-Sen
University
Simulation of uncertainty and product structure in MRP
Louis Brennan, Surendra Mohan Gupta, Northeastern University
__________________________________________________________________________
The University of Arizona Fourth Symposium on Modeling and Simulation
Methodology
January 19-23, 1987
AI and Simulation I, R. V. Reddy
AI and Simulation II, B. P. Zeigler
(Object Oriented/AI Programming, Combining Discrete Event and Symbolic Models,
Hierarchical, Modular Modelling/Multiprocessor Simulation)
AI and Simulation III, T. I. Oren
Cognizant Simulation Systems, AI and Quality Assurance Methodology
AI and Simulation IV
Environments for AI and Simulation, Interfacing Lisp Machines and Simulation
Engines
Special Sessions on Model-based Diagnosis and Expert Systems Training, Inductive
Modelling,
Goal Directed, Variable-Structure Models, AI and Simulation in Education
------------------------------
Date: 12 November 1986, 09:48:28 EST
From: "Chidanand V. Apte" <APTE@ibm.com>
Subject: Call for Papers - Financial Expert Systems (IEEE Expert)
CALL FOR PAPERS
---------------
IEEE EXPERT
Special Issue - Fall 1987
AI Applications in Financial Expert Systems
The Fall 1987 issue of IEEE EXPERT will be devoted to papers that
discuss the technical requirements imposed upon AI techniques for
building intelligent systems for financial applications and the
methodologies employed for the construction of such systems.
Requirements for submission of papers
-------------------------------------
Authors should submit their papers to the guest editors no later than
APRIL 1, 1987. Each submission should include one cover page and five
copies of the complete manuscript. The cover page should include the
name(s), affiliation(s), and complete address(es) of the author(s),
identification of the principal author, and a telephone number. Each
of the five copies of the manuscript should include a title and
abstract page (the title of the paper and a 100-word abstract
indicating the significance of the contribution) and the complete text
of the paper in English, including illustrations and references, not
exceeding 5000 words.
Topics of interest
------------------
Authors are invited to submit papers describing recent and novel
applications of AI techniques in the research and development of
financial expert systems. Topics (in the context of the domain) include,
but are not limited to: Automated Reasoning, Knowledge Representations,
Inference Techniques, Problem Solving Control Mechanisms, Natural
Language Front Ends, User Modeling, Explanation Methodologies, Knowledge
Base Debugging, Validation, and Maintenance, and System Issues in
Development and Deployment.
Guest Editors
--------------
Chidanand Apte (914-945-1024, Arpa: apte@ibm.com)
John Kastner (914-945-3821, Arpa: kastner@ibm.com)
IBM Thomas J. Watson Research Center
P.O. Box 218
Yorktown Heights, New York 10598
========
------------------------------
Date: Wed, 19 Nov 86 13:26:38 est
From: princeton!mind!harnad@seismo.CSS.GOV (Stevan Harnad)
Subject: Modeling vision: A call for commentators.
Keywords: connectionism, neural modeling, vision, robotics, neuroethology
This is an experiment in using the Net to find eligible commentators
for articles in the Behavioral and Brain Sciences (BBS), an
international, interdisciplinary journal of "open peer commentary,"
published by Cambridge University Press, with its editorial office in
Princeton NJ.
The journal publishes important and controversial interdisciplinary
articles in psychology, neuroscience, behavioral biology, cognitive science,
artificial intelligence, linguistics and philosophy. Articles are
rigorously refereed and, if accepted, are circulated to a large number
of potential commentators around the world in the various specialties
on which the article impinges. Their 1000-word commentaries are then
co-published with the target article, as is the author's response
to each. The commentaries consist of analyses, elaborations,
complementary and supplementary data and theory, criticisms and
cross-specialty syntheses.
Commentators are selected by the following means: (1) BBS maintains a
computerized file of over 3000 BBS Associates; the size of this group
is increased annually as authors, referees, commentators and nominees
of current Associates become eligible to become Associates. Many
commentators are selected from this list. (2) The BBS editorial office
does informal as well as formal computerized literature searches on
the topic of the target articles to find additional potential commentators
from across specialties and around the world who are not yet BBS Associates.
(3) The referees recommend potential commentators. (4) The author recommends
potential commentators.
We now propose to add the following source for selecting potential
commentators: The abstract of the target article will be posted in the
relevant newsgroups on the net. Eligible individuals who judge that they
would have a relevant commentary to contribute should contact me at the
e-mail address indicated at the bottom of this message, or should
write by normal mail to:
Stevan Harnad
Editor
Behavioral and Brain Sciences
20 Nassau Street, Room 240
Princeton NJ 08542
"Eligibility" usually means being an academically trained professional
contributor to one of the disciplines mentioned earlier, or to related
academic disciplines. The letter should indicate the candidate's
general qualifications as well as their basis for wishing to serve as
commentator for the particular target article in question. It is
preferable also to enclose a Curriculum Vitae. (This self-nomination
format may also be used by those who wish to become BBS Associates,
but they must also specify a current Associate who knows their work
and is prepared to nominate them; where no current Associate is known
by the candidate, the editorial office will send the Vita to
appropriate Associates to ask whether they would be prepared to
nominate the candidate.)
BBS has rapidly become a widely read and very influential forum in the
biobehavioral and cognitive sciences. A recent recalculation of BBS's
"impact factor" (ratio of citations to number of articles) in the
American Psychologist [41(3) 1986] reports that already in its fifth
year of publication BBS's impact factor had risen to become the highest of
all psychology journals indexed as well as 3rd highest of all 1300 journals
indexed in the Social Sciences Citation Index and 50th of all 3900 journals
indexed in the Science Citation index, which indexes all the scientific
disciplines.
The following is the abstract of the second forthcoming article on
which BBS invites self-nominations by potential commentators. (Please
note that the editorial office must exercise selectivity among the
nominations received so as to ensure a strong and balanced cross-specialty
spectrum of eligible commentators.)
-----
NEUROETHOLOGY OF RELEASING MECHANISMS: PREY-CATCHING IN TOADS
Joerg-Peter Ewert
Neuroethology Department, FB 19,
University of Kassel
D-3500 Kassel
Federal Republic of Germany
ABSTRACT:
"Sign stimuli" elicit specific patterns of behavior when an
organism's motivation is appropriate. In the toad, visually released
prey-catching involves orienting toward the prey, approaching,
fixating and snapping. For these action patterns to be selected and
released, the prey must be recognized and localized in space. Toads
discriminate prey from nonprey by certain spatiotemporal stimulus
features. The stimulus-response relations are mediated by innate
releasing mechanisms (RMs) with recognition properties partly
modifiable by experience. Striato-pretecto-tectal connectivity
determines the RM's recognition and localization properties whereas
medial pallio-thalamo-tectal circuitry makes the system sensitive to
changes in internal state and to prior history of exposure to stimuli.
RMs encode the diverse stimulus conditions involving the same prey
object through different combinations of "specialized" tectal neurons,
involving cells selectively tuned to prey features. The prey-selective
neurons express the outcome of information processing in functional
units consisting of interconnected cells. Excitatory and inhibitory
interactions among feature-sensitive tectal and pretectal neurons
specify the perceptual operations involved in distinguishing prey
from its background, selecting its features, and discriminating it
from predators. Other connections indicate stimulus location. The
results of these analyses are transmitted by specialized neurons
projecting from the tectum to bulbar/spinal motor systems, providing a
sensorimotor interface. Specific combinations of projective neurons --
mediating feature- and space-related messages -- form "command
releasing systems" that activate corresponding motor pattern
generators for appropriate prey-catching action patterns.
-----
Potential commentators should send their names, addresses, a description of
their general qualifications and their basis for seeking to comment on
this target article in particular to the address indicated earlier or
to the following e-mail address:
Stevan Harnad
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
(609)-921-7771
------------------------------
End of AIList Digest
********************
∂20-Nov-86 0346 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #263
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 20 Nov 86 03:46:18 PST
Date: Wed 19 Nov 1986 21:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #263
To: AIList@SRI-STRIPE.ARPA
AIList Digest Thursday, 20 Nov 1986 Volume 4 : Issue 263
Today's Topics:
Philosophy - D/A Distinction and Symbols &
Machine Intelligence/Consciousness &
Philosophy of Mind Stuff
----------------------------------------------------------------------
Date: 12 Nov 86 18:01:31 GMT
From: trwrb!aero!marken@ucbvax.Berkeley.EDU (Richard Marken)
Subject: D/A Distinction and Symbols
In article <3490001@hpfcph.HP.COM> Bob Myers makes an eloquent debut in the
D/A distinction debate with the following remarks:
>The difference between "analog" and "digital" is nothing more than the
>difference between a table of numbers and the corresponding graph; in a
>digital representation, we assign a finite-precision number to indicate the
>value of something (usually a signal) at various points in time (or frequency,
>or space, or whatever). An "analog" representation is just that - we choose
>to view some value (voltage, current, water pressure, anything) as hopefully
>being a faithful copy of something else. An excellent example is a
>microphone, which converts a varying pressure into an "analogous" signal -
>a varying voltage. This distinction has nothing to do with the accuracy of
>the representation obtained, the technology used to obtain, or any of a host
>of other items that come to mind when we think of the terms "analog" and
>"digital".
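[Moderator's illustration, not part of Myers's posting: his table-vs-graph
distinction can be sketched in a few lines of modern code. A "digital"
representation is a table of finite-precision samples at discrete points; an
"analog" representation is a continuously defined faithful copy. The gain and
sample counts below are arbitrary choices for the sketch.]

```python
import math

def digital_rep(signal, n_samples=8, precision=2):
    """Table of finite-precision numbers sampled at discrete time points."""
    return [round(signal(t / n_samples), precision) for t in range(n_samples)]

def analog_rep(pressure, gain=0.5):
    """A voltage chosen as a faithful (scaled) copy of the pressure signal."""
    return lambda t: gain * pressure(t)   # defined at *every* t, no quantization

# One cycle of varying "sound pressure", as at a microphone diaphragm.
pressure = lambda t: math.sin(2 * math.pi * t)

table = digital_rep(pressure)    # finite table, e.g. [0.0, 0.71, 1.0, ...]
voltage = analog_rep(pressure)   # a continuous "analogous" signal
```

Note that nothing here turns on accuracy or technology: the digital table could
use more samples or more digits, and the analog copy a different gain, without
changing which kind of representation each one is.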
We haven't heard for some time from the usually prolific Dr. Harnad,
who started the debate with a request for definitions of the
A/D distinction. It seems to me that the topic was broached in
the first place because Harnad had some notion that "analog" or
"non-symbolic" robots are, in some way, a better subject for a test
of machine intelligence (a la Turing) than the "symbol manipulator"
envisioned by Turing himself.
Whether this was where Harnad was going or not, I would like to
make one point. It seems to me, based on the cogent A/D distinction
made by Myers, that both analog and digital representations
are "symbolic". In both cases, some variable (number, signal level)
is used to represent another. The relationship between the
variables is ←arbitrary← in, potentially, two ways: 1) the
nature of the analog signal or number used to represent the
other variable is arbitrary -- other types of signals
or other number values could have also been used. Using electricity
to represent sound pressure level is arbitrary
(though, possibly, a good engineering decision) -- sound pressure
level could have been represented by the height of a needle (hey,
it is) or by water pressure or whatever.
2) the values of the analog (or digital) variable used to
represent the values of another variable are, in principle, also
arbitrary. Randomly different voltages could be used to represent
different sound pressure levels. This would be difficult (and
possibly ridiculous) to try to implement, but it could be done
(e.g., where changes over time in the variable being
represented are very slow).
Maybe the best way to put this is as follows:
in digital or analog representation we have some variable, y, that
represents some other, x, so that y = f(x). Regardless of the analog
or digital characteristics of x and y, y "symbolizes" x because
1) another variable, y', could be used to represent x (so y is arbitrary)
and 2) y could be defined by a different function, f', so f is arbitrary.
I think 1) and 2) capture what is meant when it is said that
symbols are arbitrary representations of events. Symbols are not
completely arbitrary. Once y and f are selected you've got to
stick with them (in the context of your application) or the
symbol system is useless. Thus, the sounds that we use to
represent events (f) and the fact that we use sounds (y) is an
arbitrary property of our language symbol system. But now that
we've settled on it we've got to stick with it for it to be
useful (for communication). We could (like humpty-dumpty) keep
changing the relationship between words and events but this kind
of arbitrariness would make communication impossible.
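[Moderator's illustration, not part of Marken's posting: his two senses of
arbitrariness can be made concrete with a small sketch. Both the carrier
variable y (voltages vs. strings) and the mapping f are free choices, but once
chosen, sender and receiver must share the convention for the symbols to work.
The event names and encodings below are invented for the example.]

```python
events = ["prey", "predator", "nothing"]

# Choice 1: represent events as voltages via one mapping f ...
f = {"prey": 1.0, "predator": 2.0, "nothing": 0.0}
# Choice 2: ... or as sounds via a completely different mapping f'
f_prime = {"prey": "buzz", "predator": "hiss", "nothing": "hum"}

def communicate(event, encode, decode):
    """Round-trip an event; works iff both sides share the same (y, f)."""
    return decode[encode[event]]

# Either arbitrary convention supports communication, as long as the
# receiver inverts the *same* f the sender used.
for encoding in (f, f_prime):
    decoding = {y: x for x, y in encoding.items()}
    assert all(communicate(e, encoding, decoding) == e for e in events)
# A Humpty-Dumpty sender who re-maps f on every message would break this.
```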
Conclusion: I don't believe that the A/D distinction is a distinction
between non-symbol vs symbol systems. If there is a difference between
robots (that deal with "real world" variables) and turing machines
(that deal with artificial symbol systems) I don't believe it can turn
on the fact that one deals with symbols and the other doesn't. They
both deal with symbols. So what is the difference? I think there
is a difference between robots (of certain types) and turing machines--
and a profound one at that. But that's another posting.
--
Disclaimer-- The opinions expressed are my own. My employer, mother,
wife and teachers should not be held responsible -- though some tried
valiantly to help.
Richard Marken Aerospace Corp.
(213) 336-6214 Systems Simulation and Analysis
P.O. Box 92957
M1/076
Los Angeles, CA 90009-2957
marken@aero.ARPA
------------------------------
Date: Fri, 7 Nov 86 15:31:31 +0100
From: mcvax!ukc!rjf@seismo.CSS.GOV
Subject: Re: machine intelligence/consciousness
There has been some interesting discussion (a little while back now)
on the possibility of 'truly' intelligent machines; in particular
the name of Nagel has been mentioned, and his paper 'What is it like to be
a bat?'.
This paper is not, however, strictly relevant to a discussion of machine
intelligence, because what Nagel is concerned with is not intelligence, but
consciousness. That these are not the same may be realised on a little
contemplation. One may be most intensely conscious while doing little or no
cogitation. To be intelligent - or, rather, to use intelligence - it seems
necessary to be conscious, but the converse does not hold - that to be
conscious it is necessary to be intelligent. I would suggest that the former
relationship is not a necessary one either - it just so happens that we are
both conscious and (usually) intelligent.
Animals probably are conscious without being intelligent. Machines may
perhaps be intelligent without being conscious. If these are defined
separately, the problem of the intelligent machine becomes relatively trivial
(though that may seem too good to be true): an intelligent machine is capable
of doing that which would require intelligence in a person, eg high level
chess. On the other hand, it becomes obvious that what really exercises the
philosophers and would-be philosophers (I include myself) is machine
consciousness. As for that:
Another article in the same collection by Nagel (Mortal Questions, 1978)
takes his ideas on consciousness somewhat further. A summary of the
arguments developed in 'Subjective and Objective' could not possibly do them
justice (anyone interested is heartily recommended to obtain a copy), so only
the conclusions will be mentioned here. Briefly, Nagel views subjectivity as
irreducible to objectivity, indeed the latter derives from the former, being
a corrected and generalised version of it. A maximally objective view of the
world must admit the reality of subjectivity, in the minimal sense that
individuals do hold differing views, and there is no better - or worse -
judge of which view is more truly objective, than another individual.
This view does not to any extent denigrate the practicality of objective
methods (the hypothesis of objective reality is proven by the success of the
scientific method), but nor is it possible to deny the necessity of
subjectivity in some situations, notably those directly involving other
people. It is surely safe to say that no new objective method will ever
substitute for human relationships. And the reason that subjectivity works
in this context is because of what Nagel terms 'intersubjectivity' -
individuals identifying with each other - using their imaginations creatively
and for the most part accurately to put themselves in another person's shoes.
So what, really, is consciousness? According to Nagel, a thing is conscious
if and only if it is like something to be that thing. In other words, when
it may be the subject (not the object!) of intersubjectivity. This accords
with Minsky (via Col. Sicherman): 'consciousness is an illusion to itself but
a genuine and observable phenomenon to an outside observer...' Consciousness
is not self-consciousness, not consciousness of being conscious, as some have
thought, but is that with which others can identify. This opens the way to
self-awareness through a hall of mirrors effect - I identify with you
identifying with me... And in the negative mode - I am self-conscious when I
feel that someone is watching me.
It may perhaps be supposed that the concept of consciousness evolved as part
of a social adaptation - that those individuals who were more socially
integrated, were so at least in part because they identified more readily,
more intelligently and more imaginatively with others, and that this was a
successful strategy for survival. To identify with others would thus be an
innate behavioural trait.
So consciousness is at a high level (the top?) in software, and is, moreover,
not supported by a single unit of hardware, but by a social network. In its
development, at least. I, or anyone else, might suppose that I am still
conscious when alone, but not without (the supposer, whether myself or
another) having become conscious in a social context. When I suppose myself
to be conscious, I am imagining myself outside myself - taking the point of
view of an (hypothetical) other person. An individual - man or machine -
which has never communicated through intersubjectivity might, in a sense, be
conscious, but neither the individual nor anyone else could ever know it. A
community of machines sufficiently sophisticated that they identify with each
other in the same way as we do, may some day develop, but how could we decide
whether they were really conscious or not? They might know it, but we never
could - and that is neither pessimism nor prejudice, but a matter of
principle.
Subjectively, we all know that consciousness is real. Objectively, we have
no reason to believe in it. Because of the relationship between subjectivity
and objectivity, that position can never be improved on. Pragmatism demands
a compromise between the two extremes, and that is what we already do, every
day, the proportion of each component varying from one context to another.
But the high-flown theoretical issue of whether a machine can ever be
conscious allows no mere pragmatism. All we can say is that we do not know,
and, if we follow Nagel, that we cannot know - because the question is
meaningless.
(Technically, the concept of two different but equally valid ways of seeing,
in this case subjectively and objectively, is a double aspect theory; the
dichotomy lies not in the nature of reality, but in our perception. Previous
double aspect theories, interestingly consistent with this one, have been
propounded by Spinoza - regarding our perception of our place within nature -
and Strawson - on the concept of a person. I do not have the full references
to hand.)
Any useful concepts among those foregoing probably derive from Nagel, any
misleading ones from myself; none from my employers.
Rob Faichney
------------------------------
Date: 18 Nov 86 08:30:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: philosophy of mind stuff (get it?)
Can't resist a few more go-rounds with S. Harnad. Lest the size of these
messages increase exponentially, I'll try to avoid re-hashing old
issues and responding to side-issues...
> Harnad:
> I agree that scientific inference is grounded in observed correlations.
> But the primary correlation in this special case is, I am arguing, between
> mental states and performance. That's what both our inferences and our
> intuitions are grounded in. The brain correlate is an additional cue, but only
> inasmuch as it agrees with performance.
> ...in ambiguous
> cases, behavior was and is the only rational arbiter. Consider, for
> example, which way you'd go if (1) an alien body persisted in behaving like a
> clock-like automaton in every respect -- no affect, no social interaction,
> just rote repetition -- but it DID have something that was indistinguishable
> (on the minute and superficial information we have) from a biological-like
> nervous system), versus (2) if a life-long close friend of yours had
> to undergo his first operation, and when they opened him up, he turned
> out to be all transistors on the inside. I don't set much store by
> this hypothetical sci-fi stuff, especially because it's not clear
> whether the "possibilities" we are contemplating are indeed possible. But
> the exercise does remind us that, after all, performance capacity is
> our primary criterion, both logically and intuitively, and its
> black-box correlates have whatever predictive power they may have
> only as a secondary, derivative matter. They depend for their
> validation on the behavioral criterion, and in cases of conflict,
> behavior continues to be the final arbiter.
I think I may have been tacitly conceding the point above, which I
now wish to un-concede. Roughly speaking, I think my (everyone's)
epistemological position is as follows: I know I have a mind. In
order to determine if X has a mind I've got to look for analogous
external things about X which I know are causally connected with mind
in *my own* case. I naively know (and *how* do I know this??) that large
parts of my performance are an effect of my mind. I scientifically
know that my mind depends on my brain. I can know this latter
correlation even *without* performance correlates, eg, when the dentist
puts me under, I can directly experience my own loss of mind which
results from loss of whatever brain activity. (I hope it goes
without saying that all this knowledge is just regular old
reliable knowledge, but not necessarily certain - ie I am not
trying to respond to radical skepticism about our everyday and
scientific knowledge, the invocation of deceptive dentists, etc.)
I'll assume that "mind" means, roughly, "conscious intelligence".
Also, assume throughout of course that "brain" is short-hand for
"brain activity known (through usual neuro-science techniques) to be
necessary for consciousness".
Now then, armed with the reasonably reliable knowledge that in my own
case, my brain is a cause of my mind, and my mind is a cause of my
performance, I can try to draw appropriate conclusions about others.
Let's take 4 cases:
1. X1 has brains and performance - ie another normal human. Certainly
I have good reason to assume X1 has a mind (else why should similar
causes and effects be mediated by something so different from that
which mediates in my own case?)
2. X2 has neither brains nor performance - and no mind.
3. X3 has brains, but little/no performance - eg a case of severe
retardation. Well, there doesn't seem much reason to believe that
X3 has intelligence, and so is disqualified from having mind, given
our definition. However, it is still reasonable to believe that
X3 might have consciousness, eg can feel pain, see colors, etc.
4. X4 has normal human cognitive performance, but no brains, eg the
ultimate AI system. Well, no doubt X4 has intelligence, but the issue
is whether X4 has consciousness. This seems far from obvious to me,
since I know in my own case that brain causes consciousness causes
performance. But I already know, in the case of X4, that the causal
chain starts out at a different place (non-brain), even if it ends up
in the same place (intelligent performance). So I can certainly
question (rationally) whether it gets to performance "via
consciousness" or not.
If this seems too contentious, ask yourself: given a choice between
destroying X3 or X4, is it really obvious that the more moral choice
is to destroy X3?
Finally, a gedanken experiment (if ever there was one) - suppose
(a la sci-fi stories) they opened you up and showed you that you
really didn't have a brain after all, that you really did have
electronic circuits - and suppose it transpired that while most
humans had brains, a few, like yourself, had electronics. Now,
never doubting your own consciousness, if you *really* found that
out, would you not then (rationally) be a lot more inclined to
attribute consciousness to electronic entities (after all you know
what it feels like to be one of them) than to brained entities (who
knows what, if anything, it feels like to be one of them?)?
Even given *no* difference in performance between the two sub-types?
This shows that "similarity to one's own internal make-up" is always
going to be a valid criterion for consciousness, independent of
performance.
I make this latter point to show that I am a brain-chauvinist *only
insofar* as I know/believe that I *myself* am a brained entity (and
that my brain is what causes my consciousness). This really
doesn't depend on my own observation of my own performance at all -
I'd still know I had a mind even if I never did any (external) thing
clever.
To summarize: brainedness is a criterion, not only via the indirect
path of: others who have intelligent performance also have brains,
ergo brains are a secondary correlate for mind; but also via the
much more direct path (which *also* justifies performance as a
criterion): I have a mind and in my very own case, my mind is
closely causally connected with brains (and with performance).
> As to CAUSATION -- well, I'm
> sceptical that anyone will ever provide a completely satisfying account
> of the objective causes of subjective effects. Remember that, except for
> the special case of the mind, all other scientific inferences have
> only had to account for objective/objective correlations (and [or,
> more aptly, via] their subjective/subjective experiential counterparts).
> The case under discussion is the first (and I think only) case of
> objective/subjective correlation and causation. Hence all prior bets,
> generalizations or analogies are off or moot.
I agree that there are some additional epistemological problems, compared
to the usual cases of causation. But these don't seem all that daunting,
absent radical skepticism. We already know which parts of the brain
correlate with visual experience, auditory experience, speech competence,
etc. I hardly wish to understate the difficulty of getting a full
understanding, but I can't see any problem in principle with finding
out as much as we want. What may be mysterious is that at some level,
some constellation of nerve firings may "just" cause visual experience
(even as electric currents "just" generate magnetic fields). But we are
always faced with brute-force correlation at the end of any scientific
explanation, so this cannot count against brain-explanatory theory of mind.
> Perhaps I should repeat that I take the context for this discussion to
> be science rather than science fiction, exobiology or futurology. The problem
> we are presumably concerned with is that of providing an explanatory
> model of the mind along the lines of, say, physics's explanatory model
> of the universe. Where we will need "cues" and "correlates" is in
> determining whether the devices we build have succeeded in capturing
> the relevant functional properties of minds. Here the (ill-understood)
> properties of brains will, I suggest, be useless "correlates." (In
> fact, I conjecture that theoretical neuroscience will be led by, rather
> than itself leading, theoretical "mind-science" [= cognitive
> science?].) In sci-fi contexts, where we are guessing about aliens'
> minds or those of comatose creatures, having a blob of grey matter in
> the right place may indeed be predictive, but in the cog-sci lab it is
> not.
Well, I plead guilty to diverting the discussion into philosophy, and as
a practical matter, one's attitude in this dispute will hardly affect
one's day-to-day work in the AI lab. One of my purposes is a kind of
pre-emptive strike against a too-grandiose interpretation of the
results of AI work, particularly with regard to claims about
consciousness. Given a behavioral definition of intelligence, there
seems no reason why a machine can't be intelligent. But if "mind"
implies consciousness, it's a different ball-game when it comes to
claiming that the machine "has a mind".
My as-yet-unarticulated intuition is that, at least for people, the
grounding-of-symbols problem, to which you are acutely and laudably
sensitive, inherently involves consciousness, i.e., at least for us,
meaning requires consciousness. And so the problem of shoehorning
"meaning" into a dumb machine at least raises the issue of how
this can be done without making it conscious (or, alternatively,
how to go ahead and make it conscious). Hence my interest in your
program of research.
John Cugini <Cugini@NBS-VMS>
------------------------------
End of AIList Digest
********************
∂20-Nov-86 0527 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #264
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 20 Nov 86 05:27:12 PST
Date: Wed 19 Nov 1986 21:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #264
To: AIList@SRI-STRIPE.ARPA
AIList Digest Thursday, 20 Nov 1986 Volume 4 : Issue 264
Today's Topics:
Reviews - Spang Robinson Report & Recent Press Releases,
Open House - Invitation to USC AI & VLSI Demo,
Seminar - Trajectory Planning in Time-Varying Environments (MIT) &
An Expert System for Building Layout (CMU) &
New Paths in Knowledge Engineering (BBN)
----------------------------------------------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: Spang Robinson Report Summary
Spang Robinson Report, November 1986 Vol 2, No 11
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Discussion of AI Applications to Manufacturing
Carnegie Group has 90 percent of their toolkit
sales and 100 percent of their custom contracts
from manufacturing clients. A spin-off from Composition Systems is
selling tools to manufacturing customers. (Composition Systems
sells an expert system for newspaper layout [LEFF])
The Society of Manufacturing Engineers has formed an AI in Manufacturing
Advisory Committee (contact Michael Tew 313 271-1500).
Allen Bradley will have "knowledge base technologies" in their line
of factory controllers.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Software Review of Knowledge Craft
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
California Intelligence is selling products that add frame and blackboard
facilities to EXSYS, FRAME and TABLET respectively.
They can also be used with other AI tools and even to mix AI tools
with other applications. TABLET also allows the addition of variables
to expert systems that do not have them. FRAME will call an outside
program when the value of the slot is needed.
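The demand-driven behavior described for FRAME (calling an outside program
only when a slot's value is actually requested) can be sketched in a few
lines; the class and names below are illustrative assumptions, not FRAME's
actual interface.

```python
class Slot:
    """A frame slot whose value may be computed on demand."""
    def __init__(self, value=None, if_needed=None):
        self.value = value          # stored value, if already known
        self.if_needed = if_needed  # outside program to call when the value is missing

    def get(self):
        # Call the outside program only the first time the value is needed.
        if self.value is None and self.if_needed is not None:
            self.value = self.if_needed()
        return self.value

# Hypothetical usage: this slot's value comes from an external computation.
part = {"weight": Slot(if_needed=lambda: 42.0)}
weight = part["weight"].get()   # triggers the outside program, then caches
```

Repeated calls to get() return the cached value without re-invoking the
outside program.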
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Other Material:
Nihon Life, Japan, is developing an expert system that will assess the
insurability of people with various medical conditions.
Boeing has a Knowledge-Based System Center in Japan that provides
information to various companies operating in Japan, both American and Japanese.
Fujitsu is including an expert system to help choose algorithms
for image processing with its general purpose image processing system.
Fujitsu will be selling an AI tool for its 68010-based engineering
workstation.
Intellicorp will be entering the Japanese market on its own when its
contract with CSK expires this November.
SRI Cambridge will be organizing a multi-million dollar natural
language effort in England.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
BOOKs Reviewed
Portraits of Success: Impressions of Silicon Valley by Carolyn Caddes
On Machine Intelligence by Donald Michie
------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: summary of recent press releases
From the report on the IEEE Annual Briefing for the Media
James A. Sprowl of the Illinois Institute of Technology is developing
an automated client interviewing and legal document assembly system
which automates wills, contracts, pleadings, and other documents. It is
designed to assist nonspecialized attorneys.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Robert L. Degenhart AT&T Bell Labs, 201 - 564-4091
Bell Labs has developed an IC chip containing 256 electronic neurons.
It contains 25,000 transistors and 100,000 resistors on 1/4 square inch
of silicon. Retrieval speed is 400 nanoseconds, and Bell Labs
anticipates their use in image processors. Neural networks permit
greater chip density and require fewer layers of lithography. They have
been able to fabricate chips with one-tenth-micron features.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
From Lisp Machine Inc
They are marketing TI's Explorer along with PICON, a real-time expert
system application package, and IKE, a consultation-style expert system.
PICON achieves 200 rule frames/second in 2000-rule systems. They
project 1000 rule frames/second in 10,000-rule systems by the end of 1987.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
From Knowledge Engineering, 274 West 12th Street, PO Box 366, Village
Station, New York, New York 10014-0366
They are marketing a review of AI market resources for $47.50.
They also publish a Knowledge Engineering Newsletter for $275.00 a year.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
From Phillip G. Ryan Public Relations
Release arguing that AI provides a career opportunity for MIS Managers.
Provides a rating form for a person's company to see how it stands
competitively in applying AI to their needs. This was publicity for
Software People Concepts Inc. and AI Services Company.
Also another one publicizing the same two companies saying that 40 percent of
the largest 500 companies are actively pursuing AI but that it's not MIS
people doing the work.
They are also publicizing Halbrecht Associates, arguing that demand for
expert systems developers is high but that there is practically no
demand for "natural language, speech input/output, vision systems,
automatic theorem proving, automatic programming and super computing."
Companies are turning to traditional software engineers to build their
expert systems.
------------------------------
Date: Mon, 17 Nov 86 10:53 EST
From: TAKEFUJI%scarolina.csnet@RELAY.CS.NET
Subject: Invitation to USC AI & VLSI Demo
From: Dr. Yoshiyasu Takefuji
Date: Dec. 6, 1986
Time: 1 PM
Place: On the third floor at Engineering Building in Columbia,
South Carolina
Hello.
We will have a project presentation/demonstration on the following
subjects (19 graduate students and 17 undergraduate students are
involved in these projects):
1. Fuzzy inference VLSI parallel-engine
2. Fuzzy rule translator and simulator
3. Expert system for determination of fuzzy inference engine
architecture
4. Paramodulation VLSI inference engine (pattern matcher)
5. Function Description Translator from behavior description
to VLSI layout level (CIF or Magic file)
6. Terminal-based local network project to eliminate RS232c wire-jungle
7. Case studies of knowledge acquisition
8. Graphic Applications
Let me know whether you can come to see our demo.
csnet: takefuji%scarolina.edu
usenet: ncrcae!usccmi!takefuji
Thank you.
------------------------------
Date: 18 Nov 1986 10:27 EST (Tue)
From: Claudia Smith <CLAUDIA%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Subject: Seminar - Trajectory Planning in Time-Varying Environments
(MIT)
TRAJECTORY PLANNING IN TIME VARYING ENVIRONMENTS
Kamal Kant Gupta
McGill University
Montreal, Canada
ABSTRACT:
We present a novel approach to solving the trajectory planning problem
(TPP) in time-varying environments. The essence of our approach lies
in a heuristic but natural decomposition of TPP into two subproblems:
(1) planning a path to avoid collision with static obstacles and (2)
planning the velocity along the path to avoid collision with moving
obstacles. We call the first subproblem the path planning problem
(PPP) and the second the velocity planning problem (VPP). Thus, our
decomposition is summarized by the equation TPP \rightarrow PPP + VPP.
The symbol \rightarrow indicates that the decomposition holds under
certain assumptions, e.g., when obstacles are moving independently of
(i.e. not tracking) the robot. Furthermore, we pose the VPP in
path-time space, where time is explicitly represented as an extra
dimension, and reduce it to a graph search in this space. In fact,
VPP is transformed to a two-dimensional PPP in path-time space with
some additional constraints. Algorithms are then presented to solve the
VPP with different optimality criteria: minimum length in path-time
space, and minimum time.
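The reduction of VPP to a graph search in path-time space can be
illustrated with a minimal sketch (an illustration of the idea only, not
the speaker's actual algorithm; the function and its obstacle model are
assumptions). Position along the precomputed path forms one axis, time
the other, and a minimum-time velocity profile is found by searching
states (s, t) that avoid cells swept by the moving obstacles.

```python
from heapq import heappush, heappop

def plan_velocity(goal, blocked, max_step=1, horizon=100):
    """
    Minimum-time velocity planning posed as graph search in path-time space.
    A state (s, t) means 'position s along the fixed path at time t'.
    blocked(s, t) -> True iff a moving obstacle covers position s at time t.
    Each time step the robot waits (ds=0) or advances up to max_step.
    Returns the position profile [s(0), s(1), ...] reaching s >= goal, or None.
    """
    frontier = [(0, 0, [0])]        # (t, s, profile); min-heap orders by time
    seen = {(0, 0)}
    while frontier:
        t, s, profile = heappop(frontier)
        if s >= goal:
            return profile          # first pop at the goal is minimum-time
        if t >= horizon:
            continue
        for ds in range(max_step + 1):
            ns, nt = s + ds, t + 1
            if (ns, nt) in seen or blocked(ns, nt):
                continue
            seen.add((ns, nt))
            heappush(frontier, (nt, ns, profile + [ns]))
    return None

# Hypothetical scenario: an obstacle covers path position 2 during times 1-3,
# so the robot must wait before crossing.
profile = plan_velocity(4, lambda s, t: s == 2 and 1 <= t <= 3)
```

Because the heap is ordered by time, the first state popped at the goal
gives the minimum-time profile; a different edge cost would give the
minimum-length-in-path-time-space criterion mentioned in the abstract.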
DATE: Tuesday, Nov. 18th
TIME: 3pm
PLACE: NE43-773 (7th floor conference room)
HOST: Prof. Brooks
------------------------------
Date: 18 Nov 86 22:28:22 EST
From: Steven.Minton@k.cs.cmu.edu
Subject: Seminar - An Expert System for Building Layout (CMU)
This week's seminar is being given by Robert Coyne and Tim Glavin.
As usual, Friday, 3:15 in 7220.
ABSTRACT:
We report on work in progress on a generative expert system for the design
of building layouts that can be adapted to various problem domains. The
system does not reproduce the behavior of human designers; rather, it
is intended to complement their performance through (a) its ability to
systematically search for alternative solutions with promising trade-offs;
and (b) its ability to take a broad range of design concerns into account.
Work on the system also aims at providing insights into the applicability of
artificial intelligence techniques to space planning and building design in
general.
Spatial relations between the objects to be allocated serve as basic design
variables which define differences between layouts. They are represented by
a novel scheme, called an orthogonal structure, which allows us to enumerate
layouts in an abstract space, following a 'least commitment' strategy with
regard to details such as the dimensions of the objects. The
representation, and the generator based on it, are general and flexible
enough to allow generation of layouts in various 'domains'. The knowledge
required to distinguish good or 'best' layouts in particular domains is
located in special testers which are to be built up through the process of
'knowledge acquisition' as it typically occurs in the development of expert
systems.
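The generator/tester split described above (a general enumerator of
abstract layouts plus domain-specific testers acquired later) might look,
in a very reduced form, like the sketch below. The relation names, object
names, and the tester are all hypothetical; a real orthogonal structure is
far richer than one relation per object pair.

```python
from itertools import product

# A 'layout' here is an assignment of one abstract spatial relation to each
# object pair, with dimensions left uncommitted (the 'least commitment' idea).
RELATIONS = ("left-of", "right-of", "above", "below")

def generate_layouts(objects):
    """Enumerate abstract layouts: one relation per unordered object pair."""
    pairs = [(a, b) for i, a in enumerate(objects) for b in objects[i + 1:]]
    for combo in product(RELATIONS, repeat=len(pairs)):
        yield dict(zip(pairs, combo))

def acceptable(layout, testers):
    """Domain knowledge lives in the testers, built up by knowledge acquisition."""
    return all(test(layout) for test in testers)

# Hypothetical domain tester: the lobby must not sit above the office.
testers = [lambda lay: lay.get(("lobby", "office")) != "above"]
good = [lay for lay in generate_layouts(["lobby", "office"])
        if acceptable(lay, testers)]
```

The generator stays fixed across domains; only the tester list changes,
which is the adaptability the abstract claims for the system.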
------------------------------
Date: Wed, 19 Nov 86 22:31:02 EST
From: "Steven A. Swernofsky" <SASW@MX.LCS.MIT.EDU>
Subject: Seminar - New Paths in Knowledge Engineering (BBN)
The Science Development Program will present Professor Donald Michie
of the Turing Institute of Scotland as the next speaker in the Guest
Lecture series. His lecture will take place on Thursday, November 20
at 4:30 p.m. in the Newman Auditorium, Bolt Beranek and Newman Inc.,
70 Fawcett Street, Cambridge, Ma. Dr. Michie will be lecturing on
the topic "New Paths in Knowledge Engineering."
Following is an abstract of his talk:
Artificial Intelligence is not something sudden. It has been on
the road for centuries. In intellectual terms the task is to
complement the mathematical universalism of physics with a logic
particular to man. New approaches based on this shift of
philosophy are today breaking into the marketplace, driven by
certain pressing industrial and military requirements.
------------------------------
End of AIList Digest
********************
∂24-Nov-86 0236 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #265
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Nov 86 02:36:29 PST
Date: Mon 24 Nov 1986 00:30-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #265
To: AIList@SRI-STRIPE.ARPA
AIList Digest Monday, 24 Nov 1986 Volume 4 : Issue 265
Today's Topics:
Queries - GLISP & PEARL & PD OPS5 and/or LISP,
AI Tools - KEE on Symbolics vs. Xerox,
Education - Cognitive Science Programs,
Ethics - AI and the Arms Race
----------------------------------------------------------------------
Date: Thu, 20 Nov 86 14:03:05 est
From: rochester!tropix!dls@seismo.CSS.GOV (David L. Snyder )
Reply-to: tropix!dls@seismo.CSS.GOV (David L. Snyder )
Subject: glisp info request
Someone asked me about glisp today, and all I could remember/say is that
I thought Gordon Novak had written it. Anyone out there care to refresh
my memory/enlighten me?
Thanks.
David Snyder
GCA/Tropel Division
60 O'Connor Road
Fairport, NY 14450
P.S. Try tropix!dls@rochester as an arpa address if other alternatives fail.
------------------------------
Date: 20 Nov 86 18:53:56 GMT
From: ritcv!tropix!dls@ROCHESTER.ARPA (David L. Snyder )
Subject: PEARL info request
A few questions about pearl (Package for Efficient Access to
Representations in Lisp):
Can anyone tell me what, if any, activity is going on with pearl these
days? (Is the pearl-bugs mailing list still active?) Has anyone used
it for non-toy problems? Any chance it'll be ported into Common
Lisp? Is there something better that supersedes it (and is in the
public domain)?
Thanks!
P.S. Try tropix!dls@rochester as an arpa address if other alternatives fail.
------------------------------
Date: 21 Nov 86 20:01:08 GMT
From: decvax!wanginst!sullivan@ucbvax.Berkeley.EDU (Brian Sullivan)
Subject: PD OPS5 and/or LISP ???
Sorry if this question has already been asked. I am a new subscriber to
this news group.
Does anyone know of a public domain or low-cost OPS5 for the IBM or
Wang PC?
Does anyone know of a public domain or low-cost Lisp for the IBM or
Wang PC?
Please reply to the address below, thanks in advance.
-------
Brian M. Sullivan sullivan@wanginst (Csnet)
Wang Institute of Graduate Studies decvax!wanginst!sullivan (UUCP)
Tyng Road, Tyngsboro, MA 01879 (617) 649-9731
------------------------------
Date: 19 Nov 86 08:49 EST
From: SHAFFER%SCOVCB.decnet@ge-crd.arpa
Subject: KEE on Symbolics vs. Xerox
We are working on projects using Intellicorp's KEE on a Symbolics
system. We had been running KEE 2.1 using Zetalisp releases 6.1, 6.2, and 6.3.
Recently, we have received the updates for both products. The
new KEE 3.0 incorporates the "worlds" concept along with an
implementation of an Assumption-based Truth Maintenance System (ABTMS).
The latest version of Zetalisp is Symbolics Common Lisp (Genera 7.0).
Is anyone else out there in a similar environment?
We are interested in the following situations:
1) KEE 2.1 to 3.0 conversion problems
a) using "worlds"
b) using ABTMS
c) using KEEPictures
2) Genera 7.0 performance
a) FLAVORS
b) presentation types
3) KEE environments
a) KEE 2.1 on Zetalisp
b) KEE 3.0 on Zetalisp
c) KEE 2.1 on Genera 7.0
d) KEE 3.0 on Genera 7.0
I would like to comment on the Symbolics vs. Xerox debate.
It seems to me that the discussion should involve a real-life
application that runs on both machines -- for example, KEE or
ART. And since Xerox and Symbolics will both be using Common
Lisp, even the language will be similar. Create a portable,
interactive application using KEE, let's say, and run it on both
machines similarly equipped. Wouldn't this be a better thing than
long, long stories about someone's dated experiences on one of
the two machines?
------------------------------
Date: 21 Nov 86 22:19:34 GMT
From: milano!conklin@im4u.utexas.edu
Subject: Re: choosing grad schools
When I was there three years ago U. Mass. (Amherst) had an
aggressively interdisciplinary approach to Cognitive Science, involving
the Computer Science (COINS), Linguistics, Psychology, and
Philosophy departments. While there was no single department
and no degree, there was active encouragement for students to
take courses in the other departments, and many advanced seminars
were co-led by faculty of several departments. I don't know
the status of things now, especially since Michael Arbib, a chief
architect of that approach, has gone on to USC in LA.
--
Jeff Conklin
MCC Software Technology Program
(512) 338-3562
conklin@MCC.arpa ut-sally!im4u!milano!conklin
------------------------------
Date: Fri, 21 Nov 86 10:54:04 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@RELAY.CS.NET>
Subject: Cognitive Science at SUNY Buffalo
GRADUATE GROUP IN COGNITIVE SCIENCE
STATE UNIVERSITY OF NEW YORK AT BUFFALO
Buffalo, NY 14260
Gail A. Bruder William J. Rapaport
Department of Psychology Department of Computer Science
rapaport@buffalo.csnet
Co-Directors, 1986-1987
Cognitive Science is an interdisciplinary effort intended to investigate
the nature of the human mind. This effort requires the theoretical
approaches offered by computer science, linguistics, mathematics, philo-
sophy, psychology, and a host of other fields related by a mutual
interest in intelligent behavior.
The Graduate Group in Cognitive Science was formed to facilitate
cognitive science research at SUNY Buffalo. Its activities have focused
upon language-related issues and knowledge representation. These two
areas are important to the development of cognitive science and are well
represented at SUNY Buffalo by the research interests of faculty and
graduate students in the group.
Since its formal recognition in April 1981, the Graduate Group has
grown quickly. Currently, its membership of over 150 faculty and gradu-
ate students is drawn from the Departments of Computer Science; Psychol-
ogy; Linguistics; Communicative Disorders and Sciences; Philosophy;
Instruction; Communication; Counseling and Educational Psychology; Edu-
cational Organization, Administration, and Policy Studies; the Intensive
English Language Institute; Geography; and Industrial Engineering; as
well as other area colleges and universities. The Group sponsors lec-
tures and informal discussions with visiting scholars; discussion groups
focused on Group members' current research; an interdisciplinary, team-
taught, graduate course, "Introduction to Cognitive Science"; a graduate
seminar on current topics and issues in language understanding; and a
Cognitive Science Library.
1985 COLLOQUIA
Our colloquium speakers during 1985 included Andrew Ortony (Psychology,
Illinois), David Waltz (Computer Science, Brandeis), Alice ter Meulen
(Linguistics, Washington), Joan Bybee (Linguistics, SUNY Buffalo), Livia
Polanyi (AI, BBN), Joan Bresnan (Linguistics, Stanford), Leonard Talmy
(Linguistics, Berkeley), Judith Johnston (Communicative Disorders, Indi-
ana), Richard Weist (Psychology, SUNY Fredonia), and Benjamin Kuipers
(AI, Texas).
RESEARCH PROJECT
A research subgroup of the Graduate Group in Cognitive Science is
actively engaged in an interdisciplinary research project investigating
narrative comprehension, specifically the role of a "deictic center".
Grant proposals, conference papers, publications, and several disserta-
tion proposals have come from this collaborative effort. A technical
report describing this project--Bruder et al., "Deictic Centers in Nar-
rative: An Interdisciplinary Cognitive-Science Project," SUNY Buffalo
Department of Computer Science Technical Report No. 86-20--is available
from William J. Rapaport, at the above address.
Specifically, we are developing a model of a cognitive agent's
comprehension of narrative text. Our model will be tested on a computer
system that will represent the agent's beliefs about the objects, rela-
tions, and events in narrative as a function of the form and content of
the successive sentences encountered. In particular, we are concentrat-
ing on the role of spatial, temporal, and focal-character information
for the cognitive agent's comprehension.
We propose to test the hypothesis that the construction and modifi-
cation of a deictic center is of crucial importance for much comprehen-
sion of narrative. We see the deictic center as the locus in conceptual
space-time of the objects and events depicted or described by the sen-
tences currently being perceived. At any point in the narrative, the
cognitive agent's attention is focused on particular characters (and
other objects) standing in particular spatial and temporal relations to
each other. Moreover, the agent "looks" at the narrative from the per-
spective of a particular character, spatial location, or temporal loca-
tion. Thus, the deictic center consists of a WHERE-point, a WHEN-point,
and a WHO-point. In addition, reference to characters' beliefs, per-
sonalities, etc., are also constrained by the deictic center.
We plan to develop a computer system that will "read" a narrative
and answer questions about the deictic information in the text. To
achieve this goal, we intend to carry out a group of projects that will
allow us to discover the linguistic devices in narrative texts, test
their psychological reality for normal and abnormal comprehenders, and
analyze psychological mechanisms that underlie them. Once we have the
results of the individual projects, we will integrate them and work to
build a unified theory and representational system that incorporates the
significant findings. Finally, we will test the system for coherence
and accuracy in modeling a human reader, and modify it as necessary.
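As a data-structure sketch only (not the group's actual representational
system), the deictic center's three components and their sentence-by-
sentence updating might be rendered as follows; the sentences and their
hand-extracted cues are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DeicticCenter:
    """Locus in conceptual space-time from which the narrative is 'viewed'."""
    where: str   # WHERE-point: current spatial location
    when: str    # WHEN-point: current narrative time
    who: str     # WHO-point: focal character

def update(dc, cues):
    """Shift only the components that a sentence's deictic cues mention."""
    return DeicticCenter(
        where=cues.get("where", dc.where),
        when=cues.get("when", dc.when),
        who=cues.get("who", dc.who),
    )

# Illustrative run over two sentences' hand-extracted cues:
dc = DeicticCenter(where="the cabin", when="that evening", who="Anna")
dc = update(dc, {"where": "the lake"})   # "She walked down to the lake."
dc = update(dc, {"who": "Marek"})        # "Marek watched her from the dock."
```

Components not mentioned by a sentence persist, which is what lets the
deictic center constrain the interpretation of later sentences.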
COURSEWORK
The Graduate Group in Cognitive Science provides students with the
opportunity for training and research in Cognitive Science at the Ph.D.
level. Students must be residents in a host department (Communicative
Disorders and Sciences, Computer Science, Linguistics, Philosophy,
Psychology), whose requirements must be fulfilled (but which can include
coursework in the other Cognitive Science disciplines), and must meet
certain additional requirements: enrollment in the graduate course,
Introduction to Cognitive Science; and the completion of a "Focus" in
one other participating department. Further details are available from
the Co-Directors of the Group.
The Graduate Group faculty also encourages outstanding undergradu-
ates to develop an interest in Cognitive Science. Qualified undergradu-
ates may request admission to the graduate course (Introduction to Cog-
nitive Science) and can design a major in Cognitive Science under the
Special Majors program at SUNY Buffalo.
------------------------------
Date: Fri, 21 Nov 86 10:54:04 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@RELAY.CS.NET>
Subject: Graduate Group in Vision at SUNY Buffalo
GRADUATE GROUP IN VISION
STATE UNIVERSITY OF NEW YORK AT BUFFALO
Buffalo, NY 14260
Malcolm Slaughter
Department of Biophysics
Director, 1986-1987
It is becoming increasingly important for vision researchers in diverse
fields to interact, and the SUNY Buffalo Graduate Group in Vision has
been formed to facilitate that interaction. Current membership includes
25 faculty and 25 students from 10 departments (Computer Science,
Electrical and Computer Engineering, Industrial Engineering, Geography,
Psychology, Biophysics, Physiology, Biochemistry, Philosophy, and Media
Studies). The Group organizes a colloquium series and provides central-
ized information about activities both on campus and in the local area
that are of interest to vision researchers.
The Vision Group received formal recognition and funding in April
1986. The 1986-87 activities include: biweekly meetings to discuss the
current research being performed in one of the 20 vision laboratories
represented in the group; an upper division undergraduate/lower-level-
graduate course, which serves as an introduction to interdisciplinary
research in vision; and a colloquium series. This year's speakers
include Jerry Feldman (Computer Science, Rochester), Peter Schiller
(Psychology, MIT), Bela Julesz (Psychology, Bell Labs/Murray Hill),
Tomaso Poggio (AI, MIT; tentative), and Ed Pugh (Biophysics, Pennsyl-
vania; tentative).
------------------------------
Date: 21 Nov 86 04:32:47 GMT
From: rutgers!cbmvax!bpa!burdvax!blenko@SPAM.ISTC.SRI.COM (Tom
Blenko)
Subject: Re: AI and the Arms Race
In article <8611181719.AA00510@watdcsu.uucp> "B. Lindsay Patten"
<shen5%watdcsu.waterloo.edu@RELAY.CS.NET> writes:
[... stuff ...]
|The real point Dr. Weizenbaum was trying to make (in my
|opinion) was that we should weigh the good and bad applications of
|our work and decide which outweighs the other.
If Weizenbaum or anyone else thinks he or she can succeed in weighing
possible good and bad applications, I think he is mistaken. Wildly
mistaken.
Why does Weizenbaum think technologists are, even within the bounds of
conventional wisdom, competent to make such judgements in the first
place? Everywhere I turn there is a technologist telling me why SDI
cannot succeed -- which tells me that technologists fail to comprehend
consequences of their work from any perspective except their own. Is
it not possible that the principal consequences of SDI will be
something other than an operational defense system?
Why doesn't Weizenbaum do some research and talk about it? Why is
Waterloo inviting him to talk on anything other than his research
results? No reply necessary, but doesn't the fact that technically-
oriented audiences are willing to spend their time listening to this
sort of amateur preaching itself suggest what their limitations are
with regard to difficult ethical questions?
Tom
------------------------------
Date: 22 Nov 86 07:46:41 GMT
From: anderson@unix.macc.wisc.edu (Jess Anderson)
Subject: Re: AI and the Arms Race
In article <2862@burdvax.UUCP>, blenko@burdvax.UUCP (Tom Blenko) writes:
| Why doesn't Weizenbaum do some research and talk about it? Why is
| Waterloo inviting him to talk on anything other than his research
| results? No reply necessary, but doesn't the fact that technically-
| oriented audiences are willing to spend their time listening to this
| sort of amateur preaching itself suggest what their limitations are
| with regard to difficult ethical questions?
Even as a preacher, Weizenbaum is hardly an amateur! Do be fair. On
your last point, I would claim the evidence shows just the opposite
of what you claim, namely that technically-oriented audiences are
willing to spend their time listening to intelligent opinions shows
that they are more qualified than some people think to consider
difficult ethical questions. Of course I am an amateur, too -- of
life (remember what the word means!).
--
==ARPA:====================anderson@unix.macc.wisc.edu===Jess Anderson======
| UUCP: {harvard,seismo,topaz, 1210 W. Dayton |
| akgua,allegra,ihnp4,usbvax}!uwvax!uwmacc!anderson Madison, WI 53706 |
==BITNET:============================anderson@wiscmacc===608/263-6988=======
------------------------------
End of AIList Digest
********************
∂24-Nov-86 0441 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #266
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 24 Nov 86 04:41:43 PST
Date: Mon 24 Nov 1986 00:57-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #266
To: AIList@SRI-STRIPE.ARPA
AIList Digest Monday, 24 Nov 1986 Volume 4 : Issue 266
Today's Topics:
Discussion Lists - Neural Networks Digest & Psychnet,
Call for Papers - Journal of Logic Programming,
Conferences - Workshop on AI in Process Engineering &
IEEE Conference on AI Applications: Advance Program
----------------------------------------------------------------------
Date: Thu, 20 Nov 86 10:41:08 cst
From: Mike Gately 995-3273 M/S 154
<gately%crl1%ti-csl.csnet@RELAY.CS.NET>
Subject: Neural Networks Digest
---- NEW DIGEST ANNOUNCEMENT ----
I am starting up a new e-mail Digest which will cover the
topic of Neural Networks (both real and imagined). The name
of this new mailing list is
---- NEURON ----
You have probably noticed that there has been an increase
in the amount of message traffic regarding connectionism on
this and other digests. The intent is that this digest will
be a focal point for this information.
NEURON is open to discussion of any topic related to neurons.
This should include:
Neural Networks
Algorithms
Software Simulations
Digital Hardware
Analog Hardware
Optical Hardware
Biology
Neurophysiology
Neuroscience
Cellular Automata
As you can see, I am attempting to get some interest from
the 'wet ware' folks. This may be a first, but the results
will surely be interesting.
The official starting date of the mailing list is the first
of December. I am using the US MAIL to inform many of the
researchers in this field that I already know about.
If you are interested in receiving this Digest, reply to
CSNET: NEURON@TI-CSL
ARPANET: NEURON%TI-CSL.CSNET%RELAY.CS.NET@CSNET-RELAY.ARPA
with your current net address. If you expect that a large
number of folks from your site will want to receive this digest,
contact your site postmaster to set up a redistribution file
and have him/her send me a single site address.
As I receive addresses from you, I will try to send out a
Welcome message. If you do not receive this within 4 work days
please resend your request information (I hope this isn't a
mistake).
If you responded to Mitch Wyle's message of 2 weeks ago about
such a Digest, he has forwarded those messages to me.
Regards,
Michael T. Gately
Texas Instruments, Inc.
Advanced Concepts Branch
GATELY%CRL1%TI-CSL.CSNET%RELAY.CS.NET@CSNET-RELAY.ARPA
------------------------------
Date: Sat, 22 Nov 86 13:28:39 CST
From: Psychnet Newsletter and Bulletin Board
Reply-to: EPSYNET%UHUPVM1.BITNET@WISCVM.WISC.EDU
Subject: announcement for AIlist
Persons interested in artificial intelligence who also have
interests in psychology may wish to subscribe to the Psychnet
Newsletter via the net. Contributors from time to time even
include such persons as the (in)famous Stevan Harnad! To
subscribe send your request to:
epsynet%uhupvm1.bitnet@wiscvm.wisc.edu
Thanks, Bob Morecock, Psychnet Editor
------------------------------
Date: Wed, 12 Nov 86 22:01:45 EST
From: Leon Sterling <leon%case.csnet@RELAY.CS.NET>
Reply-to: Leon Sterling <leon%case.csnet@RELAY.CS.NET>
Subject: Call for papers -Journal of Logic Programming
Call for Papers
Papers are requested for a special issue of the Journal of Logic
Programming concerned with
Applications of Logic Programming for Knowledge-Based Systems
The papers should describe applications which exploit special features of
logic programming. Two examples: a problem solved by using a logic
programming language where the solution would be more difficult to state
in another language; or the development of a methodology for the more
effective use of logic programs. The reported research should be original
and should not have appeared elsewhere. Updates of successful, ongoing
projects containing material not otherwise available will also be
considered.
Applications of interest include, but are not limited to:
Financial expert systems Diagnosis systems
Medical expert systems Configuration systems
Expert system tools VLSI design
Natural language programs Problem-solving
Programming environments Learning
Please send 4 copies of your paper by May 31, 1987 to
Leon Sterling,
Department of Computer Engineering and Science,
Case Western Reserve University,
Cleveland, Ohio, USA 44106
Electronic mail address:
CSNET: leon@case
UUCP: ...!decvax!cwruecmp!leon
------------------------------
Date: Thu 20 Nov 86 16:26:38-EST
From: V. Venkatasubramanian <VENKAT@CS.COLUMBIA.EDU>
Subject: Workshop on AI in Process Engineering...
WORKSHOP ON
ARTIFICIAL INTELLIGENCE IN PROCESS ENGINEERING
Place: COLUMBIA UNIVERSITY, Kellogg Auditorium
Date: March 9-10, 1987
Deadline: Dec 22, 1986 for the submission of applications for
attending the workshop.
Sponsored by: American Association for Artificial Intelligence
Air Products
Amoco
1. Themes and Motivations:
In the past few years there has been considerable work in applying recent
advances in Artificial Intelligence to problems in the various disciplines of
engineering. Substantial impact has already been seen in electrical,
mechanical, and civil engineering applications. It is now well-recognized that
the domain of Process Engineering also has much to gain from applications of
AI. Particular attention is being paid to fault diagnosis and control, process
design and planning. Interest in the process engineering community (both in
academia and in industry) is substantial, but only a handful of researchers are
currently engaged in applying AI to problems in process engineering. This is
largely due to a lack of proper exposure of this novel area to the rest of the
community. This workshop is being organized to provide this much needed
exposure to researchers in academia and industry.
Thus the workshop will serve the following current needs:
- Bring together for an intense program, people in academia as well as in
industry who are interested in AI in process engineering.
- Disseminate the ideas and techniques of AI in an appropriate form by
relating them to fault diagnosis and control, design and planning
applications in process engineering.
- Provide demonstrations of some expert system prototypes in process
engineering.
- Help resolve the confusion about what AI can do, how to go about applying
AI for process engineering problems, etc.
- To provide a long-term research focus, identify a set of problems that
have important basic research issues, as well as useful practical
components.
2. Workshop Subjects:
* Fault Diagnosis
* Design
* Operations
3. Workshop Speakers:
Chemical Engineering:
Prof. Jim Davis (Ohio State), Prof. Prasad Dhurjati (Delaware)
Prof. George Stephanopoulos (MIT), Prof. V. Venkatasubramanian (Columbia)
Prof. Art Westerberg (Carnegie-Mellon)
Computer Science:
Prof. B. Chandrasekaran (Ohio State), Prof. Ken Forbus (Univ. of Illinois)
Dr. Jeff Pan (Schlumberger Research), Dr. John Kunz (Intellicorp)
4. Workshop Participation:
For the workshop to be intense, stimulating, and useful, we feel that the
number of participants must be limited. Hence the number of participants,
besides the invited speakers, will be limited to fifty. Interested parties are
urged to contact one of the members of the organizing committee (given below)
before Dec 22nd by writing a letter describing their background, research
interests, and current process engineering problems they are working on. The
organizing committee will select the participants from the applicant pool.
Participation is by invitation only. The registration fee is $ 150 for the
two-day workshop and will include a copy of the proceedings.
5. Organizing Committee:
Prof. V. Venkatasubramanian, Chairman
Intelligent Process Engineering Laboratory
Department of Chemical Engineering
Columbia University
New York, NY 10027.
(212) 280-4453
Prof. G. Stephanopoulos, Co-Chairman
Laboratory for Intelligent Systems in Process Engineering
Department of Chemical Engineering
Massachusetts Institute of Technology
Cambridge, MA 02139.
(617) 253-3904
Prof. James Davis
Department of Chemical Engineering
Ohio State University
Columbus, OH 43210.
(614) 292-0090
------------------------------
Date: Thu 20 Nov 86 17:28:23-CST
From: Jim Miller <HI.JMILLER@MCC.COM>
Subject: IEEE Conference on AI Applications: Advance Program
THE THIRD IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE APPLICATIONS
Advance Program
Orlando Hyatt Regency
Orlando, Florida
February 23-28, 1987
Sponsored by the Computer Society of the IEEE
For information on any part of the conference, please contact:
The Third IEEE Conference on AI Applications
Computer Society of the IEEE
1730 Massachusetts Avenue NW
Washington DC 20036-1903
202-371-1013
Conference Committee:
General Chair: Jan Aikins, Aion Corporation
Program Chairs: James Miller and Elaine Rich, MCC
Tutorials Chair: Paul Harmon, Harmon Associates
For the Computer Society of the IEEE: William Habingreither
Program Committee:
William J. Clancey, Stanford University
Keith Clark, Imperial College
Byron Davies, Texas Instruments
Michael Fehling, Rockwell International
Mark Fox, Carnegie-Mellon University and Carnegie Group Inc
Bruce Hamill, Applied Physics Laboratory, Johns Hopkins University
Peter Hart, Syntelligence
Elaine Kant, Schlumberger-Doll Research
Paul Kline, Texas Instruments
Benjamin Kuipers, University of Texas
John McDermott, Carnegie Mellon University
Roy Maxion, Carnegie Mellon University
Charles Petrie, MCC
Bruce Porter, University of Texas
John Roach, Virginia Tech
Marty Tenenbaum, Schlumberger
Harry Tennant, Texas Instruments
Michael D. Williams, IntelliCorp
==============================================================================
Wednesday, February 25, 1987
==============================================================================
9:00 - 10:00: KEYNOTE ADDRESS
AI and Natural Language in the Real World
Gary Hendrix, Symantec
10:00 - 10:30: BREAK
10:30 - 12:00: INVITED TALKS:
Viewing Knowledge Bases as Qualitative Models
William J. Clancey, Stanford University
Second-Generation Manufacturing Systems
Mark Fox, Carnegie Mellon University and Carnegie Group Inc
10:30 - 12:00: Paper Session 1A: KNOWLEDGE ACQUISITION
Verifying Consistency of Production Systems
T. A. Nguyen, Lockheed
Principles of Design for Knowledge Acquisition
Thomas Gruber, University of Massachusetts
Probabilistic Inference
Won D. Lee, University of Texas at Arlington; Sylvian R. Ray,
University of Illinois
10:30 - 12:00: Paper Session 1B: QUESTION ANSWERING
Question Answering with Rhetorical Relations
Wanying Jin and Robert F. Simmons, University of Texas at Austin
Question-Driven Approach to the Construction of Knowledge-Based Software
Advisor Systems
Patrick Constant, Stanislaw Matwin, and Stanislaw Szpakowicz, University of
Ottawa
12:00 - 1:30: LUNCH
1:30 - 3:30: Paper Session 2A: MANUFACTURING
A Knowledge-based Approach to Printing Press Configuration
M. S. Lan, R. M. Panos, and M. S. Balban, Rockwell International
A Knowledge Based Imaging System for Electromagnetic Nondestructive Testing
L. Udpa and W. Lord, Colorado State University
An Object-Based Architecture for Manufactured Parts Routing
R. L. Young, D. M. O'Neill, P. W. Mullarkey, P. C. Gingrich, A. Jain, and
S. Sardana, Schlumberger-Doll Research
Expert System for Visual Solder Joint Inspection
Sandra L. Bartlett, Charles L. Cole, and Ramesh Jain, University of Michigan
1:30 - 3:30: Paper Session 2B: KNOWLEDGE REPRESENTATION
Breaking the Primitive Concept Barrier
Robert Kass, Ron Katriel, and Tim Finin, University of Pennsylvania
FRAMEWORKS: A Uniform Approach to Knowledge Representation for Natural
Language Processing
Howard R. Smith, Warren H. Harris, and Dan Simmons, United Technologies
Modeling Connections for Circuit Diagnosis
Mingruey R. Taie and Sargur N. Srihari, State University of New York at Buffalo
CONGRES: Conceptual Graph Reasoning System
Anand S. Rao and Norman Y. Foo, University of Sydney
1:30 - 3:30: INVITED PANEL
The Challenges of Integrating AI into Real-Time Control and C↑2
Moderator: J. R. Gersh, Johns Hopkins University Applied Physics Laboratory
3:30 - 4:00: BREAK
4:00 - 5:30: PLENARY PANEL
Programming Languages for AI: Lisp vs. Conventional Languages
Moderator: Mark Miller, Computer * Thought Corporation
==============================================================================
Thursday, February 26, 1987
==============================================================================
9:00 - 10:00: KEYNOTE ADDRESS
Expert Systems in a General Cognitive Architecture
John Laird, University of Michigan
10:00 - 10:30: BREAK
10:30 - 12:00: Paper Session 3A: EXPLANATION-BASED LEARNING
Analyzing Variable Cancellations to Generalize Symbolic Mathematical
Calculations
Jude W. Shavlik and Gerald F. DeJong, University of Illinois
Extending Explanation-Based Learning: Failure-Driven Schema Refinement
Steve A. Chien, University of Illinois
A Learning Apprentice System for Mechanical Assembly
Alberto Maria Segre, University of Illinois
10:30 - 12:00: Paper Session 3B: AI AND REAL-TIME PROGRAMMING
Real Time Process Management for Materials Composition in Chemical
Manufacturing
Bruce D'Ambrosio and Peter Raulefs, FMC Corporation, Michael R. Fehling and
Stephanie Forrest, Teknowledge
Knowledge-Based Experiment Builder for Magnetic Resonance Imaging (MRI) Systems
J. Sztipanovits, C. Biegl, G. Karsai, J. Bourne, C. Harrison, and R. Mushlin,
Vanderbilt University
YES/L1: Integrating Rule-Based, Procedural, and Real-time Programming for
Industrial Applications
A. Cruise, R. Ennis, A. Finkel, J. Hellerstein, D. Loeb, M. Masullo,
K. Milliken, H. Van Woerkom, N. Waite, IBM; D. Klein,
University of Pennsylvania
10:30 - 12:00: INVITED PANEL
Delivery in the Real World
Moderator: Esther Dyson, EDventures Holding, Inc.
12:00 - 1:30: LUNCH
1:30 - 3:30: Paper Session 4A: DIAGNOSIS
LVA: A Knowledge-based System for Diagnosing Faults in Digital Data Loggers
S. C. Laufmann and R. S. Crowder III, Battelle Pacific Northwest Laboratory
A Multiparadigm Knowledge-based System for Diagnosis of Large Mainframe
Peripherals
David W. Rolston, Honeywell
Distributed Diagnosis of Systems with Multiple Faults
Hector Geffner and Judea Pearl, UCLA
Testing, Verifying, and Releasing an Expert System: The Case History of Mentor
Edward L. Cochran and Barbara L. Hutchins, Honeywell
1:30 - 3:30: Paper Session 4B: ROBOTICS AND PERCEPTION
On the Terrain Acquisition by a Point Robot Amidst of Polyhedral Obstacles
Nageswara S. V. Rao, S. S. Iyengar, Louisiana State University;
B. John Oommen, Carleton University; R. L. Kashyap, Purdue University
A Computational Theory and Algorithm for Fluent Reading
Jonathan J. Hull, State University of New York at Buffalo
Automated Reasoning about Machine Geometry and Kinematics
Andrew Gelsey, Yale University
Color Separation Using General-Purpose Computer Vision Algorithms
Deborah Walters, University of Buffalo
1:30 - 3:30: Paper Session 4C: CASE STUDIES
FRESH: A Naval Scheduling System
Michael Babin, Michael Gately, and Michael Sullivan, Texas Instruments
Building Near-Term Fieldable Military AI Systems: Formalisms and an Example
Mark L. Akey and Kirk A. Dunkelberger, Magnavox
Rule-Based Flexible Control of Tutoring Process in Scene-oriented CAI systems
Ichiro Morihara, Toru Ishida, and Hiroyuki Furuya, NTT Electrical
Communications Laboratory
Abductive and Deductive Inference in an Expert System
Jacqueline A. Haynes and Joshua Lubell, University of Maryland
3:30 - 4:00: BREAK
4:00 - 5:30: PLENARY PANEL
The Future of AI Applications: An Industry Perspective
Panelists: Walden C. Rhines, Texas Instruments; Herbert Schorr, IBM;
Thomas P. Kehler, IntelliCorp
Moderator: Esther Dyson, EDventures Holding, Inc.
==============================================================================
Friday, February 27, 1987
==============================================================================
9:00 - 10:00: KEYNOTE ADDRESS
Overcoming the Brittleness Bottleneck
Douglas B. Lenat, MCC
10:00 - 10:30: BREAK
10:30 - 12:00: Paper Session 5A: SEARCH
The Cycle-Cutset Method for Improving Search Performance in AI Applications
Rina Dechter and Judea Pearl, UCLA
Schedule Optimization with Probabilistic Search
Lawrence Davis and Frank Ritter, Bolt Beranek and Newman
10:30 - 12:00: Paper Session 5B: UNCERTAINTY
Uncertain Inference Using Belief Functions
Sunggu Lee and Kang G. Shin, University of Michigan
Truth Maintenance with Numeric Certainty Estimates
Bruce D'Ambrosio, FMC Corporation
A Real-Time AI System for Military Communications
M. E. Ulug, General Electric
10:30 - 12:00: INVITED TALKS
Judging the Risk: Expert Systems in Finance
Peter Hart, Syntelligence
Artificial Intelligence: Expectations vs. Reality
Jay M. Tenenbaum, Schlumberger Palo Alto Research
12:00 - 1:30: LUNCH
1:30 - 3:30: Paper Session 6A: DEFAULT REASONING
A Framework for Describing Troubleshooting Behavior Using Default Reasoning
and Functional Abstraction
Michael Young, Stanford University
Assumption Based Reasoning Applied to Personal Flight Planning
Adithya M. Rao and Gautam Biswas, University of South Carolina;
Prasanta K. Bose, Texas Instruments
Classification by Semantic Matching
Paul R. Cohen, Philip M. Stanhope, and Rick Kjeldsen, University of
Massachusetts
Default Reasoning -- Extension and Semantics
Keki B. Irani and Zhaogang Qian, University of Michigan
1:30 - 3:30: Paper Session 6B: DESIGN AND PLANNING
Goal Directed Planning of the Design Process
Christopher Tong, Rutgers University
Concerns: A Means of Identifying Potential Plan Failures
Marc Luria, University of California at Berkeley
A VLSI Design Automation System Using Frames and Logic Programming
Takayoshi Yokota, Keisuke Bekki, and Nobuhiro Hamada, Hitachi Research
Laboratory
PLEX: A Knowledge Based Placement Program for Printed Wire Boards
Sankar Virdhagriswaran, Sam Levine, Scott Fast, and Susan Pitts, Honeywell
1:30 - 3:30: Paper Session 6C: SOFTWARE AND TOOLS
Engineous: A Knowledge Directed Computer Aided Design Shell
Dennis J. Nicklaus, Siu S. Tong, and Carol J. Russo, General Electric
Implementing Distributed AI Systems
Les Gasser, Carl Braganza, and Nava Herman, USC
AI Based Software Maintenance
Lori B. Alperin and Beverly I. Kedzierski, Carnegie Group Inc
Application of Correlation Measures for Validating Structured Selectors
Keith A. Butler, Boeing
==============================================================================
Tutorial Program
==============================================================================
Monday, February 23, 1987
Morning:
Managing Knowledge System Development
Avron Barr, Aldo Ventures
Programming in the Lisp Machine Environment
Sue Green, Texas Instruments
Afternoon:
Analyzing Expert System Building Tools
Paul Harmon, Harmon Associates
Logic Programming, Expert Systems, and Databases
Steve Hardy, Teknowledge
Tuesday, February 24, 1987
Morning:
AI and Computer Integrated Manufacturing
Arvind Sathi, Carnegie Group Inc
Commercial Applications of Natural Language Processing
Tim Johnson, Ovum Ltd.
Afternoon:
AI Programming on Parallel Machines
Joe Brandenburg, Intel
Intelligent Interfaces
Marilyn Stelzner, IntelliCorp
------------------------------
End of AIList Digest
********************
∂25-Nov-86 2314 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #267
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 25 Nov 86 23:12:37 PST
Date: Tue 25 Nov 1986 20:35-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #267
To: AIList@SRI-STRIPE.ARPA
AIList Digest Wednesday, 26 Nov 1986 Volume 4 : Issue 267
Today's Topics:
Queries - Lisp or Smalltalk for Amiga & XLISP 1.8,
Philosophy - Searle, Turing, Nagel
----------------------------------------------------------------------
Date: 24 Nov 86 11:21:33 PST (Monday)
From: Tom.EdServices@Xerox.COM
Subject: Lisp, Smalltalk for Amiga
Does anyone know of Smalltalk or any Lisps (besides Xlisp and Cambridge
Lisp) for the Commodore-Amiga? What I really want is a Common Lisp.
Thanks for any help.
------------------------------
Date: 24 Nov 86 17:42:57 GMT
From: mcvax!ukc!einode!tcdcs!omahony@seismo.css.gov (O'Mahony Donal)
Subject: Looking for source of XLISP 1.8
I am looking for the source of Dave Betz's XLISP version 1.8. This is
a version of LISP with object oriented extensions. I understand that
it is available on the BIX bulletin board, but it is difficult to
gain access from here. I would be grateful if somebody would post a copy.
Donal O'Mahony,
Trinity College,
Dublin,
Ireland
------------------------------
Date: 22 Nov 86 21:46:13 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan
Harnad)
Subject: Re: Searle, Turing, Nagel
On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp>
Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made
nonspecific reference to prior discussions of intelligence,
consciousness and Nagel. I'm not altogether certain that his
contribution was intended as a followup to the discussion that has
been going on lately under the heading "Searle, Turing, Categories,
Symbols," but since it concerns the issues of that discussion, I am
responding on the assumption that it was. R. Faichney writes:
> [T. Nagel's] paper [See Mortal Questions, Cambridge University Press
> 1979, and The View From Nowhere, Oxford University Press 1986]
> is not ... strictly relevant to a discussion of machine
> intelligence, because what Nagel is concerned with is not intelligence,
> but consciousness. That these are not the same, may be realised on a
> little contemplation. One may be most intensely conscious while doing
> little or no cogitation. To be intelligent - or, rather, to use
> intelligence - it seems necessary to be conscious, but the converse
> does not hold - that to be conscious it is necessary to be intelligent.
> I would suggest that the former relationship is not a necessary one
> either - it just so happens that we are both conscious and (usually)
> intelligent.
It would seem that if you believe that "to use intelligence...it seems
necessary to be conscious" then that amounts to agreeing that Nagel's
paper on consciousness is "relevant to a discussion of machine
intelligence." It is indisputable that intelligence admits of degrees,
both as a stable trait and as a fluctuating state. What is at issue in
discussions of the turing test is not the proposition that consciousness
is the same as intelligence. Rather, it is whether a candidate has
intelligence at all. It seems that consciousness in man is a sufficient
condition for being intelligent (i.e., for exhibiting performance that is
validly described as "intelligent" in the same way we would apply that
term to our own performance). Whether consciousness is a necessary
condition for intelligence is probably undecidable, and goes to the
heart of the mind/body problem and its attendant uncertainties.
The converse proposition -- that intelligence is a necessary condition for
consciousness -- is synonymous with the proposition that consciousness is
a sufficient condition for intelligence, and this is indeed being
claimed (e.g., by me). The argument runs like this: The issue in
turing-testing is sorting out intelligent performance from its unintelligent
look-alikes. As a completely representative example, consider my asking
you how much 2 + 2 is, and your replying "4" -- as compared to my writing
a computer program whose only function is to put out the symbol "4" whenever
it encounters the string of symbols "How much is 2 + 2?" (this is basically
Searle's point too). There you have it all in microcosm. If the word
"intelligence" has any meaning at all, over and above displaying ANY
arbitrary performance at all (including a rock sliding down a hill, or,
for that matter, a rock NOT sliding down a hill), then we need a principled
way of distinguishing these two cases. That's what the Total Turing
Test I've proposed is meant to do; it amounts to equating
intelligence with total performance capacities indistinguishable from
our own. This also coincides with our only basis for inferring that
anyone else but ourselves has a mind (i.e., is conscious).
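The "unintelligent look-alike" program Harnad describes can be made concrete. The sketch below (not from the digest; the function name and behavior on unmatched input are illustrative assumptions) implements exactly the one-trick responder of his example: it matches a human's answer on a single memorized string and has no capacity beyond it, which is why a principled test must probe total performance rather than isolated responses.

```python
def lookalike(question: str) -> str:
    """Emit "4" for the one memorized question; do nothing else.

    This mimics a human's answer on exactly one input, with no
    understanding and no capacity beyond the single stored pair.
    """
    if question == "How much is 2 + 2?":
        return "4"
    return ""  # any other input: no response at all


print(lookalike("How much is 2 + 2?"))   # prints: 4
print(repr(lookalike("How much is 3 + 3?")))  # prints: ''
```

On its one rehearsed input the program is indistinguishable from an intelligent answerer; on every other input the pretense collapses, which is the asymmetry the Total Turing Test is designed to expose.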
There is no contradiction between agreeing that intelligence admits
of degrees and that mind is all-or-none. The Total Turing Test does
not demand the performance capacity of Newton or Bach, only that of an
(undistinguished) person indistinguishable from any other person one might
know for a lifetime. Moreover, the Total Turing Test admits of
variants for other species, although this involves problems of ecological
knowledge and intuitions that humans may lack for any other species but
their own. It even admits of pathological variants of our own species
(retardation, schizophrenia, aphasia, paralysis, coma, etc. as discussed
in other iterations of this discussion, e.g., with J. Cugini) although
here too intuitions and validity probably break down.
> Animals probably are conscious without being intelligent. Machines may
> perhaps be intelligent without being conscious. If these are defined
> separately, the problem of the intelligent machine becomes relatively
> trivial (though that may seem too good to be true): an intelligent
> machine is capable of doing that which would require intelligence in
> a person, eg high level chess.
Not too good to be true: Too easy. And it would fail to capture
almost all of our relevant pretheoretic generalizations or intuitions.
Animals ARE intelligent (in addition to being conscious), although, as usual,
their intelligence admits of degrees, and can only be validly assessed
relative to their ecological or adaptive contexts (although even
relative to our own ecology, many other species display some degree of
intelligence). The machine intelligence problem -- which is the heart
of the matter -- cannot be settled so quickly and easily. Moreover,
the empirical question of what intelligence is cannot be settled by a
definition (remember "2 + 2 = 4" and the rolling stone, above). Many
intelligent people (with minds) can't play high-level chess, but no
machine can currently do EVERYTHING that the least intelligent of
these people can do. That's the burden of the Total Turing Test.
> Nagel views subjectivity as irreducible to objectivity, indeed the
> latter derives from the former, being a corrected and generalised
> version of it. A maximally objective view of the world must admit
> the reality of subjectivity.
Nagel is one of the few thinkers today who doesn't lapse into
arbitrary hand-waving on the issue of consciousness and its
"reducibility" to something else. Nagel's point is that there is
something it's "like" to have experience, i.e., to be conscious, and
that it's only open to the 1st person point of view. It's hence radically
unlike all other "objective" or "intersubjective" phenomena in science
(e.g., meter-readings), which anyone else can verify as being independent of
one's "point of view" (although Nagel correctly reminds us that even
objectivity is parasitic on subjectivity). The upshot of his analysis
is that utopian scientific mind-science (cognitive science?)
-- that future complete theory that will predict and explain it all --
will be essentially "incomplete" in a way that utopian physics will not be:
Both will successfully predict and explain all their respective observable
(objective) data, but mind-science will be left with something
irreducible, hence unexplained.
For me, this is not a great problem, since I regard the mission of
devising a candidate that can pass the Total Turing Test to be an abundantly
profound and challenging one, and I regard its potential results -- a
functional explanation of the objective features of the mind -- as
sufficiently desirable and useful, so that the part it will FAIL to
explain does not bother me. That may well forever remain philosophy's
province. But I do keep reminding the overzealous that that utopian
mind science will be turing-indistinguishable from a mindless one. I
keep doing this for two reasons: First, because I believe that this
Nagelian point is correct, and worth keeping in mind. And second, because
I believe that attempts to capture or incorporate consciousness in cognitive
science more "directly" are utterly misguided, and lead in the direction of
highly subjective over-interpretations, hermeneutics and self-delusion,
instead of down the only objective scientific road to be traveled: modeling
lifesize performance capacity (i.e., the Total Turing Test). It is for
this reason that I recommend "methodological epiphenomenalism" as a
research strategy in cognitive science.
> So what, really, is consciousness? According to Nagel, a thing is
> conscious if and only if it is like something to be that thing.
> In other words, when it may be the subject (not the object!) of
> intersubjectivity. This accords with Minsky (via Col. Sicherman):
> 'consciousness is an illusion to itself but a genuine and observable
> phenomenon to an outside observer...' Consciousness is not
> self-consciousness, not consciousness of being conscious, as some
> have thought, but is that with which others can identify. This opens
> the way to self-awareness through a hall of mirrors effect - I
> identify with you identifying with me... And in the negative mode
> - I am self-conscious when I feel that someone is watching me.
The Nagel part is right, but unfortunately all the rest
(Minsky/Sicherman/hall-of-mirrors) has it all wrong, and is precisely
the type of lapse into hermeneutics and euphoria I warned against earlier.
The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's
point. The only aspect of conscious experience that involves direct
observability is the subjective, 1st-person aspect (and the fact THAT I
am having a conscious experience is certainly no illusion since
Descartes at least, although what it tells me about the outside world may be,
at least since Hume). Let's call this private terrain Nagel-land.
The part others "can identify" is Turing-land: Objective, observable
performance (and its structural and functional substrates). Nagel's point
is that Nagel-land is not reducible to Turing-land.
Consciousness is the capacity to have subjective experience (or perhaps
the state of having subjective experience). The rest of the "mirrors"
business is merely metaphor and word-play; such subject matter may make for
entertaining and thought-provoking reading, as in Doug Hofstadter's books,
but it hardly amounts to an objective contribution to cognitive science.
> It may perhaps be supposed that the concept of consciousness evolved
> as part of a social adaptation - that those individuals who were more
> socially integrated, were so at least in part because they identified
> more readily, more intelligently and more imaginatively with others,
> and that this was a successful strategy for survival. To identify with
> others would thus be an innate behavioural trait.
Except that Nagel would no doubt suggest (and I would agree) that
there's no reason to believe that the asocial or minimally social
animals are not conscious too. But apart from that, there's a much
deeper reason why it is probably futile to try to make evolutionary
conjectures about the adaptive function of conscious experience:
According to standard evolutionary theory, the only traits that are
amenable to the kind of trial-and-error selection on the basis of
their consequences for the survival of the organism and propagation of its
genes are (what Nagel would call) OBJECTIVE traits: structure,
function and behavior. Standard evolutionary conjectures about the
putative adaptive function of consciousness are open to precisely the
same objection as the utopian mind-science spoken of earlier:
Evolution is blind to the difference between organisms that are
actually conscious and organisms that merely behave as if they were
conscious. Turing-indistinguishability again. On the other hand, recent
variants of standard evolutionary theory would be compatible with a
NON-selectional origin of consciousness, as an epiphenomenon.
(In pointing out the futility of adaptive scenarios for the origin of
consciousness, I am drawing on my own theoretical failures. I tried
that route in an earlier paper and only later realized that such
"Just-SO" stories suffer from even worse liabilities in speculations
about the evolutionary origins of consciousness than they do in
speculations about the evolutionary origins of behaviors; radically
worse liabilities, for the reason given above. Caveat Emptor.)
> ...When I suppose myself to be conscious, I am imagining myself
> outside myself - taking the point of view of an (hypothetical) other
> person. An individual - man or machine - which has never communicated
> through intersubjectivity might, in a sense, be conscious, but neither
> the individual nor anyone else could ever know it.
I'm afraid you've either gravely misunderstood Nagel or left him far
behind here. When I feel a pain -- when I am in the qualitative state of
knowing what it's like to be feeling a pain -- I am not "supposing"
anything at all. I'm simply feeling pain. If I were not conscious, I
wouldn't be feeling pain, I'd just be acting as if I felt pain. The
same is true of you and of animals. There's nothing social about this.
Nor is "imagination" particularly involved (except perhaps in whatever
external attributions are made to the pain, such as, "there must be something
wrong with my tooth"). Even what is called clinically "imaginary" or
psychosomatic pain -- such as phantom-limb pain or hysterical pain --
is subjectively real, and that's the point: When I'm really feeling
pain, I'm not imagining I'm in pain; I AM in pain.
This is referred to by philosophers as the "incorrigibility" of 1st-person
experience. Although it's not without controversy, it's useful to keep in
mind, because it's what's really at issue in the problem of artificial
minds. We are asking whether candidates have THAT sort of qualitative,
conscious experience. (Again, the "mirror" images about
self-consciousness, etc., are mere icing or fine-tuning, compared to
the more basic issue of whether or not, to put it bluntly, a machine
can actually FEEL pain, or merely ACTS as if it did.)
> Subjectively, we all know that consciousness is real. Objectively,
> we have no reason to believe in it. Because of the relationship
> between subjectivity and objectivity, that position can never be
> improved on. Pragmatism demands a compromise between the two
> extremes, and that is what we already do, every day, the proportion
> of each component varying from one context to another. But the
> high-flown theoretical issue of whether a machine can ever be
> conscious allows no mere pragmatism. All we can say is that we do
> not know, and, if we follow Nagel, that we cannot know - because the
> question is meaningless.
Some crucial corrections that may set the whole matter in a rather different
light: Subjectively (and I would say objectively too), we all know that
OUR OWN consciousness is real. Objectively, we have no way of knowing
that anyone else's consciousness is real. Because of the relationship
between subjectivity and objectivity, direct knowledge of the kind we
have in our own case is impossible in any other. The pragmatic
compromise we practice every day with one another is called the Total
Turing Test: Ascertaining that others behave indistinguishably from our
paradigmatic model for a creature with consciousness: ourselves. We
were bound to come face-to-face with the "high-flown theoretical
issue" of artificial consciousness as soon as we went beyond everyday naive
pragmatic considerations and took on the burden of constructing a
predictive and explanatory causal theory of mind.
We cannot know directly whether any other organism OR device has a mind,
and, if we follow Nagel, our inferences are not meaningless, but in some
respects incomplete and undecidable.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
End of AIList Digest
********************
∂26-Nov-86 0131 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #268
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 26 Nov 86 01:30:18 PST
Date: Tue 25 Nov 1986 20:45-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #268
To: AIList@SRI-STRIPE.ARPA
AIList Digest Wednesday, 26 Nov 1986 Volume 4 : Issue 268
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 22 Nov 86 12:13:02 GMT
From: mcvax!lambert@seismo.css.gov (Lambert Meertens)
Subject: Re: Searle, Turing, Symbols, Categories
In article <229@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
> I know directly that my
> performance is caused by my mind, and I infer that my
> mind is caused by my brain. I'll go even further (now that we're
> steeped in phenomenology): It is part of my EXPERIENCE of my behavior
> that it is caused by my mind. [I happen to believe (inferentially) that
> "free will" is an illusion, but I admit it's a phenomenological fact
> that free will sure doesn't FEEL like an illusion.] We do not experience our
> performance in the passive way that we experience sensory input. We
> experience it AS something we (our minds) are CAUSING. (In fact, that's
> probably the source of our intuitions about what causation IS. I'll
> return to this later.)
I hope I am not suffering from a terrible disease like incipient
schizophrenia, but for me it is not the case that I perceive/experience/
am-directly-aware-of my performance being caused by anything. It just
happens. I have some indirect evidence that there is some relation between
the performance I can watch happening and some sensations (such as anxiety
or happiness) that I can somehow experience directly whereas others have no
such direct access and can only infer the presence or absence of these
sensations within me by circumstantial evidence.
How do I know I have a mind? This reminds me of the question put to a
priest (teaching religion) by one of the pupils: "Father, how do we know
that people have a soul?" "Well," said the priest, "here I have a card in
memory of Klaas de Vries. Look, here it says: `Pray for the soul of Klaas
de Vries.' They wouldn't put that there if people had no souls, would
they?" There is something funny with this debate: it is hardly
translatable into Dutch. The problem is that if you look up "mind" in an
English-Dutch dictionary, some eight translations are suggested, none of
which has "mind" as its primary meaning if translated back to English,
except for idiomatic reasons (like in: "So many men, so many minds").
Instead, we find (1) memory; (2) meaning; (3) thoughts; (4) ghost; (5)
soul; (6) understanding; (7) attention; (8) desire. Of these, I contend,
"ghost" and "soul" are closest in meaning if someone says: "I know I have
mind. But how can I know that other people have minds?"
OK, if you substitute "consciousness" for "mind", then this does no
essential harm to the debate and things become translatable to Dutch. What
you gain is that you lose the suggestion evoked (at least to me) by the
word "mind" that it is something perhaps not quite, but almost, tangible,
something that you could lock up in a box, or cut in three, or take a
picture of with a camera using aura-sensitive film. "Consciousness" is
more like "appetite": you can have it and you can lose it, but even though
it is functionally related to bodily organs, you normally don't think of it
as something located somewhere. Does our appetite cause our eating? ("My
appetite made me eat too much.") How can we know for sure that other
people have appetites as well? I propose to consider the question, "Can
machines have an appetite?"
Now why is consciousness "real", if free will is an illusion? Or, rather,
why should the thesis that consciousness is "real" be more compelling than
the analogous thesis for free will? In either case, the essential argument
is: "Because I [the proponent of that thesis] have direct, immediate,
evidence of it." Sometimes we are conscious of certain sensations. Do
these sensations disappear if we are not conscious of them? Or do they go
on on a subconscious level? That is like the question of whether a falling tree in
the middle of a forest makes a sound in the absence of creatures capable of
hearing. That is a matter of the most useful (convenient) definition. Let
us agree that the sensations continue at least if it can be shown that the
person involved keeps behaving as if the concomitant sensations continued,
even though professing in retrospection not to have been aware of them. So
people can be afraid without realizing it, say, or drive a car without
being conscious of the traffic lights (and still halt for a red light).
How can you know that you have been conscious of something that you reacted
upon? You stopped in front of a red light (or so others tell you) while
involved in a heated argument. You have no remembrance whatsoever of that
light being red, or of your slowing down (or of having been at that
intersection at all). Maybe your attention was so completely focussed on
the argument that the reaction to the traffic light was fully automatic.
Now someone tells you: No, it wasn't automatic. You muttered something
unfriendly about that other car driver who made as if he was going to drive
on and then suddenly braked. And now, zzzap!, the whole episode pops up in
your mind. You remember that car, the intersection, the traffic light, its
jumping to red, the slight annoyance at not making it, and the anger about
that *@#$%!!! other driver whose car you almost crashed into.
Maybe everything is conscious. Maybe stones are conscious of lying on the
ground, being kicked against, being picked up. Their problem is, they can
hardly tell us. The other problem is, they have no memory (lacking an
appropriate substrate for storing a trace of these experiences). They are
like us with that traffic light, if there hadn't been that other car with
that idiot driver. Even if we experience something consciously, if we
lose all remembrance of it, there is no way in which we can tell for sure
that there was a conscious experience. Maybe we can infer consciousness by
an indirect argument, but that doesn't count. Indirect evidence can be
pretty strong, but it can never give certainty. Barring false memories, we
can only be sure if we remember the experience itself. Now maybe
everything we experience is stored in memory. It may be that we cannot
recall it like that, but using special techniques (hypnosis, electro-
stimulation, mnemonic drugs) it could be retrieved. On the other hand, it
is more plausible that not quite everything is stored in memory, since that
would require a tremendous channel width for storing things, which is not
really functional, or, at least, there are presumably better trade-offs in
terms of survival capability given a limited brain capacity.
If some things we experience do not leave a recallable trace, then why
should we say that they were experienced consciously? Or, why shouldn't we
maintain the position that stones are conscious as well? That position is
maintainable, but it is not very useful in the sense that the word
"consciousness" loses its meaning; it becomes coextensive with
"existence". We "lose" our bicameral minds, Freud, and all that jazz.
More useful, then, to use "consciousness" only for experiences that are,
somehow, recallable. It makes sense that not all, not most of, but some of
the things that go on in our heads are stored away: in order to be used for
determining patterns, for better evaluation of the expected outcome of
alternatives, for collecting material that is useful for the construction
or refinement of the model we have of the outside world, and so on.
Being the kind of animal homo is, it also makes sense to store material
that is useful for the refinement of the model we have of our inside world,
that which we think of as "ourselves". After all, we consult that model to
pre-evaluate the outcome of certain alternatives. If we don't "know"
ourselves, we are bound to do things (take on a responsibility, marry
someone, etc., things with a long-term commitment) that will lead us unto
suffering. (We do these things anyway, and one of the causes is that we
don't know ourselves that well.) So a lot of the things that go on "in the
front of our minds" are stored away, and are recallable. And it is only
because of this recallability that we can say that these things were "in
the front of our minds", or "in our minds" at all.
Imagine now a machine programmed to "eat" and also to keep up some dinner
conversation. It has some rules built-in about etiquette like that it is
impolite to eat too much, but also some parameter varying in time to model
"hunger", and a rule IF hunger THEN eat. It just happens that the machine
is very, very hungry. There is a conflict here, but fortunately our
machine is equipped with a conflict-resolution module (CRM) that uses fuzzy
logic to get an outcome for conflicting rules. The outcome here is that
the machine eats more than is polite. The dinner-conversation module (DCM)
has no direct interface with the CRM, but it is supplied with the resultant
behaviour as part of its input data and so it concludes (using the rule
base) that it is not behaving too politely. Speaking anthropomorphically,
we would say that the machine is feeling uneasy about it. Actually, a flag
"uneasiness" is raised, and the DCM is programmed to do something about it.
Using the rule base, the DCM finds a rule that tells it that uneasiness
about being impolite can be reduced by apologizing about it. The apology
submodule (ASM) is invoked, which discovers that a casual apology will do
in this case, one form of which is just to state an appropriate cause for
the inappropriate behaviour. The rule base tells ASM that PROBABLE CAUSE
OF eat IS appetite, (next to tape-worms, but these are measured as less
appropriate under the circumstances), so "<<SELF, having, appetite>;
<goodness, 0.6785>>" is passed back to DCM, which, after invoking
appropriate syntactic transformations, utters the unforgettable words:
"Boy, do I have an appetite today."
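Meertens's dinner machine can be sketched in code. This is a toy
illustration only: he gives no implementation, and all the module
interfaces, thresholds, and rule weights below are invented for the sketch
(only the "goodness 0.6785" figure and the final utterance come from his
description):

```python
# Toy sketch of Meertens's hypothetical dinner machine.
# All numeric values except the 0.6785 "goodness" weight are invented.

def crm(hunger: float, etiquette_weight: float) -> str:
    """Conflict-resolution module: weighs the rule IF hunger THEN eat
    against the etiquette rule that it is impolite to eat too much.
    A crude fuzzy-style resolution: the stronger degree of support wins."""
    return "eats too much" if hunger > etiquette_weight else "eats politely"

def asm() -> str:
    """Apology submodule: states an appropriate cause for the behaviour.
    The rule base says PROBABLE CAUSE OF eat IS appetite (goodness 0.6785),
    tape-worms being judged less appropriate under the circumstances."""
    return "Boy, do I have an appetite today."

def dcm(observed_behaviour: str) -> str:
    """Dinner-conversation module: has no direct interface with the CRM;
    it only sees the resulting behaviour as input data and concludes from
    the rule base that the machine is being impolite (the 'uneasiness'
    flag), which it reduces by invoking the apology submodule."""
    uneasiness = observed_behaviour == "eats too much"
    return asm() if uneasiness else "Lovely dinner, isn't it?"

# The machine is very, very hungry:
behaviour = crm(hunger=0.9, etiquette_weight=0.5)
print(dcm(behaviour))  # -> "Boy, do I have an appetite today."
```

The point of the sketch is structural: nothing in the program "knows" it
has an appetite; the self-attribution is produced by one module observing
the output of another.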
How different are we from that machine? If we keep wolfing down food at a
dinner, knowing that we are misbehaving (or just substitute any behaviour
that you are prone to and that you realize is just not quite right--come
on, there must be something), is the choice made the result of a conscious
process? I think it is not. I have no reason to think it is. Even if we
ponder a question consciously ("Whether 'tis nobler in the mind to suffer
..."), I think the outcome is not the result of the conscious process, but,
rather, that the consciousness is a side-effect of the conflict-resolution
process going on. I think the same can be said about all "conscious"
processes. The process is there, anyway; it could (in principle) take
place without leaving a trace in memory, but for functional reasons it does
leave such a trace. And the word we use for these cognitive processes that
we can recall as having taken place is "conscious".
We can as it were instantly focus our attention on things that we are not
conscious of most of the time (the sensation of sitting on a chair, the
colour of the sky). This means merely that we can influence which part of
the processes going on all the time get the preferential treatment of being
stored away for future reference. The ability to do so is clearly
functional, notwithstanding the fact that we can make a non-functional use
of it. This is not different from the fact that it is functional that I
can raise my arm by "willing" it to rise, although I can use that ability
to raise it gratuitously. If the free will here is an illusion (which I
think is primarily a matter of how you choose to define something as
elusive as "free will"), then so is the free will to direct your attention
now to this, then to that. Rather than to say that free will is an
"illusion", we might say that it is something that features in the model
people have about "themselves". Similarly, I think it is better to say
that consciousness is not so much an illusion, but rather something to be
found in that model. A relatively recent acquisition of that model is
known as the "subconscious". A quite recent addition are "programs",
"sub-programs", "wrong wiring", etc.
A sufficiently "intelligent" machine, able to pass not only the dinner-
conversation test but also a sophisticated Turing test, must have a model
of itself. Using that model, and observing its own behaviour (including
"internal" behaviour!), it will be led to conclude not only that it has an
appetite, but also volition and awareness, and it will probably attribute
some of its darker sides (about which it comes to conclude that it feels
guilt, from which it deduces that it has a conscience) to lack of affection
in childhood or "wrong wiring". Is it mistaken then? Is the machine taken
in by an illusion?
I propose to consider the question, "Can machines have illusions?"
--
Lambert Meertens, CWI, Amsterdam; lambert@mcvax.UUCP
------------------------------
Date: 21 Nov 86 19:08:02 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories: Reply to Cugini (2)
[Part I. See the next digest for the conclusion. -- KIL]
On mod.ai <8611200632.AA19202@ucbvax.Berkeley.EDU> "CUGINI, JOHN"
<cugini@nbs-vms.ARPA> wrote:
> I know I have a mind. In order to determine if X
[i.e., anyone else but myself]
> has a mind I've got to look for analogous
> external things about X which I know are causally connected with mind
> in *my own* case. I naively know (and *how* do I know this??) that large
> parts of my performance are an effect of my mind. I scientifically
> know that my mind depends on my brain. I can know this latter
> correlation even *without* performance correlates, eg, when the dentist
> puts me under, I can directly experience my own loss of mind which
> results from loss of whatever brain activity. (I hope it goes
> without saying that all this knowledge is just regular old
> reliable knowledge, but not necessarily certain - ie I am not
> trying to respond to radical skepticism about our everyday and
> scientific knowledge, the invocation of deceptive dentists, etc.)
These questions and reflections are astute ones, and very relevant to
the issues under discussion. It is a matter of some ancillary interest
that the people who seem to be keeping their heads more successfully
in the debates about artificial intelligence and (shall we call it)
"artificial consciousness" are the more sceptical ones, as you reveal
yourself to be at the end of this module. The zealous advocates, on
the other hand, seem to be more prone to flights of
over-interpretative fancy, leaving critical judgment by the wayside.
(This is not to say that some of the more dogged critics haven't waxed
irrational in their turn too.)
Now on to the substance of your criticism. I think the crucial points
will turn on the difference between what you call "naively know" and
"scientifically know." It will also involve (like it or not) the issue
of radical scepticism, uncertainty and the intersubjectivity and validity of
inferences and correlations. Now, I am neither an expert in, nor an advocate
of, phenomenological introspection, but if you will indulge me and do
a little of it here, I think you will notice that there is something very
different about "naive knowing" as compared to "scientific knowing."
Scientific knowing is indirect and inferential. It is based on
inference to the best explanation, the weight of the evidence, probability,
Popperian (testability, falsifiability) considerations, etc. It is the
paradigm for all empirical inquiry, and it is open to a kind of
radical scepticism (scepticism about induction) that we all reasonably
agree not to worry about, except insofar as noting that scientific
"knowledge" is not certain, but only highly likely on the evidence,
and is always in principle open to inductive "risk" or falsification
by future evidence. This is normal science, and if that were all there
was to the special case of the mind/body problem (or, more perspicuously,
the other-minds problem) then a lot of the matters we are discussing
here could be settled much more easily.
What you call "naive knowing," on the other hand (and about which you
ask "*how* do I know this?") is the special preserve of 1st-hand,
1st-person subjective experience. It is "privileged" (no one has
access to it but me), direct (I do not INFER from evidence that I am
in pain, I know it directly), and it has been described as
"incorrigible" (can I be wrong that I am feeling pain?). The
inferences we make (about the outside world, about inductive
regularities, about other minds) are open to radical scepticism, but
the phenomenological content of 1st-hand experience is different. This
makes "naive knowing" radically different from "scientific knowing."
(Let me add a quick parenthetical remark, but not pursue it unless
someone brings it up: Even our inferential knowledge depends on our
capacity for phenomenological experience. Put another way: we must
have direct experience in order to make indirect inferences, otherwise
the inferences would have no content, whether right or wrong. I
conjecture that this is significantly connected with what I've called
the "grounding" problem that lies at the root of this discussion. It
is also related to Locke's (inchoate) distinction between primary and
secondary qualities, turning his distinction on its head.)
Now let's go on. You say that I "naively know" that my performance
is caused by my mind and I "scientifically know" that my mind is caused
by my brain. (Let's not quibble about "cause"; the other words, such
as "determined by," "a function of," "supervenient on," or Searle's
notorious "caused-by-and-realized-in" are just vague ways of trying to
finesse a problematic and unique relationship otherwise known as the
mind/body problem. Let's just bite the bullet with "cause" and see
where that gets us.) Let me translate that: I know directly that my
performance is caused by my mind, and I infer that my
mind is caused by my brain. I'll go even further (now that we're
steeped in phenomenology): It is part of my EXPERIENCE of my behavior
that it is caused by my mind. [I happen to believe (inferentially) that
"free will" is an illusion, but I admit it's a phenomenological fact
that free will sure doesn't FEEL like an illusion.] We do not experience our
performance in the passive way that we experience sensory input. We
experience it AS something we (our minds) are CAUSING. (In fact, that's
probably the source of our intuitions about what causation IS. I'll
return to this later.)
So there is a very big difference between my direct knowledge that my
mind causes my behavior and my inference (say, in the dentist's chair)
that my brain causes my mind. [Even my rational inference (at the
metalevel) that my mind doesn't really cause my behavior, that that's
just an illusion, leaves the incorrigible phenomenological fact that I
know directly that that's not the way it FEELS.] So, to put it briefly,
what I've called the "informal component" of the Total Turing Test --
does the candidate act as if it had a mind (i.e., roughly as I would)? --
appeals to precisely those intuitions, and not the inferential kind, about
brains, etc. Note, however, that I'm not claiming we have direct
knowledge of other minds. That's just an inference. But it's not the
same kind of inference as the inference that there are, say, quarks, or
cosmic strings. We are appealing, in the informal TTT, to our
intuitions about subjectivity, not to ordinary, objective scientific
evidence (such as brain-correlates).
As a consequence (and again I invite you to do some introspection), the
intuitive force of the direct knowledge that I have (or am) a mind, and
that that causes my behavior, is of an entirely different order from my
empirical inference that I have a brain and that that causes my mind.
Consider, for example, that there are plenty of people who doubt that
their brains are the true causes of their minds, but very few (like
me) who venture to doubt that their minds cause their behavior; and I
confess that I am not very successful in convincing myself, because my
direct experience keeps contradicting my inference, incorrigibly.
In summary: There is a vast difference between knowing causes directly and
inferring them; subjective phenomena are unique and radically different from
other phenomena in that they confer this direct certainty; and
inferences about other minds (i.e., about subjective phenomena in
others) are parasitic on these direct experiences of causation, rather
than on ordinary causal inference, which carries little or no
intuitive force in the case of mental phenomena, in ourselves or
others. And rightly not, because mind is a private, direct, subjective
matter, not something that can be ascertained -- even in the normal
inductive sense -- by public, indirect, objective correlations.
[To be continued ...]
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
End of AIList Digest
********************
∂26-Nov-86 0358 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #269
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 26 Nov 86 03:58:35 PST
Date: Tue 25 Nov 1986 20:49-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #269
To: AIList@SRI-STRIPE.ARPA
AIList Digest Wednesday, 26 Nov 1986 Volume 4 : Issue 269
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 21 Nov 86 19:08:02 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories: Reply to Cugini (2)
[Part II]
If you want some reasons why the mind/body case is so radically
different from ordinary causal inference in science, here are two:
(1) Generalizations about correlates of having a mind
are, because of the peculiar nature of subjective, 1st-person
experience, always doomed to be based on an N = 1. We can have
intersubjective agreement about a meter-reading, but not about a
subject's experience. This already puts mind-science in a class by
itself. (One can even argue that the intersubjective agreement on
"objective" meter readings is itself parasitic on, or grounded in,
some turing-equivalence assumptions about other people's reports of
their experiences -- of meter readings!)
But, still more important and revealing: (2) Consider ordinary scientific
inferences about "unobservables," say, about quarks (if they should continue
to play an inferred causal role in the future, utopian, "complete"
explanatory/predictive theory in physics): Were you to subtract this
inferred entity from the (complete) theory, the theory would lose its
capacity to account for all the (objective) data. That's the only
reason we infer unobservables in the first place, in ordinary
science: to help predict and causally explain all the observables.
A complete, utopian scientific theory of the "mind," in radical
contrast with this, will always be just as capable of accounting
for all the (objective) data (i.e., all the observable data on what
organisms and brains do) WITH or WITHOUT positing the existence of mind(s)!
In other words, the complete explanatory/predictive theory of organisms
(and devices) WITH minds will be turing-indistinguishable from the
complete explanatory/predictive theory of organisms (and devices)
WITHOUT minds, that simply behave in every observable way AS IF they
had minds.
That kind of inferential indeterminacy is a lot more serious than the
underdetermination of ordinary scientific inferences about
unobservables like quarks, gravitons or strings. And I believe that this
amounts to a demonstration that all ordinary inferential bets (about
brain-correlates, etc.) are off when it comes to the mind.
The mind (subjectivity, consciousness, the capacity to have
qualitative experience) is NEITHER an ordinary, intersubjectively
verifiable objectively observable datum, as in normal science, NOR is
it an ordinary unobservable inferred entity, forced upon us so that we
can give a successful explanatory/predictive account of the objective
data.
Yet the mind is undoubtedly real. We know that, noninferentially, for
one case: our own. It is to THAT direct knowledge that the informal component
of the TTT appeals, and ONLY to that knowledge. Any further indirect
inferences, based on, say, correlations, depend ultimately for their
validation only on that direct knowledge, and are always secondary to
it, in that split inferences are always settled by an appeal to the
TTT criterion, not vice versa (or some third thing), as I shall try to
show below.
(The formal component of the TTT, on the other hand [i.e., the formal
computer-testing of a theory that purports to generate all of our
performance capacities], IS just a case of ordinary scientific
inference; here it is an empirical question whether brain correlates
will be helpful in guiding theory-construction. I happen to
doubt they will be helpful even there; not, at least until we
get much closer to TTT utopia, when we've all but captured
total performance capacity, and the fine-tuning [errors, reaction
times, response style, etc.] may begin to matter. There, as I've
suggested, the boundary between organism-performance and
brain-performance may break down somewhat, and microfunctional and
structural considerations may become relevant to the success and
verisimilitude of the performance modeling itself.)
> Now then, armed with the reasonably reliable knowledge that in my own
> case, my brain is a cause of my mind, and my mind is a cause of my
> performance, I can try to draw appropriate conclusions about others.
As I've tried to argue, these two types of knowledge are so different
as to be virtually incommensurable. In particular, your knowledge that
your mind causes your performance is direct and incorrigible, whereas
your knowledge that your brain causes your mind is indirect,
inferential, and parasitic on the former. Inferences about other minds
are NOT ordinary cases of scientific inference. The mind/body case is
special.
> X3 has brains, but little/no performance - eg a case of severe
> retardation. Well, there doesn't seem much reason to believe that
> X has intelligence, and so is disqualified from having mind, given
> our definition. However, it is still reasonable to believe that
> X3 might have consciousness, eg can feel pain, see colors, etc.
For the time being, intelligence is as mind does. X3 may not be VERY
intelligent, but if he has any mind-like performance capacity (to pass
some variant of the TTT for some organism or other -- a tricky issue),
that amounts to having some intelligence. As discussed in another
module, intelligence may be a matter of degree, but having a mind
seems to be an all-or-none matter. Also, having a mind seems to be a
sufficient condition for having intelligence; if it's not also a
necessary condition, we have the radical indeterminacy I mentioned
earlier, and we're in trouble.
So the case of severe retardation seems to represent no problem.
Retarded people pass (some variant of) the TTT, and we have no trouble
assigning them minds. This is fine as long as they have some (shall we
call it "intelligible") performance capacity, and hence some
intelligence. Comatose people are another matter. But they may well
not have minds. (I might add that our inclination to assign a mind to
a person who is so retarded that his performance capacity is reduced
to vegetative functions such as blinking, breathing and swallowing,
could conceivably be an overgeneralization, motivated by considerations
of biological origins and humanitarian concerns.) I repeat, though,
that these special cases belong more to the domain of near-utopia
fine-tuning than the basic issue of whether it is performance or brain
correlates that should guide us in inferring minds in others. Certainly
neither TTT-enthusiasts nor brain-enthusiasts have any grounds for
feeling confident about their judgments in such ambiguous cases.
> X4 has normal human cognitive performance, but no brains, eg the
> ultimate AI system. Well, no doubt X4 has intelligence, but the issue
> is whether X4 has consciousness. This seems far from obvious to me,
> since I know in my own case that brain causes consciousness causes
> performance. But I already know, in the case of X4, that the causal
> chain starts out at a different place (non-brain), even if it ends up
> in the same place (intelligent performance). So I can certainly
> question (rationally) whether it gets to performance "via
> consciousness" or not.
> If this seems too contentious, ask yourself: given a choice between
> destroying X3 or X4, is it really obvious that the more moral choice
> is to destroy X3?
I don't think the moral choice is obvious in either case. However, I
don't think you're imagining this case sufficiently vividly. Let's make
it the one I proposed: A lifelong friend turns out to be a robot, versus
a human born (irremediably) with only vegetative function. These issues
are for the right-to-lifers; the alternatives imposed on us are too
hypothetical and artificial (akin to having to choose between saving
one's mother or father). But I think it's fairly clear which way I'd
go here. And what we know (or don't know) about brains has very little
to do with it.
> Finally, a gedanken experiment (if ever there was one) - suppose
> (a la sci-fi stories) they opened you up and showed you that you
> really didn't have a brain after all, that you really did have
> electronic circuits - and suppose it transpired that while most
> humans had brains, a few, like yourself, had electronics. Now,
> never doubting your own consciousness, if you *really* found that
> out, would you not then (rationally) be a lot more inclined to
> attribute consciousness to electronic entities (after all you know
> what it feels like to be one of them) than to brained entities (who
> knows what, if anything, it feels like to be one of them?)?
> Even given *no* difference in performance between the two sub-types?
> Showing that "similarity to one's own internal make-up" is always
> going to be a valid criterion for consciousness, independent of
> performance.
Frankly, although it might disturb me for other reasons, I think that
discovering I had complex, ill-understood electronic circuits inside my
head instead of complex, ill-understood biochemical ones would not
sway me one way or the other on the basic proposition that it is
performance alone that is responsible for my inferring minds in other
people, not my (or anyone else's) dim knowledge about their inner
structure or function. I agreed in an earlier module, though, that
such a demonstration would be a bit of a blow to the sceptics about robots
(which I am not) if they discovered THEMSELVES to be robots. On the
other hand, it wouldn't move an outside sceptic one bit. For example,
*you* would presumably be uninfluenced in your convictions about the
relevance of brain-correlates over and above performance if *I* turned
out to be X4. And that's just the point! Like it or not, the
1st-person stance retains center stage in the mind/body problem.
> I make this latter point to show that I am a brain-chauvinist *only
> insofar* as I know/believe that I *myself* am a brained entity (and
> that my brain is what causes my consciousness). This really
> doesn't depend on my own observation of my own performance at all -
> I'd still know I had a mind even if I never did any (external) thing
> clever.
Yes. But the problem for *you* is whether *I* (or some other candidate)
have a mind, not whether *you* do. Moreover, no one suggested that the
turing test was the basis for knowing one has a mind in the 1st person
case. That problem is probably closer to the Cartesian Cogito, solved
directly and incorrigibly. The other-minds problem is the one we're
concerned with here.
Perhaps I should emphasize that in the two "correlations" we are
talking about -- performance/mind and brain/mind -- the basis for the
causal inference is radically different. The causal connection between
my mind and my performance is something I know directly from being the
performer. There is no corresponding intuition about causation from
being the possessor of my brain. That's just a correlation, depending
for its causal interpretation (if any), on what theory or metatheory I
happen to subscribe to. That's why nothing compelling follows from
being told what my insides are made of.
> To summarize: brainedness is a criterion, not only via the indirect
> path of: others who have intelligent performance also have brains,
> ergo brains are a secondary correlate for mind; but also via the
> much more direct path (which *also* justifies performance as a
> criterion): I have a mind and in my very own case, my mind is
> closely causally connected with brains (and with performance).
I would summarize it differently: In the 1st-person case, I know directly
that my performance is caused by my mind. I infer (from the correlation)
that my brain causes my mind. In the other-minds case I know nothing
directly; however, I am intuitively persuaded by performance similarity.
I have no intuitions about brains, but of course every confirmatory
cue helps; so if you also have a brain, my confidence is increased.
But split the ticket, and I'll go with performance every time. That
makes it seem as if performance is still the decisive criterion, and
brainedness is only a secondary correlate.
Putting it yet another way: We have direct knowledge of the causal
connection between our minds and our performance and only indirect
inferences about the causal connection between our brains and our
minds (and performance). This parasitism is hence present in our
inferences about other minds too.
> I agree that there are some additional epistemological problems,
> [with subjective/objective causation, as opposed to
> objective/objective causation, i.e., with the mind/body problem]
> compared to the usual cases of causation. But these don't seem
> all that daunting, absent radical skepticism.
But "radical" scepticism makes an unavoidable, substantive appearance
in the contemporary scientific incarnation of the other-minds problem:
The problem of robot minds.
> We already know which parts of the brain
> correlate with visual experience, auditory experience, speech
> competence, etc. I hardly wish to understate the difficulty of
> getting a full understanding, but I can't see any problem in
> principle with finding out as much as we want. What may be
> mysterious is that at some level, some constellation of nerve
> firings may "just" cause visual experience, (even as electric
> currents "just" generate magnetic fields.) But we are
> always faced with brute-force correlation at the end of any scientific
> explanation, so this cannot count against brain-explanatory theory of
> mind.
There is not quite as much disagreement here as there may seem. We
agree on (1) the basic mystery in objective/subjective causation -- though I
disagree that it is no more mysterious than objective/objective
causation. Never mind. It's mysterious. I also agree that (2) I would
feel (negligibly) more confident in inferring that a candidate who
passed the TTT had a mind if it had a real brain than if it did not.
(I'd feel even more confident if it was my identical twin.) We agree
that (3) the brain causes the mind, that (4) the brain can be studied,
that (5) there are anatomical and physiological correlations
(objective/subjective), and that (6) these are very probably causal.
Where we may disagree is on the methodology for arriving at a causal theory
of mind. I don't think peeking-and-poking at the brain in search of
correlations is likely to generate a successful causal theory; I think
trial-and-error modeling of performance will, and that it will in fact
guide brain research, suggesting what functions to look for
implementations of, and how they cause performance. What I believe
will fall by the wayside in this brute-force correlative account --
I'm for correlations too, of course, except that I'm for
objective/objective correlations -- is subjectivity itself. For, on
all the observable evidence that will ever be available, the
complete theory of the mind -- whether implemented as a brain or as some
other artificial causal device -- will always be just as true of a
device actually having a mind as of a mindless device merely acting as
if it had a mind. And there will be no way of settling this, short of
actually BEING the device in question (which is no help to the rest of
us). If that's radical scepticism, it's come home to roost, and should
be accepted as a fact of life in mind-science. (I've dubbed this
"methodological epiphenomenalism" in the paper under discussion.)
You may feel more confident in attributing a mind to the
brain-implementation than to a synthetic one (though I can't imagine you'll
have good reasons, since they'll be functionally equivalent in every
observable and ostensibly relevant respect), but that too is a
question we will never be able to settle objectively.
(Let me add, in case it's not apparent, that performances such as
reporting "It hurts now" are perfectly respectable, objective data,
both for the brain-correlation investigator and the mind-modeler. So
whereas we can never investigate subjectivity directly except in our
own case, we can approximate its behavioral manifestations as closely
as the expressive power of introspective reports will allow. What's
not clear is how useful this aspect of performance modeling will be.)
> Well, I plead guilty to diverting the discussion into philosophy, and as
> a practical matter, one's attitude in this dispute will hardly affect
> one's day-to-day work in the AI lab. One of my purposes is a kind of
> pre-emptive strike against a too-grandiose interpretation of the
> results of AI work, particularly with regard to claims about
> consciousness. Given a behavioral definition of intelligence, there
> seems no reason why a machine can't be intelligent. But if "mind"
> implies consciousness, it's a different ball-game, when claiming
> that the machine "has a mind".
I plead no less guilty than you. Neither of us is responsible for the
fact that scepticism looms large in making inferences about other
minds and how they work, which is what cognitive science is about. I
do disagree, though, that these considerations are irrelevant to one's
research strategy. It does matter whether you choose to study the
brain directly, or to model it, or to model performance-equivalent
alternatives. Other issues in this discussion matter too: modeling
toy modules versus the Total Turing Test, symbolic modeling versus
robotic modeling, and the degree of attention focused on modeling
phenomenological reports.
I also agree, of course, about the grandiose over-interpretation of
which AI (and, lately, connectionism too) has been guilty. But in the
papers under discussion I try to propose principled constraints (e.g.,
robotic capacity, groundedness, nonmodularity and the Total Turing
Test) that might restrain such excesses, rather than merely scepticism
about artificial performance. I also try to sort out the empirical
issues from the methodological and metaphysical ones. And, as I've
argued in several iterations, "intelligence" is not just a matter of
definition.
> My as-yet-unarticulated intuition is that, at least for people, the
> grounding-of-symbols problem, to which you are acutely and laudably
> sensitive, inherently involves consciousness, ie at least for us,
> meaning requires consciousness. And so the problem of shoehorning
> "meaning" into a dumb machine at least raises the issue about how
> this can be done without making them conscious (or, alternatively,
> how to go ahead and make them conscious). Hence my interest in your
> program of research.
Thank you for the kind words. One of course hopes that consciousness
will be captured somewhere along the road to Utopia. But my
methodological epiphenomenalism suggests that this may be an undecidable
metaphysical problem, and that, empirically and objectively, total
performance capacity is the most we can know ("scientifically") that
we have captured.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
End of AIList Digest
********************
∂30-Nov-86 1623 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #270
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Nov 86 16:22:52 PST
Date: Sun 30 Nov 1986 14:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #270
To: AIList@SRI-STRIPE.ARPA
AIList Digest Monday, 1 Dec 1986 Volume 4 : Issue 270
Today's Topics:
Course - AI in Design and Manufacturing (MIT),
Seminars - Computational Problems in Equational Theorem Proving (UPenn) &
LISP on a Reduced-Instruction-Set Processor (SU) &
RUM: Reasoning with Uncertainty (CMU) &
Graphical Access to an Expert System (UPenn) &
Disassembly Expert (CMU)
----------------------------------------------------------------------
Date: Sat, 29 Nov 86 14:15:29 EST
From: "Steven A. Swernofsky" <SASW@MX.LCS.MIT.EDU>
Subject: Course - AI in Design and Manufacturing (MIT)
From: Neena Lyall <LYALL at XX.LCS.MIT.EDU>
New Seminar Course Spring 1987
2.996 Advanced Topics in Mechanical Engineering (A); Section 2
ARTIFICIAL INTELLIGENCE IN
DESIGN & MANUFACTURING
Prerequisite: 1.00, 2.10, or 6.001
Units: 2-0-7
Date, time: Wednesday, 1-3 pm
Place: 37-212
Applications of artificial intelligence to selected domains of engineering.
Discussions will focus on the principles, strengths and limitations of existing
techniques as well as present and future applications. Topics of coverage
include: knowledge representation issues and techniques, logic programming,
expert systems, machine learning, and application areas. Format: class
discussions, midterm paper and final project.
For further information, contact Prof. S. H. Kim, x3-2249, Room 35-237.
------------------------------
Date: Thu, 20 Nov 86 23:34 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Computational Problems in Equational Theorem
Proving (UPenn)
CIS COLLOQUIUM
University of Pennsylvania
3pm November 15, 1986
216 Moore School
COMPUTATIONAL PROBLEMS IN EQUATIONAL THEOREM PROVING
Dr. Paliath Narendran
General Electric Research Laboratory
The area of Equational Reasoning has recently gained a lot of attention
and has been found to have applications in such diverse areas as program
synthesis and data base queries. Most of these applications are centered
around using the equations as "rewrite rules" and, in particular, using
the Knuth-Bendix completion procedure to generate a "complete" set of
such rewrite rules. The power of the completion procedure lies in the
fact that once a complete set of rewrite rules is obtained, we also have
a decision procedure for the equational theory. We discuss some of the
main computational problems involved in this area such as unification,
matching and sufficient-completeness testing and outline the decidability
and complexity results.
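As a concrete illustration of the point in the abstract: once a complete (terminating and confluent) set of rewrite rules is in hand, deciding equality in the theory reduces to comparing normal forms. The toy string-rewriting system below is an invented example, not one from the talk; it is complete for a small commutative idempotent monoid on the letters a and b.

```python
# Toy complete rewrite system (illustrative only): sort letters, then
# collapse repeats. Termination and confluence hold for this rule set.
RULES = [("ba", "ab"),  # commute letters into sorted order
         ("aa", "a"),   # idempotence of a
         ("bb", "b")]   # idempotence of b

def normalize(term):
    """Apply RULES until no rule matches; the result is the normal form."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)  # one rewrite step
                changed = True
                break
    return term

def equal_in_theory(s, t):
    # With a complete system, equational equality is normal-form equality:
    # this is the decision procedure the completion procedure buys you.
    return normalize(s) == normalize(t)
```

For example, "abab" and "ba" both normalize to "ab", so they are equal in the theory, while "a" and "b" are not.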
------------------------------
Date: Mon, 24 Nov 86 10:44:46 PST
From: Peter Steenkiste <pas@mojave.stanford.edu>
Subject: Seminar - LISP on a Reduced-Instruction-Set Processor (SU)
Special Seminar: Ph.D. Orals
LISP on a Reduced-Instruction-Set Processor:
Characterization and Optimization
Peter Steenkiste
Computer Systems Laboratory
Department of Electrical Engineering
Stanford University
Abstract
As a result of advances in compiler technology, almost all programs
are written in high-level languages, and the effectiveness of a
computer architecture is determined by its suitability as a compiler
target. This central role of compilers in the use of computers has
led computer architects to study the implementation of high-level
language programs. This thesis presents profiling measurements for
a set of Portable Standard Lisp programs that were executed on the
MIPS-X reduced-instruction-set processor, examining what instructions
LISP uses at the assembly level, and how much time is spent on the
most common primitive LISP operations. This information makes it
possible to determine which operations are time critical and to
evaluate how well architectural features address these operations.
The second part of the thesis will discuss a number of optimizations
for LISP, concentrating on three areas: the implementation of the
tags used for runtime type checking, reducing the cost of procedure
calls, and inter-procedural register allocation. We look at methods
to implement tags, both with and without hardware support, and we
compare the performance of the different implementation strategies.
We show how the procedure call cost can be reduced by inlining small
procedures, and how inlining affects the miss rate in the MIPS-X
on-chip instruction cache. A simple register allocator uses inter-
procedural information to reduce the cost of saving and restoring
registers across procedure calls. We evaluate this register
allocation scheme, and compare its performance with hardware register
windows.
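The tag-checking cost discussed in the abstract can be sketched as follows. The 2-bit low-tag encoding below is a generic illustration of the software-only approach (a mask and a compare per type test), not the actual MIPS-X or PSL scheme.

```python
# Illustrative low-tag scheme: the low 2 bits of a word carry the type.
# Tag values here are assumptions for the sketch, not a real Lisp's layout.
TAG_BITS   = 2
TAG_MASK   = (1 << TAG_BITS) - 1   # 0b11
TAG_FIXNUM = 0b00                  # small integer, value shifted left 2
TAG_CONS   = 0b01                  # pointer to a cons cell

def make_fixnum(n):
    return (n << TAG_BITS) | TAG_FIXNUM

def fixnum_p(word):
    # Without hardware tag support, a type test costs one AND and one
    # compare; this is the overhead the thesis measures.
    return (word & TAG_MASK) == TAG_FIXNUM

def fixnum_value(word):
    return word >> TAG_BITS

def add(x, y):
    # With tag 00, two fixnums can be added without untagging:
    # the tag bits of the sum remain 00.
    if fixnum_p(x) and fixnum_p(y):
        return x + y
    raise TypeError("generic arithmetic would dispatch here")
```

Choosing tag 00 for fixnums is the classic trick that lets addition and subtraction run directly on tagged words.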
Time: Monday, December 8, 1986, 4:15pm
Place: CIS Building, Room 101
Cookies will be served!
------------------------------
Date: 25 November 1986 0948-EST
From: Masaru Tomita@A.CS.CMU.EDU
Subject: Seminar - RUM: Reasoning with Uncertainty (CMU)
RUM: A Layered Architecture for Reasoning with Uncertainty
Piero P. Bonissone
General Electric Corporate Research and Development
P.O. Box 8, K1-5C32A, Schenectady, New York 12301
3:30pm, WeH5409
New reasoning techniques for dealing with uncertainty in Expert Systems
have been embedded in RUM, a Reasoning with Uncertainty Module. RUM is an
integrated software tool based on a frame system (KEE) that is implemented
in an object oriented language. RUM's capabilities are subdivided into
three layers: Representation, Inference, and Control.
The Representation layer is based on frame-like data structures that
capture the uncertainty information used in the inference layer and the
uncertainty meta-information used in the control layer. Linguistic
probabilities are used to describe lower and upper bounds of the certainty
measure attached to a Well Formed Formula (wff). The source and the
conditions under which the information was obtained represent the
non-numerical meta-information.
The Inference layer provides the uncertainty calculi to perform the
intersection, detachment, union, and pooling of the information. Five
uncertainty calculi, based on their underlying Triangular norms (T-norms),
are used in this layer.
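For readers unfamiliar with T-norms: a triangular norm is a conjunction-like operator on [0,1]. The three textbook examples below illustrate the kind of calculi the inference layer selects among; the abstract mentions five, and the specific choices here are standard ones, not necessarily RUM's.

```python
# Three standard triangular norms (illustrative, not RUM's exact five).
def t_min(a, b):          # Zadeh / Goedel T-norm
    return min(a, b)

def t_product(a, b):      # probabilistic T-norm (independence assumption)
    return a * b

def t_lukasiewicz(a, b):  # Lukasiewicz T-norm
    return max(0.0, a + b - 1.0)

def detach(rule_certainty, premise_certainty, tnorm):
    """Detachment (generalized modus ponens): combine a rule's certainty
    with its premise's certainty under the selected calculus."""
    return tnorm(rule_certainty, premise_certainty)
```

The point of a control layer choosing among such calculi is visible even here: detaching a 0.8-certain rule from a 0.5-certain premise yields 0.5, 0.4, or 0.3 depending on the T-norm selected.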
The Control layer uses the meta-information to select the appropriate
calculus for each context and to resolve eventual ignorance or conflict in
the information. This layer enables the programmer to declaratively
express the local (context dependent) meta-knowledge that will substitute for
the global assumptions traditionally used in uncertain reasoning.
RUM has been tested and validated in a sequence of experiments in naval
situation assessment (SA). These experiments consist of determining
report/track correlation, platform location, and platform typing. The
testbed environment for developing these experiments has been provided by
LOTTA, a symbolic simulator implemented in Zetalisp Flavors, the object
oriented language of the Lisp Machine. This simulator maintains
time-varying situations in a multi-player antagonistic game where players
must make decisions in light of uncertain and incomplete data. RUM has
been used to assist one of the LOTTA players to perform the SA task.
- - - - End forwarded message - - - -
------------------------------
Date: Wed, 26 Nov 86 00:10 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Graphical Access to an Expert System (UPenn)
COLLOQUIUM - UNIVERSITY of PENNSYLVANIA
3pm Tuesday, December 2, 1986
Room 216 Moore School
GRAPHICAL ACCESS TO AN EXPERT SYSTEM:
THE EVOLUTION OF THE ONCOCIN PROJECT
Ted Shortliffe
Visiting Professor of Computer and Information Science
University of Pennsylvania
and
Associate Professor of Medicine and Computer Science
Medical Computer Science Group
Knowledge Systems Laboratory
Stanford Medical School
The research goals of Stanford's Medical Computer Science group are directed
both toward the basic science of artificial intelligence and toward the
development of clinically useful consultation tools. Our approach has been
eclectic, drawing on fields such as decision analysis, interactive graphics,
and both qualitative and probabilistic simulation as well as AI. In this
presentation I will discuss ONCOCIN, an advice system designed to suggest
optimal therapy for patients undergoing cancer treatment, as well as to
assist in the data management tasks required to support research treatment
plans (protocols). A prototype version, developed in Interlisp and SAIL
on a DEC-20, was used between May 1981 and May 1985 by oncology faculty and
fellows in the Debbie Probst Oncology Day Care Center at the Stanford
University Medical Center. In recent years, however, we have spent much
of our time reimplementing ONCOCIN to run on Xerox 1100 series workstations
and to take advantage of the graphics environment provided on those
machines. The physician's interface has been redesigned to approximate the
appearance and functionality of the paper forms traditionally used for
recording patient status. The Lisp machine version of ONCOCIN was introduced
for use by Stanford physicians earlier this year.
In response to the need for an improved method for entering and maintaining
the rapidly expanding ONCOCIN protocol knowledge base, we have also developed
a graphical knowledge acquisition environment known as OPAL. This system
allows expert oncologists to directly enter their knowledge of protocol-
directed cancer therapy using graphics-based forms developed in the
Interlisp-D environment. The development of OPAL's graphical interface led
to a new understanding of the natural structure of knowledge in this domain.
ONCOCIN's knowledge representation was accordingly redesigned for the Lisp
machine environment. This has involved adopting an object-centered knowledge
base design which has provided an increase in the speed of the program while
providing more flexible access to system knowledge.
------------------------------
Date: 26 Nov 86 22:29:56 EST
From: Sergio.Sedas@fas.ri.cmu.edu
Subject: Seminar - Disassembly Expert (CMU)
Master's Defense
Name: Sergio W. Sedas
Title:Disassembly Expert
Dept: ECE
Date: Dec. 3, 1986
Time: 2:00 DHA219 Engineering Design
Research Center (Demo)
2:30 DH1102 Chemical Engineering
Conference Room (Presentation)
An important part of a redesign-for-assembly expert is a module which will
autonomously disassemble mechanical objects. Although disassembly is a task
which is easily performed by human beings, it has been a very difficult task
for computers to perform. This paper describes an algorithm which mimics
the human experimental approach to disassemble mechanical assemblies. A
highlight in this approach is the ability to determine when a single part or
a group of parts (subassembly) must be removed.
We have divided the disassembly operation into two basic steps. The first
step selects a part to remove and identifies which parts must be removed
with it because their connections cannot be severed. The second step, incorporated
in a path planner, attempts to remove the subassembly. During removal,
obstacles may be added or excluded from the subassembly.
A second contribution is the use of multiple representations for
problem solving. A number of geometric, connectivity and facilities models
are used simultaneously in both steps of the disassembly.
This algorithm has successfully disassembled a flashlight and a piston. By
modifying the objective we've managed to remove an object from a closed
drawer and a chosen part from within a flashlight assembly.
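The first step described above, pulling in every part reachable through connections that cannot be severed, is essentially a graph traversal over the connectivity model. A minimal sketch of that step (my own toy model, not the system presented):

```python
# Toy model of subassembly identification: starting from a chosen part,
# collect everything attached through unseverable connections.
def subassembly(part, unseverable):
    """unseverable: dict mapping each part to the set of parts whose
    connection to it cannot be severed (assumed symmetric)."""
    group, frontier = {part}, [part]
    while frontier:
        p = frontier.pop()
        for q in unseverable.get(p, ()):
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group
```

In a flashlight-like model where the bulb is crimped to the reflector, choosing the bulb forces the reflector into the subassembly, while a free cap comes out alone.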
------------------------------
End of AIList Digest
********************
∂30-Nov-86 1803 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #271
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Nov 86 18:03:32 PST
Date: Sun 30 Nov 1986 14:14-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #271
To: AIList@SRI-STRIPE.ARPA
AIList Digest Monday, 1 Dec 1986 Volume 4 : Issue 271
Today's Topics:
Queries - Conferences in D.C. Area &
Rutgers Seminar on Office Systems &
Good AI Games That Learn,
Administrivia - Proposed Split of this Group,
Ethics - AI and the Arms Race,
Mathematics - P = NP
----------------------------------------------------------------------
Date: 26 NOV 86 18:43-EST
From: WLMILLIO%GALLUD.BITNET@WISCVM.WISC.EDU
Subject: Anything in D.C. area?
All of the conferences that are advertised seem to be very, very far away
from the Washington, D.C. area. Since I, and I'm sure there are others,
fit the stereotype of the starving student, I would be interested in knowing
if there are any conferences or seminars being offered in the Washington
area, either by some of the Universities here, or by corporations or
professional organizations in the area.
"Acceptable" seminars or conferences really mean anything related to the
field of AI, from expert systems design, to smart programming, to you name
it.
Please respond if you are aware of anything being planned between now
and January of 88.
William Millios
Gallaudet University
Washington, D.C.
WLMILLIOS@GALLUA.BITNET
------------------------------
Date: 0 0 00:00:00 EST
From: "Robert Breaux" <breaux@ntsc-74>
Reply-to: "Robert Breaux" <breaux@ntsc-74>
Subject: RUTGERS SEMINAR ON OFFICE SYSTEMS
I am interested in the topic of Professor Bruce Croft's 10:00 am 20 Nov
talk on Planning and Plan Recognition in Office Systems, although I was
unable to make it. Will there be any subsequent written document
that I could receive?
Robert Breaux
Naval Training Systems Center
Orlando, FL 32813-7100
(305) 646-5529
------------------------------
Date: 28 Nov 86 19:18:18 GMT
From: uwslh!lishka@rsch.wisc.edu (a)
Subject: Good AI games which learn.
I am an undergraduate interested in games which learn. Now, I know
that there are a LOT of AI games out there...however, I am looking for source
code, or references to where I can get source code,
on games which actually "learn" (in some sense of the word
...I know I'm being sort of vague, but I'm not sure of what is out there).
These games should be public domain, and something that
can be modified easily (preferably in a block structured language like C [my
favorite] or Pascal or Modula-2). Also, I am not looking for chess games, or
any other games which simulate "intelligence" by performing lookahead...I want
games which will modify their actions according to how good (or bad) the player
is, and which are possibly able to detect patterns in the human player's (or
another computer player's [uh, oh -- Searle, Turing Test...]) "styles".
I know I am being sort of picky, yet vague as to what I want, but I
am interested in game-learning algorithms which have some foundations
in AI (one specific game-type would be those that can converse, yet can add
to their vocabulary...another would be an arcade game which gets "better"
or "worse" depending on who's playing). Please mail responses to me...I'll
post the names of any good programs I receive at a later date. Thanks in
advance.
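One generic shape for the adaptive behavior being asked about, sketched as a toy of my own (it is not one of the public-domain programs the poster is seeking): an opponent with a skill parameter that drifts toward the player's demonstrated level, rather than one that relies on lookahead.

```python
import random

# Toy adaptive opponent for a number-guessing game: its "skill" parameter
# rises when the human keeps winning and falls otherwise, so the game
# modifies its play according to how good the player is.
class AdaptiveOpponent:
    def __init__(self):
        self.skill = 0.5                  # probability of the smart move

    def move(self, low, high, secret):
        if random.random() < self.skill:
            return (low + high) // 2      # binary-search guess (strong)
        return random.randint(low, high)  # random guess (weak)

    def learn(self, human_won):
        # Simple delta-rule update toward the observed outcome.
        target = 1.0 if human_won else 0.0
        self.skill += 0.1 * (target - self.skill)
```

Detecting patterns in a player's style, as the poster also asks about, would replace the single skill scalar with statistics over the player's past moves, but the learn-from-outcome loop is the same.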
--
Chris Lishka /lishka@uwslh.uucp
Wisconsin State Lab of Hygiene <-lishka%uwslh.uucp@rsch.wisc.edu
\{seismo, harvard,topaz,...}!uwvax!uwslh!lishka
------------------------------
Date: 29 Nov 86 01:44:08 GMT
From: jbn@glacier.stanford.edu (John B. Nagle)
Subject: Proposed: a split of this group
I would like to suggest that this group be split into two groups;
one about "doing AI" and one on "philosophising about AI", the latter
to contain the various discussions about Turing tests, sentient computers,
and suchlike.
John Nagle
------------------------------
Date: 30 Nov 86 01:48:18 GMT
From: spar!freeman@decwrl.dec.com (Jay Freeman)
Subject: Re: Proposed: a split of this group
I second the motion.
------------------------------
Date: 24 Nov 86 20:34:13 GMT
From: tektronix!tekcrl!tekchips!willc@ucbvax.Berkeley.EDU (Will Clinger)
Subject: Re: AI and the Arms Race
In article <2862@burdvax.UUCP> blenko@burdvax.UUCP (Tom Blenko) writes:
>If Weizenbaum or anyone else thinks he or she can succeeded in weighing
>possible good and bad applications, I think he is mistaken. Wildly
>mistaken.
>
>Why does Weizenbaum think technologists are, even within the bounds of
>conventional wisdom, competent to make such judgements in the first
>place?
Is this supposed to mean that professors of moral philosophy are the only
people who should make moral judgments? Or is it supposed to mean that
we should trust the theologians to choose for us? Or that we should leave
all such matters to the politicians?
Representative democracy imposes upon citizens a responsibility for
judging moral choices made by the leaders they elect. It seems to me
that anyone presumed to be capable of judging others' moral choices
should be presumed capable of making their own.
It also seems to me that responsibility for judging the likely outcome
of one's actions is not a thing that humans can evade, and I applaud
Weizenbaum for pointing out that scientists and engineers bear this
responsibility as much as anyone else.
By saying this I neither applaud nor deplore the particular moral choices
that Weizenbaum advocates.
William Clinger
------------------------------
Date: 26 Nov 86 17:58:12 GMT
From: eugene@titan.arc.nasa.gov (Eugene Miya N.)
Subject: Re: AI and the Arms Race
>Will Clinger writes:
>In article <2862@burdvax.UUCP> blenko@burdvax.UUCP (Tom Blenko) writes:
>>If Weizenbaum or anyone else thinks he or she can succeeded in weighing
>>possible good and bad applications, I think he is mistaken.
>>
>>Why does Weizenbaum think technologists are, even within the bounds of
>>conventional wisdom, competent to make such judgements in the first
>>place?
>
>Is this supposed to mean that professors of moral philosophy are the only
>people who should make moral judgments? Or is it supposed to mean that
>we should trust the theologians to choose for us? Or that we should leave
>all such matters to the politicians?
>
>Representative democracy imposes upon citizens a responsibility for
>judging moral choices made by the leaders they elect. It seems to me
>that anyone presumed to be capable of judging others' moral choices
>should be presumed capable of making their own.
>
>It also seems to me that responsibility for judging the likely outcome
>of one's actions is not a thing that humans can evade, and I applaud
>Weizenbaum for pointing out that scientists and engineers bear this
>responsibility as much as anyone else.
>
>William Clinger
The problem here began in 1939. It's science's relationship to
the rest of democracy and society. Before that time science was
a minor player. This is when the physics community (on the part of
Leo Szilard and Eugene Wigner) went to Albert Einstein and said:
look at these developments in nuclear energy and look where Nazi Germany
is going. He in turn, as a public figure (like Carl Sagan, in a way),
went to Roosevelt. Science has never been the same. [Note we also
make more money for science from government than ever: note
the discussion on funding math where Halmos was quoted.]
What Tom did not point out is whether or not scientists and engineers
have "more" responsibility. Some people say that since they are in the know,
they have MORE responsibility; others say no, this is a democracy,
they have EQUAL responsibility, but judgments MUST be made by its
citizens. In the "natural world," many things are not democratic
(is gravity autocratic?)... well, these are not the right words, but
they illustrate the point that man's ideas are sometimes feeble.
While Weizenbaum may or may not weigh moral values, he is in a
unique position to understand some of the technical issues, and he
should properly steer the understanding of those weighing moral
decisions (as opposed to letting them stray): in other words, yes,
to a degree, he DOES weigh them and yes he DOES color his moral values
into the argument. [The moral equivalent to making moral judgments.]
An earlier posting pointed out the molecular biologists restricting
specific types of work at the Asilomar meeting years ago. In the
journal Science, it was noted that much of the community felt it shot
its foot off, looking back, and that current research is being held back.
I would hope that the AI community would learn from the biologists'
experience and either not restrict research (perhaps too ideal)
or not end up gagging themselves. Tricky issue, why doesn't someone
write an AI program to decide what to do? Good luck.
From the Rock of Ages Home for Retired Hackers:
--eugene miya
NASA Ames Research Center
eugene@ames-aurora.ARPA
"You trust the `reply' command with all those different mailers out there?"
{hplabs,hao,nike,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene
------------------------------
Date: 28 Nov 86 21:29:45 GMT
From: rutgers!clyde!watmath!watnot!watdragon!kyukawa@titan.arc.nasa.gov (Keitaro Yukawa)
Subject: P = NP
The following news appeared in the news group sci.math
From hxd9622@ritcv.UUCP (Herman Darmawan) Fri Nov 28 02:41:59 1986
Subject: Re: P=NP (request for reference)
Date: 28 Nov 86 07:41:59 GMT
Reply-To: hxd9622@ritcv.UUCP (Herman Darmawan)
Organization: Rochester Institute of Technology, Rochester, NY
P = NP
by
E. R. Swart
Department of Mathematics & Statistics
University of Guelph
Guelph, Ontario, Canada
Mathematical Series 1986-107
February 1986
Abstract:
A mathematical programming formulation of the Hamiltonian
circuit problem involving zero/one restrictions and triply subscripted
variables is presented and by relaxing the zero/one restrictions and
adding linear constraints together with additional variables, with
up to as many as 8 subscripts, this formulation is converted into a
linear programming formulation. In the light of the results of
Khachiyan and Karmarkar concerning the existence of polynomial time
algorithms for linear programming this establishes the fact that the
Hamiltonian circuit problem can be solved in polynomial time. Since
the Hamiltonian circuit problem belongs to the set of NP-complete
problems it follows immediately that P=NP.
~40 pages.
[I believe this has been mentioned here previously, along with
a claim that close examination shows flaws in the argument.
The theoreticians (e.g., on Theory-Net) apparently regard the
P = NP question as still open. -- KIL]
-+-+-+-
Herman Darmawan @ Rochester Institute of Technology
UUCP {allegra,seismo}!rochester!ritcv!hxd9622
BITNET HND9622@RITVAXC
... fight mail hunger ... mail me now!
------------------------------
End of AIList Digest
********************
∂30-Nov-86 1954 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #272
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 30 Nov 86 19:54:38 PST
Date: Sun 30 Nov 1986 14:19-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #272
To: AIList@SRI-STRIPE.ARPA
AIList Digest Monday, 1 Dec 1986 Volume 4 : Issue 272
Today's Topics:
Bibliography - ai.bib45AB
----------------------------------------------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: ai.bib45AB
%T Toward a Computational Interpretation
of Situation Semantics
%A Yves Lesperance
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI16 AI02
%X Situation semantics proposes novel and attractive treatments for several
problem areas of natural language semantics, such as efficiency (context
sensitivity) and propositional attitude reports. Its focus on the information
carried by utterances makes the approach very promising for accounting for
pragmatic phenomena. However, situation semantics seems to oppose several
basic assumptions underlying current approaches to natural language processing
and the design of intelligent systems in general. It claims that efficiency
undermines the standard notions of logical form, entailment, and proof theory,
and objects to the view that mental processes necessarily involve internal
representations. The paper attempts to clarify these issues and discusses
the impact of situation semantics' criticisms for natural language processing,
knowledge representation, and reasoning. We claim that the representational
approach is the only practical one for the design of large intelligent systems,
but argue that the representations used should be efficient in order to account
for the system's embedding in its environment. We conclude by stating some
constraints that a computational interpretation of situation semantics should
obey and discussing remaining problems.
%T The Role of Native Grammars in
Correcting Errors in Second Language Learning
%A Ethel Schuster
%J Computational Intelligence
%V 2
%N 2
%D May 1986
%K AI02 AA07
%X
.ds VP \s-1VP\s0\*([.2\*(.]
This paper describes \*(VP, a system that has been implemented to
tutor non-native speakers in English. This system differs from many tutoring
systems by employing an explicit grammar of its user's native language.
This grammar enables \*(VP to customize its responses by addressing
problems due to interference of the native language. The system focuses on the
acquisition of English verb-particle and verb-prepositional phrase
constructions. Its correction strategy is based upon comparison of the native
language grammar with an English grammar. \*(VP is a modular system: its
grammar of a user's native language can be easily replaced by a grammar of
another language. The problems and solutions presented in this paper are
related to the more general question of how modeling previous knowledge
facilitates instruction in a new skill.
%T \s-1COACH\s0: A Tutor Based on Active Schemas
%A Donald R. Gentner
%J Computational Intelligence
%V 2
%N 2
%D May 1986
%K AA07
%X
The \s-1COACH\s0 system, a computer simulation of a human tutor, was
constructed with the goal of obtaining a better understanding of how a tutor
interprets the student's behavior, diagnoses difficulties, and gives advice.
\s-1COACH\s0 gives advice to a student who is learning a simple computer
programming language. Its intelligence is based on a hierarchy of active
schemas that represent the tutor's general concepts, and on more specific
information represented in a semantic network. The coordination of
conceptually-guided and data-driven processing enables \s-1COACH\s0 to
interpret student behavior, recognize errors, and give advice to the student.
%T Formative Evaluation in the
Development and Validation of
Expert Systems in Education
%A Alan M. Hofmeister
%J Computational Intelligence
%V 2
%N 2
%D May 1986
%K AA07
%X
Researchers developing and validating educational products often expect
the same field-test activities to provide information on product improvement
and product effectiveness. For effective and economical use of resources,
these two goals, product improvement and product validation, must be stressed
at different times and with different tools and strategies. This article
identifies the difference in procedures and outcome between formative and
summative evaluation practices, and relates these practices to the development
and validation of expert systems in education.
%T The Design of the \s-1SCENT\s0 Automated Advisor
%A Gordon McCalla
%A Richard Bunt
%A Janelle Harms
%K AA07 T01 AT18
%X
The \s-1SCENT\s0 (Student Computing Environment) project is concerned with
building an intelligent tutoring system to help students debug their Lisp
programs. The major thrust of current \s-1SCENT\s0 investigations is into the
design of the \s-1SCENT\s0 advisor which is meant to provide debugging
assistance to novice students. Six conceptual levels constitute the advisor.
At the lowest level is the ``raw data'', consisting of the student's (possibly
buggy) program. This can be interpreted by a ``program behaviour'' level which
can produce traces, cross-reference charts, etc., from the student's program.
These traces, etc., can be analyzed by ``observers'' for interesting patterns.
At the next level are ``strategy judges'' and ``diagnosticians'' which
determine which strategy the student has used in his or her program and the bugs
in this strategy. A ``task expert'' provides task-specific input into the
process of analyzing the student's solution, and a ``student knowledge
component'' provides student-specific input into this process. Information
from the six levels interacts in a variety of ways and control is similarly
heterarchical. This necessitates a blackboard-style scheme to coordinate
information dissemination and control flow.
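The blackboard-style coordination the abstract mentions can be sketched minimally in modern Python. This is our illustration of the general style, not the SCENT implementation: knowledge sources read and post entries on a shared blackboard, and a scheduler fires whichever source's precondition holds, so control is heterarchical rather than a fixed pipeline. The level names used below are hypothetical.

```python
# Minimal blackboard sketch (illustrative only, not the SCENT advisor):
# knowledge sources are (precondition, action) pairs over a shared store.

class Blackboard:
    def __init__(self):
        self.entries = {}

    def post(self, level, item):
        self.entries.setdefault(level, []).append(item)

    def read(self, level):
        return self.entries.get(level, [])

def run(blackboard, knowledge_sources):
    """Repeatedly fire any source whose precondition holds, until quiescent."""
    fired = True
    while fired:
        fired = False
        for precondition, action in knowledge_sources:
            if precondition(blackboard):
                action(blackboard)
                fired = True
    return blackboard
```

Each source's precondition must become false once it has contributed, or the scheduler would loop forever; in a real system the scheduler would also rate competing sources rather than fire them in list order.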
%T A Programming Language
for Learning Environments
%A J.I. Glasgow
%A L.J. Hendren
%A M.A. Jenkins
%J Computational Intelligence
%V 2
%N 2
%D May 1986
%X
Most of the recent research on programming languages for education has been
centered around the language Logo. In this paper we introduce another
candidate language for learning environments, Nial, the nested interactive
array language.
.LP
Nial is a general-purpose programming language based on a formal theory of
mathematics called array theory. This paper introduces Nial as a language for
learning programming and developing and using computer-aided instruction tools.
A comparison with Logo is provided to evaluate these two languages in terms
of their strengths and weaknesses as programming environments for novice
programmers. We also demonstrate that a programming environment can be both
simple to learn at the novice level and extendible to a powerful and
sophisticated language.
%T Student Modelling by an Expert System
in an Intelligent Tutoring System
%A Odile Palies
%A Michel Caillot
%A Evelyne Cauzinille-Marmeche
%A Jean-Louis Lauriere
%A Jacques Mathieu
%J Computational Intelligence
%V 2
%N 2
%D May 1986
%K Electre AA04 AA07
%X
\s-1ELECTRE\s0 is a project to build an intelligent tutoring system for
teaching basic electricity. This paper describes a student model based on
the student's cognitive processes. This model includes, for each student,
his or her domain knowledge and the specific heuristics as well. Moreover, it
uses meta-knowledge of problem solving. This model is simulated by a
knowledge base which controls the solving processes by meta-rules. A case
study is presented.
%T Using knowledge generated in heuristic search
for non-chronological backtracking
%A Vasant Dhar
%J Computational Intelligence
%V 2
%N 3
%D August 1986
%K AI03
%X Problem solvers that use heuristics to guide choices often run into untenable
situations that can be characterized as over-constrained. When this happens,
the problem solver must be able to identify the right culprit from among its heuristic
choices in order to avoid a potentially explosive search. In this paper, we
present a solution to this for a certain class of problems where the
justifications associated with choice points involve an explicit assessment of
the pros and cons of choosing each alternative relative to its competitors. We
have designed a problem solver that accumulates such knowledge about the pros
and cons of alternative selections at choice points during heuristic search,
which it updates in light of an evolving problem situation. Whenever untenable
situations arise, this preserved knowledge is used in order to determine the
most appropriate backtracking point. By endowing the backtracker with access
to this domain-specific knowledge, a highly contextual approach to reasoning in
backtracking situations can be achieved.
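The general idea of jumping straight to the most appropriate choice point, rather than unwinding chronologically, can be illustrated with a generic dependency-directed backtracker. This is a sketch of the textbook technique, not of Dhar's problem solver; the `conflicts` callback (which names the variables involved in a violated constraint) and the fixed variable ordering are our assumptions.

```python
# Illustrative backjumping search (not the paper's system): each failure
# reports its culprit set, and a level whose variable is not among the
# culprits is skipped entirely instead of retried value by value.

def backjump_search(variables, domains, conflicts):
    """`conflicts(assignment)` returns the set of variables in a violated
    constraint, or None if the partial assignment is consistent."""
    assignment = {}
    order = list(variables)

    def solve(i):
        if i == len(order):
            return dict(assignment), None
        var = order[i]
        culprits = set()
        for value in domains[var]:
            assignment[var] = value
            bad = conflicts(assignment)
            if bad is None:
                result, jump = solve(i + 1)
                if result is not None:
                    return result, None
                if var not in jump:
                    del assignment[var]
                    return None, jump        # this level cannot help; skip it
                culprits |= jump - {var}
            else:
                culprits |= bad - {var}
        del assignment[var]
        return None, culprits                # jump to the deepest culprit
    return solve(0)[0]
```

Dhar's contribution is richer than this: the justifications record pros and cons of each alternative and are updated as the situation evolves, so the choice of backtracking point is domain-informed rather than purely structural.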
%T Recognition algorithms
for the Connection Machine\*(TM
%A Anita M. Flynn
%A John G. Harris
%J Computational Intelligence
%V 2
%N 3
%D August 1986
%K H03 AI03 AI06
%X This paper describes an object recognition algorithm both on a sequential
machine and on a SIMD parallel processor such as the MIT Connection Machine.
The problem, in the way it is presently formulated on a sequential machine, is
essentially a propagation of constraints through a tree of possibilities in an
attempt to prune the tree to a small number of leaves. The tree can become
excessively large, however, and so implementations on massively parallel
machines are sought in order to speed up the problem. Two fast parallel
algorithms are described here.
.br
A static algorithm reformulates the problem by assigning every leaf in the
completely expanded unpruned tree to a separate processor in the Connection
Machine. Then pruning is done in nearly constant time by broadcasting
constraints to the entire SIMD array. This parallel version is shown to run
three to four orders of magnitude faster than the sequential version. For
large recognition problems which would exceed the capacity of the machine, a
dynamic algorithm is described which performs a series of loading and pruning
steps, dynamically allocating and deallocating processors through the use of
the Connection Machine's global router communications mechanism.
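The static algorithm's shape can be made concrete with a toy data-parallel analogue (our illustration, not the authors' code): every leaf of the fully expanded interpretation tree is assigned to one "processor" (here, one array slot with an alive flag), and each pairwise constraint is broadcast to all slots at once, so pruning takes a constant number of steps per constraint rather than a tree walk. The `consistent` predicate and feature/model names are hypothetical.

```python
# Toy SIMD-style pruning sketch: one leaf per processor, constraints
# broadcast to the whole array. The inner loop over leaves stands in for
# what the Connection Machine does simultaneously on all processors.
from itertools import product

def prune_leaves(features, models, consistent):
    leaves = [dict(zip(features, assign))
              for assign in product(models, repeat=len(features))]
    alive = [True] * len(leaves)        # the per-processor flag bit
    for j, f1 in enumerate(features):
        for f2 in features[j + 1:]:
            # "Broadcast" the pairwise constraint to every live leaf.
            for i, leaf in enumerate(leaves):
                if alive[i] and not consistent(f1, leaf[f1], f2, leaf[f2]):
                    alive[i] = False
    return [leaf for i, leaf in enumerate(leaves) if alive[i]]
```

The obvious cost is that the unpruned tree must fit in the machine, which is exactly why the paper's dynamic algorithm loads and prunes in stages.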
%T Parsing with restricted quantification:
An initial demonstration
%A Alan M. Frisch
%J Computational Intelligence
%V 2
%N 3
%D August 1986
%K AI03 AI02
%X The primary goal of this paper is to illustrate how smaller deductive search
spaces can be obtained by extending a logical language with restricted
quantification and tailoring an inference system to this extension. The
illustration examines the search spaces for a bottom-up parse of a sentence
with a series of four strongly equivalent grammars. The grammars are stated in
logical languages of increasing expressiveness, each restatement resulting in a
more concise grammar and a smaller search space.
.sp
A secondary goal is to point out an area where further research could
yield results useful to the design of efficient parsers, particularly for
grammatical formalisms that rely heavily on feature systems.
%T An explanation shell for expert systems
%A Leon Sterling
%A Marucha Lalee
%J Computational Intelligence
%V 2
%N 3
%D August 1986
%K T02 T03 AI01
%X We describe a shell for expert systems written in Prolog. The shell provides
a consultation environment and a range of explanation capabilities. The design
of the shell is modular, making it very easy to extend the shell with extra
features required by a particular expert system. The novelty of the shell is
twofold. Firstly it has a uniform design consisting of an integrated
collection of meta-interpreters. Secondly, there is a new approach for
explaining `why not', when a query to the system fails.
%A Lisa L. Spiegelman
%T Object-Oriented Programming Language to Employ Windows Operating Environment
%J Infoworld
%D OCT 6, 1986
%V 8
%N 40
%P 10
%K H01 AT02
%X Actor is available from The Whitewater Group for $495.00.
It is an object-oriented programming language that uses exchange of
information between windows in Microsoft's Windows environment. It is
allegedly aimed at AI developers.
%A Priscilla M. Chabal
%T Extension Enables Image-Pro to Work In Protected Mode on 286 Machines
%J Infoworld
%D OCT 6, 1986
%V 8
%N 40
%P 31
%K AI06 H01 AT02
%X Image-Pro, an image processing software package for the IBM-PC, can
now make use of more than 640K on 286-based computers.
%A David Bright
%T John Blankenbaker: Inventor of Kenbak-I
%J ComputerWorld
%V 20
%N 44
%D NOV 3, 1986
%P 173
%K H02
%X Biography of the designer of the production version of the Symbolics LISP machine.
He also developed a $750.00 personal computer in 1971 but only sold
48 of them. It used several chips for the CPU. He also worked at Quotron.
%A Alan Alper
%T Researchers Focus on Promise of Eye-Gaze Technology
%J ComputerWorld
%V 20
%N 44
%D NOV 3, 1986
%K O01
%X discussion of the use of technology to read where the eye is looking to
control computers. Sentinent is selling a $3000.00 eye project c
IBM has a patent for an eye-tracking mechanism that was accurate
enough to control a computer.
Analytics is developing eye-gaze technology
to use in concert with voice recognition. They are accurate to less than
1 degree of arc.
They are also looking into the possibility of reading
a magnetoencephalograph for the same control purpose.
%A A. Terry Bahill
%A William R. Ferrell
%T Teaching an Introductory Course in Expert Systems
%J IEEE Expert
%D Winter 1986
%V 1
%N 4
%P 59-63
%K AA01 AT18
%X Lists various projects that students completed. One of the
most interesting was an audiology expert system that was rated by a
real speech expert. The speech expert stated that the system had
correct rules but that she could see which books they came from.
It did give her insight into teaching methods and into what she was
actually doing.
It turned out that the students had interviewed a resident rather than
a true \fIexpert\fR. The course used the MIT videotapes, the
M.1 shell instructor's package, and lecture notes. The course
was rated very good on student evaluations.
%T Medical Applications
%J IEEE Expert
%D Winter 1986
%V 1
%N 4
%P 10-14
%K mycin puff emycin oncocin Caduceus AI01 AA01 H01 Referee AI01 AA02 AA10 AI04
Rulemaster
%X Caduceus (formerly Internist) now proves more accurate than the average
physician, comparable to teams of physicians, and almost as good as expert
physicians asked to review the case in retrospect. Oncocin is now
comparable to physicians treating patients at Stanford.
Referee is being developed to help physicians judge medical studies.
Also being developed is an expert system for nuclear magnetic resonance
in determining protein molecule structures. The system is unusual in
that experts are not very good at this either.
.sp 1
Cedars-Sinai medical researchers are developing a system for
assisting cardiologists. The system is being developed from examples
using Radian.
%A Joseph Urban
%T Building Intelligence into Software Tools
%J IEEE Expert
%D Winter 1986
%V 1
%N 4
%P 21
%K AA08 AI01
%X intro to special issue on software engineering applications of expert
systems
%A I. Zulkerman
%A W. Tsai
%A D. Volovik
%T Expert Systems and Software Engineering: Ready for Marriage?
%J IEEE Expert
%D Winter 1986
%V 1
%N 4
%P 24-31
%K AA08 AI01
%X this article consists of a summary of expert system and
software engineering technologies. There is little
material in this article on how to apply AI tools to software
engineering.
%A Mitchell D. Lubars
%A Mehdi T. Harandi
%T Intelligent Support for Software Specification and Design
%J IEEE Expert
%D Winter 1986
%V 1
%N 4
%P 33-41
%K AA08
%X describes a system to help develop dataflow diagrams (these are used
by many designers for specifying software systems).
It finds subparts to put into the system to perform various tasks
similar to the way KBEmacs finds pieces of code to instantiate
for various needs the user has. The system is integrated into
the Polylith system for configuring complicated software systems
and dataflow diagram analysis tools. The system has been working
on smaller examples.
%A Martin Herbert
%A Curt Hartog
%T MIS Rates the Issues
%J Datamation
%D NOV 15, 1986
%V 32
%P 192
%K AA06
%X reports results of surveying the MIS managers of Fortune 1000 companies.
They were asked to rank various issues ranging from 1 (unimportant) to
4 (most important). "Expert systems and Artificial Intelligence" was
rated the lowest at 2.21. To put this in perspective, here are some other
ratings:
.br
CIM, 2.25; Strategic Systems, 2.42; Aligning MIS with Business Goals, 3.54;
Telecommunications, 3.17.
%T Parallel Processing Startup will Take on the Big Players
%J Electronic News
%D NOV 13, 1986
%V 59
%N 35
%P 21
%K AT02 H03 Dado Columbia tree
%X Dado, a tree-structured architecture implemented at Columbia University,
is being developed by a startup company. It consists of 16 to 64 68020
processors and will cost $90,000. It will work as an accelerator
in conjunction with a Sun workstation.
%T Low-cost Camera Converts Photos to PC Images
%J Electronic News
%D NOV 13, 1986
%V 59
%N 35
%P 25
%K AI06 AT02
%X a $1200 camera that converts color images to digital data in a PC
is available from Ulie Research Labs.
%A Charles Cohen
%T Sensor Lets Robots Do Top-Quality Arc Welding
%J Electronic News
%D NOV 13, 1986
%V 59
%N 35
%P 43-45
%K AA26 AI06 AI07
%X describes new vision sensor for robots doing arc welding.
%T Integrated Artificial Intelligence System Tackles Newspaper Pagination
Challenge
%J Insight
%V 6
%N 9
%D NOV 1986
%P 3-5
%K AI01 H03 Composition Systems AI02
%X Description of Composition Systems' commercial AI-driven newspaper
layout system. The system consists of three cooperating modules
which can be bought separately or combined with the use of META to
resolve conflicts. The system allows natural language questions to
find out policies for newspaper layout that were previously entered
or to find the status of particular pages. The system allows automatic
entry of ads by customers with personal computers. Some of the policies
implemented attempt to ensure that coupons are not backed by
important material.
%A Jesse Victor
%T ANSI Display Management Aids Real-Time Imaging
%J Mini-Micro Systems
%D NOV 1986
%P 43-47
%V 19
%N 13
%K AT02 AI06 H01 datacube University of Lowell
Georges Grinstein
%X Datacube sells an ANSI-standard display system.
It supports 512 by 512 pixel systems. The system costs
$9000.00. It is anticipated that the software
will include a natural language interface and expert system
tools.
%A H. A. Simon
%T Whether Software Engineering Needs to be Artificially Intelligent
%J IEEE Transactions on Software Engineering
%D JUL 1986
%V SE-12
%N 7
%P 726-732
%K AA08 AT14
%A Allen T. Goldberg
%T Knowledge-Based Programming: A Survey of Program Design and Construction
Techniques
%J IEEE Transactions on Software Engineering
%D JUL 1986
%V SE-12
%N 7
%P 752-768
%K AT08 AA08
%A Priscilla M. Chabal
%T Firm Announces Expert System Environment for IBM RT Workstation
%J InfoWorld
%D SEP 29, 1986
%V 8
%N 39
%P 24
%K T01 AT02 H01
%X The GURU expert system is available for the IBM RT for
$17000.
%A Hank Bannister
%T Borland Introduces Turbo Prolog, Version 1.1
%J InfoWorld
%D SEP 29, 1986
%V 8
%N 39
%P 3
%K AT02 H01 T03 T02 AI02
%X Borland introduced Version 1.1 which adds speed improvements
to the compiler, interface to other languages, a natural language
parser and a sample expert system shell.
%A Steven Burke
%T Lotus to Release Its Long-Awaited Human Language Add-on October 6
%J InfoWorld
%D SEP 29, 1986
%V 8
%N 39
%P 12
%K AT02 H01 AI02 AA15
%X Hal, a natural language interface to Lotus 1-2-3,
will be released by Lotus on October 6.
%A Ninamary Buba Maginnis
%T Publishers Await System
%J ComputerWorld
%D NOV 10, 1986
%V 20
%N 45
%P 19+
%K AT02 AI01 Composition Systems newspaper AA20
%X Describes an expert publishing system from Composition
Systems which sells for $600,000 to $2,000,000 depending
upon the size of the newspaper.
.sp 1
Also describes Michael Stock, the leader of the
effort and a self-described workaholic. He plans to work in
process control expert systems, including systems that
work with several loops at a time.
------------------------------
End of AIList Digest
********************
∂01-Dec-86 2313 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #273
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 1 Dec 86 23:13:13 PST
Date: Mon 1 Dec 1986 20:43-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #273
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 273
Today's Topics:
Bibliography - ai.bib42AB
----------------------------------------------------------------------
Date: WED, 10 oct 86 17:02:23 CDT
From: leff%smu@csnet-relay
Subject: ai.bib42AB
%A Ralph Grishman
%A Richard Kittredge
%T Analyzing Language in Restricted Domains
%I Lawrence Erlbaum Associates Inc.
%C Hillsdale, NJ
%K AI02 AA01
%D 1986
%X 0-89859-620-3 1986 264 pages $29.95
.TS
tab(~);
l l.
N. Sager~T{
Sublanguage: Linguistic Phenomenon, Computational Tool
T}
J. Lehrberger~Sublanguage Analysis
E. Fitzpatrick~T{
The Status of Telegraphic Sublanguages
T}
J. Bachenko
D. Hindle
J. R. Hobbs~Sublanguage and Knowledge
D. E. Walker~T{
The Use of Machine Readable Dictionaries in Sublanguage Analysis
T}
R. A. Amsler
C. Friedman~T{
Automatic Structuring of Sublanguage Information: Application to
Medical Narrative
T}
E. Marsh~T{
General Semantic Patterns in Different Sublanguages
T}
C. A. Montgomery~T{
A Sublanguage for Reporting and Analysis of Space Events
T}
B. C. Glover
T. W. Finin~T{
Constraining the Interpretation of Nominal Compounds in a Limited Context
T}
G. Dunham~T{
The Role of Syntax in the Sublanguage of Medical Diagnostic Statements
T}
J. Slocum~T{
How One Might Automatically Identify and Adapt to a Sublanguage
T}
L. Hirschman~T{
Discovering Sublanguage Structures
T}
.TE
%A Janet L. Kolodner
%A Christopher K. Riesbeck
%T Experience, Memory and Reasoning
%I Lawrence Erlbaum Associates Inc.
%C Hillsdale, NJ
%D 1986
%K AT15
%X 0-89859-664-0 1986 272 pages $29.95
.TS
tab(~);
l l.
R. Wilensky~T{
Knowledge Representation - A critique and a Proposal
T}
T{
R. H. Granger
.br
D. M. McNulty
T}~T{
Learning and Memory in Machines and Animals that
Accounts for Some Neurobiological Data
T}
T{
V. Sembugamoorthy
.br
B. Chandrasekaran
T}~T{
Functional Representation of Devices and Compilation of
Diagnostic Problem Solving Systems
T}
.TE
%T Recovering from Execution Errors in \s-1SIPE\s0
%A David E. Wilkins
%J Computational Intelligence
%V 1
%D 1985
%K AI07 AI09
%X
In real-world domains (a mobile robot is used as a motivating example), things
do not always proceed as planned. Therefore it is important to develop better
execution-monitoring techniques and replanning capabilities. This paper
describes the execution-monitoring and replanning capabilities of the
\s-1SIPE\s0 planning system. (\s-1SIPE\s0 assumes that new information to the
execution monitor is in the form of predicates, thus avoiding the difficult
problem of how to generate these predicates from information provided by
sensors.) The execution-monitoring module takes advantage of the rich
structure of \s-1SIPE\s0 plans (including a description of the plan rationale),
and is intimately connected with the planner, which can be called as a
subroutine. The major advantages of embedding the replanner within the
planning system itself are:
.IP 1.
The replanning module can take advantage of the efficient frame reasoning
mechanisms in \s-1SIPE\s0 to quickly discover problems and potential fixes.
.IP 2.
The deductive capabilities of \s-1SIPE\s0 are used to provide a reasonable
solution to the truth maintenance problem.
.IP 3.
The planner can be called as a subroutine to solve problems after the
replanning module has inserted new goals in the plan.
.LP
Another important contribution is the development of a general set of
replanning actions that will form the basis for a language capable of
specifying error-recovery operators, and a general replanning capability that
has been implemented using these actions.
%T Plan Parsing for Intended Response Recognition
in Discourse
%A Candace L. Sidner
%J Computational Intelligence
%V 1
%D 1985
%K Discourse task-oriented dialogues intended meaning AI02
speaker's plans discourse understanding plan parsing discourse markers
%X
In a discourse, the hearer must recognize the response intended by the speaker.
To perform this recognition, the hearer must ascertain what plans the speaker
is undertaking and how the utterances in the discourse further that plan. To do
so, the hearer can parse the initial intentions (recoverable from the
utterance) and recognize the plans the speaker has in mind and intends the
hearer to know about. This paper reports on a theory of parsing the intentions
in discourse. It also discusses the role of another aspect of discourse,
discourse markers, that are valuable to intended response recognition.
%T Knowledge Organization and its Role
in Representation and Interpretation for
Time-Varying Data: The \s-1ALVEN\s0 System
%A John K. Tsotsos
%J Computational Intelligence
%V 1
%D 1985
%K Knowledge Representation, Expert Systems, Medical Consultation
Systems, Time-Varying Interpretation, Knowledge-Based Vision. AI01 AI06 AA01
%X
The so-called ``first generation'' expert systems were rule-based and offered a
successful framework for building applications systems for certain kinds of
tasks. Spatial, temporal and causal reasoning, knowledge abstractions, and
structuring are among topics of research for ``second generation'' expert
systems.
.sp
It is proposed that one of the keys for such research is \fIknowledge
organization\fP. Knowledge organization determines control structure design,
explanation and evaluation capabilities for the resultant knowledge base, and
has strong influence on system performance. We are exploring a framework for
expert system design that focuses on knowledge organization for a specific
class of input data, namely, continuous, time-varying data (image sequences or
other signal forms). Such data is rich in temporal relationships as well as
temporal changes of spatial relations and is thus a very appropriate testbed
for studies involving spatio-temporal reasoning. In particular, the
representation facilitates and enforces the semantics of the organization of
knowledge classes along the relationships of generalization / specialization,
decomposition / aggregation, temporal precedence, instantiation, and
expectation-activated similarity.
.sp
A hypothesize-and-test control structure is
driven by the class organizational principles, and includes several interacting
dimensions of research (data-driven, model-driven, goal-driven temporal, and
failure-driven search). The hypothesis ranking scheme is based on temporal
cooperative computation with hypothesis ``fields of influence'' being defined
by the hypotheses' organizational relationships. This control structure has
proven to be robust enough to handle a variety of interpretation tasks for
continuous temporal data.
.sp
A particular incarnation, the \s-1ALVEN\s0 system, for left ventricular
performance assessment from X-ray image sequences, will be highlighted in this
paper.
%T On the Adequacy of Predicate Circumscription
for Closed-World Reasoning
%A David W. Etherington
%A Robert E. Mercer
%A Raymond Reiter
%J Computational Intelligence
%V 1
%D 1985
%K AI15 AI16
%X We focus on McCarthy's method of predicate circumscription in order to
establish various results about its consistency, and about its ability to
conjecture new information. A basic result is that predicate circumscription
cannot account for the standard kinds of default reasoning. Another is that
predicate circumscription yields no new information about the equality
predicate. This has important consequences for the unique names and domain
closure assumptions.
%T What is a Heuristic?
%A Je\(ffry Francis Pelletier and Marc H.J. Romanycia
%J Computational Intelligence
%V 1
%N 2
%D MAY 1985
%K AI16
%X From the mid-1950's to the present, the notion of a heuristic has
played a crucial role in AI researchers' descriptions of their
work. What has not been generally noticed is that different
researchers have often applied the term to rather different aspects
of their programs. Things that would be called a heuristic by
one researcher would not be so called by others. This is because
many heuristics embody a variety of different features, and the
various researchers have emphasized different ones of these
features as being essential to being a heuristic. This paper
steps back from any particular research programme and investigates
the question of what things, historically, have been thought to be
central to the notion of a heuristic, and which ones conflict with
others. After analyzing the previous definitions and examining
current usage of the term, a synthesizing definition is provided.
The hope is that with this broader account of `heuristic' in hand,
researchers can benefit more fully from the insights of others,
even if those insights are couched in a somewhat alien vocabulary.
%T Analysis by Synthesis in Computational Vision
with Application to Remote Sensing
%A Robert Woodham
%A E. Catanzariti
%A Alan Mackworth
%J Computational Intelligence
%V 1
%N 2
%D MAY 1985
%K AI06
%X
The problem in vision is to determine surface properties from
image properties. This is difficult because the problem, formally
posed, is underconstrained. Methods that infer scene properties
from image properties make assumptions about how the world
determines what we see. In this paper, some of these assumptions
are dealt with explicitly, using examples from remote sensing.
Ancillary knowledge of the scene domain, in the form of a digital
terrain model and a ground cover map, is used to synthesize an
image for a given date and time. The synthesis process assumes
that surface material is lambertian and is based on simple models of
direct sun illumination, diffuse sky illumination and atmospheric path
radiance. Parameters of the model are estimated from the real image.
A statistical comparison of the real image and the synthetic image
is used to judge how well the model represents the mapping from
scene domain to image domain.
.sp 1
The methods presented for image synthesis are similar to those
used in computer graphics. The motivation, however, is different.
In graphics, the goal is to produce an effective rendering of the
scene domain. Here, the goal is to predict properties of real
images. In vision, one must deal with a confounding of effects
due to surface shape, surface material, illumination, shadows
and atmosphere. These effects often detract from, rather than
enhance, the determination of invariant scene characteristics.
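The synthesize-and-compare step described above reduces, in its simplest form, to predicting brightness from the terrain model under the Lambertian assumption and scoring the prediction against the real image. The sketch below is our toy version: `normal` and `sun_direction` are assumed to be unit vectors, and the paper's statistical comparison is stood in for by a mean-squared error.

```python
# Toy analysis-by-synthesis sketch (ours, not the authors' model, which also
# includes diffuse sky illumination and atmospheric path radiance).

def lambertian_brightness(normal, sun_direction, albedo=1.0):
    """Lambertian surface: brightness proportional to the cosine of the
    incidence angle, clamped at zero for self-shadowed facets."""
    dot = sum(n * s for n, s in zip(normal, sun_direction))
    return albedo * max(0.0, dot)

def model_fit(predicted, observed):
    """Mean squared difference between synthetic and real image values."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
```

In the paper the free parameters (such as albedo per ground-cover class) are estimated from the real image before the comparison is made.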
%T A Functional Approach to Non-Monotonic Logic
%A Erik Sandewall
%J Computational Intelligence
%V 1
%N 2
%D MAY 1985
%K AI15 AI16
%X
Axiom sets and their extensions are viewed as functions from the
set of formulas in the language to a set of four truth-values: \fIt\fP,
\fIf\fP, \fIu\fP for undefined, and \fIk\fP for contradiction. Such functions
form a lattice with `contains less information' as the partial
order \(ib, and `combination of several sources of knowledge' as the
least-upper-bound operation \(IP.
We demonstrate the relevance of this
approach by giving concise proofs for some previously known results
about normal default rules. For non-monotonic rules in general
(not only normal default rules), we define a stronger version of the
minimality requirement on consistent fixpoints, and prove that
it is sufficient for the existence of a derivation of the fixpoint.
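One concrete reading of the four-valued information lattice (our illustration, using strings for the truth values): \fIu\fP sits at the bottom, \fIt\fP and \fIf\fP are incomparable, and \fIk\fP is at the top, so combining sources that disagree yields contradiction. The `lub` and `combine` functions below correspond to the paper's partial order and least-upper-bound operation.

```python
# Information-lattice sketch: u < t, u < f, t < k, f < k.

def lub(a, b):
    """Least upper bound in the information order."""
    if a == b:
        return a
    if a == 'u':
        return b
    if b == 'u':
        return a
    return 'k'   # t joined with f, or anything joined with k

def combine(*valuations):
    """Pointwise join of several sources of knowledge; unmentioned
    formulas default to u (undefined)."""
    result = {}
    for v in valuations:
        for formula, value in v.items():
            result[formula] = lub(result.get(formula, 'u'), value)
    return result
```

The point of the lattice view is exactly this compositionality: combining knowledge sources is an associative, order-independent join.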
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%T Generating paraphrases from
meaning-text semantic networks
%A Michel Boyer
%A Guy Lapalme
%K T02
%X
This paper describes a first attempt to base a paraphrase generation system
upon Mel'cuk and Zolkovskij's linguistic Meaning-Text (\s-1MT\s0) model whose
purpose is to establish correspondences between meanings, represented by
networks, and (ideally) all synonymous texts having this meaning. The system
described in the paper contains a Prolog implementation of a small explanatory
and combinatorial dictionary (the \s-1MT\s0 lexicon) and, using unification
and backtracking, generates from a given network the sentences allowed by the
dictionary and the lexical transformations of the model. The passage from the
net to the final texts is done through a series of transformations of
intermediary structures that closely correspond to \s-1MT\s0 utterance
representations (semantic, deep-syntax, surface-syntax and morphological
representations). These are graphs and trees with labeled arcs. The Prolog
unification (equality predicate) was extended to extract information from these
representations and build new ones. The notion of utterance path, used by many
authors, is replaced by that of ``covering by defining subnetworks''.
%T Spatiotemporal inseparability in early vision:
Centre-surround models and velocity selectivity
%A David J. Fleet
%A Allan D. Jepson
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%K AI08 AI06
%X
Several computational theories of early visual processing, such as Marr's
zero-crossing theory, are biologically motivated and based largely on the
well-known difference of Gaussians (\s-1DOG\s0) receptive field model of early
retinal processing. We examine the physiological relevance of the \s-1DOG\s0,
particularly in the light of evidence indicating significant spatiotemporal
inseparability in the behaviour of retinal cell types.
.LP
From the form of the inseparability we find that commonly accepted functional
interpretations of retinal processing based on the \s-1DOG\s0, such as the
Laplacian of a Gaussian and zero-crossings, are not valid for time-varying
images. In contrast to current machine-vision approaches, which attempt to
separate form and motion information at an early stage, it appears that this is
not the case in biological systems. It is further shown that the qualitative
form of this inseparability provides a convenient precursor to the extraction
of both form and motion information. We show the construction of efficient
mechanisms for the extraction of orientation and 2-D normal velocity through
the use of a hierarchical computational framework. The resultant mechanisms
are well localized in space-time, and can be easily tuned to various degrees of
orientation and speed specificity.
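For readers unfamiliar with the difference-of-Gaussians model the abstract builds on, here is a minimal one-dimensional sketch. The parameter values are illustrative, not from the paper: a narrow excitatory centre Gaussian minus a broader inhibitory surround Gaussian.

```python
import math

def gaussian(x, sigma):
    """Normalized 1-D Gaussian."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def dog(x, sigma_center=1.0, sigma_surround=2.0):
    """Difference of Gaussians: narrow excitatory centre minus a
    broader inhibitory surround -- the classical centre-surround
    receptive-field profile (widths here are arbitrary examples)."""
    return gaussian(x, sigma_center) - gaussian(x, sigma_surround)
```

This purely spatial profile (positive at the centre, negative in the surround) is exactly the separable form whose adequacy for time-varying images the paper questions.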
%T A theory of schema labelling
%A William Havens
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%K AI16 AI06 AA04
%X
Schema labelling is a representation theory that focuses on composition and
specialization as two major aspects of machine perception. Previous research
in computer vision and knowledge representation has identified computational
mechanisms for these tasks. We show that the representational adequacy of
schema knowledge structures can be combined advantageously with the constraint
propagation capabilities of network consistency techniques. In particular,
composition and specialization can be realized as mutually interdependent
cooperative processes which operate on the same underlying knowledge
representation. In this theory, a schema is a generative representation for
a class of semantically related objects. Composition builds a structural
description of the scene from rules defined in each schema. The scene
description is represented as a network consistency graph which makes
explicit the objects found in the scene and their semantic relationships.
The graph is hierarchical and describes the input scene at varying levels
of detail. Specialization applies network consistency techniques to refine
the graph towards a global scene description. Schema labelling is being used
for interpreting hand-printed Chinese characters, and for recognizing
\s-1VLSI\s0 circuit designs from their mask layouts.
%T Hierarchical arc consistency:
Exploring structured domains
in constraint satisfaction problems
%A Alan K. Mackworth
%A Jan A. Mulder
%A William S. Havens
%J Computational Intelligence
%V 1
%N 3-4
%D August 1985
%K AI03 AI16 AI06
%X
Constraint satisfaction problems can be solved by network consistency
algorithms that eliminate local inconsistencies before constructing global
solutions. We describe a new algorithm that is useful when the variable
domains can be structured hierarchically into recursive subsets with common
properties and common relationships to subsets of the domain values for related
variables. The algorithm, \s-1HAC\s0, uses a technique known as hierarchical
arc consistency. Its performance is analyzed theoretically and the conditions
under which it is an improvement are outlined. The use of \s-1HAC\s0 in a
program for understanding sketch maps, Mapsee3, is briefly discussed and
experimental results consistent with the theory are reported.
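The flat arc-consistency step that HAC refines can be sketched as the standard AC-3 loop. This is a generic reconstruction, not the paper's algorithm: HAC additionally exploits the hierarchical subset structure of the domains, which the flat version below ignores.

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Drop values of x that have no supporting value in y's domain."""
    allowed = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any(allowed(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; returns False if a domain is wiped out."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False  # local inconsistency: no global solution
            # re-examine arcs pointing at the revised variable
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True
```

For example, with X, Y over {1, 2, 3} and the constraint X < Y, arc consistency prunes 3 from X's domain and 1 from Y's before any global search begins.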
%T Expression of Syntactic and Semantic Features
in Logic-Based Grammars
%A Patrick Saint-Dizier
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI02
%X In this paper we introduce and motivate a formalism to represent syntactic
and semantic features in logic-based grammars. We also introduce technical
devices to express relations between features and inheritance mechanisms.
This leads us to propose some extensions to the basic unification mechanism
of Prolog. Finally, we consider the problem of long-distance dependency
relations between constituents in Gapping Grammar rules from the point of
view of morphosyntactic features that may change depending on the position
occupied by the ``moved'' constituents. What we propose is not a new
linguistic theory about features, but rather a formalism and a set of tools
that we think to be useful to grammar writers to describe features and their
relations in grammar rules.
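As a rough illustration of the kind of feature mechanism under discussion, here is a minimal unification of flat feature bundles. This is a deliberately simplified sketch with invented feature names; the paper's formalism, with feature relations and inheritance, is considerably richer.

```python
def unify_features(f, g):
    """Unify two flat feature bundles; return the merged bundle,
    or None when an atomic feature value conflicts."""
    merged = dict(f)
    for feat, val in g.items():
        if feat in merged and merged[feat] != val:
            return None  # e.g. number=sg clashes with number=pl
        merged[feat] = val
    return merged
```

Unification either merges compatible information from the two bundles or fails outright, which is the behaviour grammar rules rely on to enforce agreement.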
%T Natural Language Understanding and
Theories of Natural Language Semantics
%A Per-Kristian Halvorsen
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI02
%X
In these short remarks, I examine the connection between Montague grammar, one
of the most influential theories of natural language semantics during the past
decade, and natural language understanding, one of the most recalcitrant
problems in \(*AI and computational linguistics for more than the last decade.
When we view Montague grammar in light of the requirements of a theory of
natural language understanding, new traits become prominent, and highly touted
advantages of the approach become less significant. What emerges is a new
set of criteria to apply to theories of natural language understanding. Once
one has this measuring stick in hand, it is impossible to withstand the
temptation of also applying it to the emerging contender to Montague grammar
as a semantic theory, namely situation semantics.
%T Unrestricted Gapping Grammars
%A Fred Popowich
%J Computational Intelligence
%V 2
%N 1
%D February 1986
%K AI02
%X
Since Colmerauer's introduction of metamorphosis grammars (MGs), with
their associated type \fI0\fP-like grammar rules, there has been a desire
to allow more general rule formats in logic grammars. Gap symbols were added
to the MG rule by Pereira, resulting in extraposition grammars (XGs).
Gaps, which are referenced by gap symbols, are sequences of zero or more
unspecified symbols which may be present anywhere in a sentence or in a
sentential form. However, XGs imposed restrictions on the position of gap
symbols and on the contents of gaps. With the introduction of gapping
grammars (GGs) by Dahl, these restrictions were removed, but the rule was
still required to possess a nonterminal symbol as the first symbol on the
left-hand side. This restriction is removed with the introduction of
unrestricted gapping grammars. FIGG, a Flexible Implementation of Gapping
Grammars, possesses a bottom-up parser which can process a large subset of
unrestricted GGs for describing phenomena of natural languages such as free
word order, and partially free word or constituent order. It can also be used
as a programming language to implement natural language systems which are
based on grammars (or metagrammars) that use the gap concept, such
as Gazdar's generalized phrase structure grammars.
------------------------------
End of AIList Digest
********************
∂02-Dec-86 0114 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #274
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 2 Dec 86 01:14:06 PST
Date: Mon 1 Dec 1986 20:47-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #274
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 274
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories &
Turing Tests and Chinese Rooms
----------------------------------------------------------------------
Date: 26 Nov 86 12:41:50 GMT
From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis)
Subject: Re: Searle, Turing, Symbols, Categories
> Steve Harnad >> Keith Dancey
>> [The turing test] should be timed as well as checked for accuracy...
>> Turing would want a degree of humor...
>> check for `personal values,' `compassion,'...
>> should have a degree of dynamic problem solving...
>> a whole body of psychometric literature which Turing did not consult.
>
>I think that these details are premature and arbitrary. We all know
>(well enough) what people can DO: They can discriminate, categorize,
>manipulate, identify and describe objects and events in the world, and
>they can respond appropriately to such descriptions.
Just who is being arbitrary here? Qualities like humor, compassion,
artistic creativity and the like are precisely those which many of us
consider to be those most characteristic of mind! As to the
"prematurity" of all this, you seem to have suddenly and most
conveniently forgotten that you were speaking of a "total turing
test" -- I presume an ultimate test that would encompass all that we
mean when we speak of something as having a "mind", a test that is
actually a generations-long research program.
As to whether or not "we all know what people do", I'm sure our
cognitive science people are just *aching* to have you come and tell
them that us humans "discriminate, categorize, manipulate, identify, and
describe". Just attach those pretty labels and the enormous preverbal
substratum of our consciousness just vanishes! Right? Oh yeah, I suppose
you provide rigorous definitions for these terms -- in your as
yet unpublished paper...
>Now let's get devices to (1) do it all (formal component) and then
>let's see whether (2) there's anything that we can detect informally
>that distinguishes these devices from other people we judge to have
>minds BY EXACTLY THE SAME CRITERIA (namely, total performance
>capacity). If not, they are turing-indistinguishable and we have no
>non-arbitrary basis for singling them out as not having minds.
You have an awfully peculiar notion of what "total" and "arbitrary"
mean, Steve: it's not "arbitrary" to exclude those traits that most
of us regard highly in other beings whom we presume to have minds.
Nor is it "arbitrary" to exclude the future findings of brain
research concerning the nature of our so-called "minds". Yet you
presume to be describing a "total turing test".
May I suggest that what you are describing is not a "test for mind", but
rather a "test for simulated intelligence", and the reason you will
not or cannot distinguish between the two is that you would elevate
today's primitive state of technology to a fixed methodological
standard for future generations. If we cannot cope with the problem,
why, we'll just define it away! Right? Is this not, to paraphrase
Paul Feyerabend, incompetence upheld as a standard of excellence?
-michael
Blessed be you, mighty matter, irresistible march of evolution,
reality ever new born; you who by constantly shattering our mental
categories force us to go further and further in our pursuit of the
truth.
-Pierre Teilhard de Chardin "Hymn of the Universe"
------------------------------
Date: 27 Nov 86 12:02:50 GMT
From: cartan!rathmann@ucbvax.Berkeley.EDU (the late Michael Ellis)
Subject: Re: Turing Tests and Chinese Rooms
> Ray Trent
> 1) I've always been somewhat suspicious about the Turing Test. (1/2 :-)
>
> a) does anyone out there have any good references regarding
> its shortcomings. :-|
John Searle's notorious "Chinese Room" argument has probably
drawn out more discussion on this topic in recent times than
anything else I can think of. As far as I can tell, there seems
to be no consensus of opinion on this issue, only a broad spectrum
of philosophical stances, some of them apparently quite angry
(Hofstadter, for example). The most complete presentation I have yet
encountered is in the journal for the Behavioral and Brain Sciences
1980, with a complete statement of Searle's original argument,
responses by folks like Fodor, Rorty, McCarthy, Dennett, Hofstadter,
Eccles, etc, and Searle's counterresponse.
People frequently have misconceptions of just what Searle is arguing,
the most common of these being:
Machines cannot have minds.
What Searle really argues is that:
The relation (mind:brain :: software:hardware) is fallacious.
Computers cannot have minds solely by virtue of their running the
correct program.
His position seems to derive from his thoughts in the philosophy of
language, and in particular his notion of Intentionality.
Familiarity with the work of Frege, Russell, Wittgenstein, Quine,
Austin, Putnam, and Kripke would really be helpful if you are
interested in the motivation behind this concept, but Searle
maintains that his Chinese room argument makes sense without any of
that background.
-michael
------------------------------
Date: 29 Nov 86 06:52:21 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
Peter O. Mikes <mordor!pom> at S-1 Project, LLNL wrote:
> An example of ["unexperienced experience"] is subliminal perception.
> Similar case is perception of outside world during
> dream, which can be recalled under hypnosis. Perception
> is not same as experience, and sensation is an ambiguous word.
Subliminal perception can hardly serve as a clarifying example since
its own existence and nature are anything but clearly established.
(See D. Holender (1986) "Semantic activation without conscious
identification," Behavioral and Brain Sciences 9: 1 - 66.) If subliminal
perception exists, the question is whether it is just a case of dim or
weak awareness, quickly forgotten, or the unconscious registration of
information. If it is the former, then it is merely a case of a weak
and subsequently forgotten conscious experience. If it is the latter,
then it is a case of unconscious processing -- one of many, for most
processing is unconscious (and studying it is the theoretical burden of
cognitive science).
Dreaming is a similar case. It is generally agreed (from studies in
which subjects are awakened during dreams) that subjects are conscious
during their dreams, although they remain asleep. This state is called
"paradoxical sleep," because the EEG shows signs of active, waking
activity even though the subject's eyes are closed and he continues to
sleep. Easily awakened in that stage of sleep, the subject can report
the contents of his dream, and indicates that he has been consciously
undergoing the experience, like a vivid day-dream or a hallucination.
If the subject is not awakened, however, the dream is usually
forgotten, and difficult if not impossible to recall. (As usual,
recognition memory is stronger than recall, so sometimes cues will be
recognized as having occurred in a forgotten dream.) None of this
bears on the issue of consciousness, since the consciousness during
dreams is relatively unproblematic, and the only other phenomenon
involved is simply the forgetting of an experience.
A third hypothetical possibility is slightly more interesting, but,
unfortunately, virtually untestable: Can there be unconscious
registration of information at time T, and then, at a later time, T1,
conscious recall of that information AS IF it had been experienced
consciously at T? This is a theoretical possibility. It would still
not make the event at T a conscious experience, but it would mean that
input information can be put on "hold" in such a way as to be
retrospectively experienced at a later time. The later experience
would still be a kind of illusion, in that the original event was NOT
actually experienced at T, as it appears to have been upon
reflection. The nervous system is probably playing many temporal (and
causal) tricks like that within very short time intervals; the question
only becomes dramatic when longer intervals (minutes, hours, days) are
interposed between T and T1.
None of these issues are merely definitional ones. It is true that
"perception" and "sensation" are ambiguous, but, fortunately,
"experience" seems to be less so. So one may want to separate
sensations and perceptions into the conscious and unconscious ones.
The conscious ones are the ones that we were consciously aware of
-- i.e., that we experienced -- when they occurred in real time. The
unconscious ones simply registered information in our brains at their
moment of real-time occurrence (without being experienced), and
the awareness, if any, came only later.
> suggest that we follow the example of acoustics, which solved the
> 'riddle' of falling tree by defining 'sound' as physical effect
> (density wave) and noise as 'unwanted sound' - so that The tree
> which falls in deserted place makes sound but does not make noise.
> Accordingly, perception can be unconcious but experience can't.
Based on the account you give, acoustics solved no problem. It merely
missed the point.
Again, the issue is not a definitional one. When a tree falls, all you
have is acoustic events. If an organism is nearby, you have acoustic
events and auditory events (i.e., physiological events in its nervous
system). If the organism is conscious, it hears a sound. But, unless
you are that organism, you can't know for sure about that. This is
called the mind/body problem. "Noise" and "unwanted sound" have
absolutely nothing to do with it.
> mind and consciousness (or something like that) should be a universal
> quantity, which could be applied to machine, computers...
> Since we know that there is no sharp division between living and
> nonliving, we should be able to apply the measure to everything
We should indeed be able to apply the concept conscious/nonconscious
to everything, just as we can apply the concept living/nonliving. The
question, however, remains: What is and what isn't conscious? And how are
we to know it? Here are some commonsense things to keep in mind. I
know of only one case of a conscious entity directly and with
certainty: My own. I infer that other organisms that behave more or
less the way I would are also conscious, although of course I can't be
sure. I also infer that a stone is not conscious, although of course I
can't be sure about that either. The problem is finding a basis for
making the inference in intermediate cases. Certainty will not be
possible in any case but my own. I have argued that the Total Turing
Test is a reasonable empirical criterion for cognitive science and a
reasonable intuitive criterion for the rest of us. Moreover, it has
the virtue of corresponding to the subjectively compelling criterion
we're already using daily in the case of all other minds but our own.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
End of AIList Digest
********************
∂02-Dec-86 0308 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #275
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 2 Dec 86 03:07:59 PST
Date: Mon 1 Dec 1986 20:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #275
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 275
Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 28 Nov 86 06:27:20 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan
Harnad)
Subject: Re: Searle, Turing, Symbols, Categories
Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes:
> for me it is not the case that I perceive/experience/
> am-directly-aware-of my performance being caused by anything.
> It just happens.
Phenomenology is of course not something it's easy to settle
disagreements about, but I think I can say with some confidence that
most people experience their (voluntary) behavior as caused by THEM.
My point about free will's being an illusion is a subtler one. I am
not doubting that we all experience our voluntary actions as freely
willed by ourselves. That EXPERIENCE is certainly real, and no
illusion. What I am doubting is that our will is actually the cause of our
actions, as it seems to be. I think our actions are caused by our
brain activity (and its causes) BEFORE we are aware of having willed
them, and that our experience of willing and causing them involves a
temporal illusion (see S. Harnad [1982] "Consciousness: An afterthought,"
Cognition and Brain Theory 5: 29 - 47, and B. Libet [1986]
"Unconscious cerebral initiative and the role of conscious will in
voluntary action," Behavioral and Brain Sciences 8: 529 - 566.)
Of course, my task of supporting this position would be much easier if
the phenomenology you describe were more prevalent...
> How do I know I have a mind?... The problem is that if you
> look up "mind" in an English-Dutch dictionary, some eight
> translations are suggested.
The mind/body problem is not just a lexical one; nor can it be settled by
definitions. The question "How do I know I have a mind?" is synonymous
with the question "How do I know I am experiencing anything at all
[now, rather than just going through the motions AS IF I were having
experience, but in fact being only an insentient automaton]?"
And the answer is: By direct, first-hand experience.
> "Consciousness" is more like "appetite"... How can we know for
> sure that other people have appetites as well?... "Can machines
> have an appetite?"
I quite agree that consciousness is like appetite. Or, to put it more
specifically: If consciousness is the ability to have (or the actual
having of) experience in general, appetite is a particular experience
most conscious subjects have. And, yes, the same questions that apply to
consciousness in general apply to appetite in particular. But I'm
afraid that this conclusion was not your objective here...
> Now why is consciousness "real", if free will is an illusion?
> Or, rather, why should the thesis that consciousness is "real"
> be more compelling than the analogous thesis for free will?
> In either case, the essential argument is: "Because I [the
> proponent of that thesis] have direct, immediate, evidence of it."
The difference is that in the case of the (Cartesian) thesis of the
reality of consciousness (or mind) the question is whether there is
any qualitative, subjective experience going on AT ALL, whereas in the
case of the thesis of the reality of free will the question is whether
the dictates of a particular CONTENT of experience (namely, the causal
impression it gives us) is true of the world. The latter, like the
existence of the outside world itself, is amenable to doubt. But the former,
namely, THAT we are experiencing anything at all, is not open to doubt,
and is settled by the very act of experiencing something. That is the
celebrated Cartesian Cogito.
> Sometimes we are conscious of certain sensations. Do these
> sensations disappear if we are not conscious of them? Or do they go
> on on a subconscious level? That is like the question "If a falling
> tree..."
The following point is crucial to a coherent discussion of the
mind/body problem: The notion of an unconscious sensation (or, more
generally, an unconscious experience) is a contradiction in terms!
[Test it in the form: "unexperienced experience." Whatever might that
mean? Don't answer. The Viennese delegation (as Nabokov used to call
it) has already made almost a century's worth of hermeneutic hay with the
myth of the "subconscious" -- a manifest nonsolution to the mind/body
problem that simply consisted of multiplying the mystery by two. The problem
isn't the unconscious causation of behavior: If we were all
unconscious automata there would be no mind/body problem. The problem
is conscious experience. And anthropomorphizing the sizeable portion
of our behavior that we DON'T have the illusion of being the cause of
is not only no solution to the mind/body problem but not even a
contribution to the problem of finding the unconscious causes of
behavior -- which calls for cognitive theory, not hermeneutics.]
It would be best to stay away from the usually misunderstood and
misused problem of the "unheard sound of the falling tree." Typically
used to deride philosophers, the unheard last laugh is usually on the derider.
> Let us agree that the sensations continue at least if it can be
> shown that the person involved keeps behaving as if the concomitant
> sensations continued, even though professing in retrospection not
> to have been aware of them. So people can be afraid without
> realizing it, say, or drive a car without being conscious of the
> traffic lights (and still halt for a red light).
I'm afraid I can't agree with any of this. A sensation may be experienced and
then forgotten, and then perhaps again remembered. That's unproblematic,
but that's not the issue here, is it? The issue is either (1)
unexperienced sensations (which I suggest is a completely incoherent
notion) or (2) unconsciously caused or guided behavior. The latter is
of course the category most behavior falls into. So unconscious
stopping for a red light is okay; so is unconscious avoidance or even
unconscious escape. But unconscious fear is another matter, because
fear is an experience, not a behavior (and, as I've argued, the
concept of an unconscious experience is self-contradictory).
If I may anticipate what I will be saying below: You seem to have
altogether too much intuitive confidence in the explanatory
power of the concept and phenomenology of memory in your views on the
mind/body problem. But the problem is that of immediate, ongoing
qualitative experience. Anything else -- including the specifics of the
immediate content of the experience (apart from the fact THAT it is an
experience) and its relation to the future, the past or the outside
world -- is open to doubt and is merely a matter of inference, rather
than one of direct, immediate certainty in the way experiential matters
are. Hence whereas veridical memories and continuities may indeed happen
to be present in our immediate experiences, there is no direct way that
we can know that they are in fact veridical. Directly, we know only
that they APPEAR to be veridical. But that's how all phenomenological
experience is: An experience of how things appear. Sorting out what's
what is an indirect, inferential matter, and that includes sorting out
the experiences that I experience correctly as remembered from those
that are really only "deja vu." (This is what much of the writing on
the problem of the continuity of personal identity is concerned with.)
> Maybe everything is conscious. Maybe stones are conscious...
> Their problem is, they can hardly tell us. The other problem is,
> they have no memory... They are like us with that traffic light...
> Even if we experience something consciously, if we lose all
> remembrance of it, there is no way in which we can tell for sure
> that there was a conscious experience. Maybe we can infer
> consciousness by an indirect argument, but that doesn't count.
> Indirect evidence can be pretty strong, but it can never give
> certainty. Barring false memories, we can only be sure if we
> remember the experience itself.
Stones have worse problems than not being able to tell us they're
conscious and not being able to remember. And the mind/body problem is not
solved by animism (attributing conscious experience to everything); It
is merely compounded by it. The question is: Do stones have
experiences? I rather doubt it, and feel that a good part of the M/B
problem is sorting out the kinds of things that do have experiences from
the kinds of things, like stones, that do not (and how, and why,
functionally speaking).
If we experience something, we experience it consciously. That's what
"experience" means. Otherwise it just "happens" to us (e.g., when we're
distracted, asleep, comatose or dead), and then we may indeed be like the
stone (rather than vice versa). And if we forget an experience, we
forget it. So what? Being conscious of it does not consist in or
depend on remembering it, but on actually experiencing it at the time.
The same is true of remembering a previously forgotten experience:
Maybe it was so, maybe it wasn't. The only thing we are directly
conscious of is that we experience it AS something remembered.
Inference may be involved in trying to determine whether or not a
memory is veridical, but it is certainly not involved in determining
THAT I am having any particular conscious experience. That fact is
ascertained directly. Indeed it is the ONLY fact of consciousness, and
it is immediate and incorrigible. The particulars of its content, on
the other hand -- what an experience indicates about the outside world, the
past, the future, etc. -- are indirect, inferential matters. (To put
it another way, there is no way to "bar false memories." Experiences
wear their experientiality on their ears, so to speak, but all of the
rest of their apparel could be false, and requires inference for
indirect confirmation.)
> If some things we experience do not leave a recallable trace, then
> why should we say that they were experienced consciously? Or, why
> shouldn't we maintain the position that stones are conscious
> as well?... More useful, then, to use "consciousness" only for
> experiences that are, somehow, recallable.
These stipulations would be arbitrary (and probably false). Moreover,
they would simply fail to be faithful to our direct experience -- to
"what it's like" to have an experience. The "recallability" criterion
is a (weak) external one we apply to others, and to ourselves when
we're wondering whether or not something really happened. But when
we're judging whether we're consciously experiencing a tooth-ache NOW,
recallability has nothing to do with it. And if we forget the
experience (say, because of subsequent anesthesia) and never recall it
again, that would not make the original experience any less conscious.
> the things that go on in our heads are stored away: in order to use for
> determining patterns, for better evaluation of the expected outcome of
> alternatives, for collecting material that is useful for the
> construction or refinement of the model we have of the outside world,
> and so on.
All these conjectures about the functions of memory and other
cognitive processes are fine, but they do not provide (nor can they
provide) the slightest hint as to why all these functional and
behavioral objectives are not simply accomplished UNconsciously. This
shows as graphically as anything how the mind/body problem is
completely bypassed by such functional considerations. (This is also
why I have been repeatedly recommending "methodological
epiphenomenalism" as a research strategy in cognitive modeling.)
> Imagine now a machine programmed to "eat" and also to keep up
> some dinner conversation... IF hunger THEN eat... equipped with
> a conflict-resolution module... dinner-conversation module...
> Speaking anthropomorphically, we would say that the machine is
> feeling uneasy... apology submodule... PROBABLE CAUSE OF eat
> IS appetite... "<<SELF, having, appetite>... <goodness, 0.6785>>"
> How different are we from that machine?
On the information you give here, the difference is likely to be like
night and day. What you have described is a standard anthropomorphic
interpretation of simple symbol-manipulations. Overzealous AI workers
do it all the time. What I believe is needed is not more
over-interpretation of the pathetically simple toy tricks that current
programs can perform, but an effort to model life-size performance
capacity: The Total Turing Test. That will diminish the degrees of
freedom of the model to the size of the normal underdetermination of
scientific theories by their data, and it will augment the problem of
machine minds to the size of the other-minds problem, with which we
are already dealing daily by means of the TTT.
In the process of pursuing that distant scientific goal, we may come to
know certain constraints on the enterprise, such as: (1) Symbol-manipulation
alone is not sufficient to pass the TTT. (2) The capacity to pass the TTT
does not arise from a mere accretion of toy modules. (3) There is no autonomous
symbolic macromodule or level: Symbolic representations must be grounded in
nonsymbolic processes. And if methodological epiphenomenalism is
faithfully adhered to, the only interpretative question we will ever need
to ask about the mind of the candidate system will be precisely the
same one we ask about one another's minds; and it will be answered on
precisely the same basis as the one we use daily in dealing with the
other-minds problem: the TTT.
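[For concreteness, the quoted machine's rule-plus-conflict-resolution scheme can be caricatured in a few lines of code, which arguably illustrates the reply above: the symbol-manipulation at issue is very simple. The names and the 0.6785 score come from the quoted passage; everything else here is invented for illustration.]

```python
# A deliberately crude sketch of the quoted machine: an "IF hunger THEN
# eat" rule plus a conflict-resolution step that scores competing acts.
# The 0.6785 "goodness" figure is taken from the quote; the rest of the
# names and numbers are hypothetical.

def conflict_resolution(acts):
    # Pick the act with the highest "goodness" score.
    return max(acts, key=lambda a: a[1])[0]

def step(hunger, mid_conversation):
    acts = []
    if hunger:
        acts.append(("eat", 0.6785))   # <<SELF, having, appetite>>
        # <goodness, 0.6785>
    if mid_conversation:
        acts.append(("talk", 0.5))     # dinner-conversation module
    return conflict_resolution(acts) if acts else "idle"

assert step(hunger=True, mid_conversation=True) == "eat"
assert step(hunger=False, mid_conversation=True) == "talk"
assert step(hunger=False, mid_conversation=False) == "idle"
```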
> if we ponder a question consciously... I think the outcome is not
> the result of the conscious process, but, rather, that the
> consciousness is a side-effect of the conflict-resolution
> process going on. I think the same can be said about all "conscious"
> processes. The process is there, anyway; it could (in principle) take
> place without leaving a trace in memory, but for functional reasons
> it does leave such a trace. And the word we use for these cognitive
> processes that we can recall as having taken place is "conscious".
Again, your account seems to be influenced by certain notions, such as
memory and "conflict-resolution," that appear to be carrying more intuitive
weight than they can bear. Not only is the issue not that of "leaving
a trace" (as mentioned earlier), but there is no real functional
argument here for why all this shouldn't or couldn't be accomplished
unconsciously. [However, if you substitute for "side-effect" the word
"epiphenomenon," you may be calling things by their proper name, and
providing (inadvertently) a perfectly good rationale for ignoring them
in trying to devise a model to pass the TTT.]
> it is functional that I can raise my arm by "willing" it to raise,
> although I can use that ability to raise it gratuitously. If the
> free will here is an illusion (which I think is primarily a matter
> of how you choose to define something as elusive as "free will"),
> then so is the free will to direct your attention now to this,
> then to that. Rather than to say that free will is an "illusion",
> we might say that it is something that features in the model
> people have about "themselves". Similarly, I think it is better to say
> that consciousness is not so much an illusion, but rather something to
> be found in that model. A relatively recent acquisition of that model is
> known as the "subconscious". A quite recent addition are "programs",
> "sub-programs", "wrong wiring", etc.
My arm seems able to rise in two important ways: voluntarily and
involuntarily (I don't know what "gratuitously" means). It is not a
matter of definition that we feel as if we are causing the motion in
the voluntary case; it is a matter of immediate experience. Whether
or not that experience is veridical depends on various other factors,
such as the true order of the events in question (brain activity,
conscious experience, movement) in real time, and the relation of the
experiential to the physical (i.e., whether or not it can be causal). The
same question does indeed apply to willed changes in the focus of
attention. If free will "is something that features in the model
people have of 'themselves'," then the question to ask is whether that
model is illusory. Consciousness itself cannot be something found in
a model (although the concept of consciousness might be) because
consciousness is simply the capacity to have (or the having of)
experience. (My responses to the concept of the "subconscious" and the
over-interpretation of programs and symbols are described earlier in
this module.)
> A sufficiently "intelligent" machine, able to pass not only the
> dinner-conversation test but also a sophisticated Turing test,
> must have a model of itself. Using that model, and observing its
> own behaviour (including "internal" behaviour!), it will be led to
> conclude not only that it has an appetite, but also volition and
> awareness...Is it mistaken then? Is the machine taken in by an illusion?
> "Can machines have illusions?"
What a successful candidate for the TTT will have to have is not
something we can decide by introspection. Doing hermeneutics on its
putative inner life before we build it would seem to be putting the
cart before the horse. The question whether machines can have
illusions (or appetites, or fears, etc.) is simply a variant on the
basic question of whether any organism or device other than oneself
can have experiences.
--
Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet
------------------------------
End of AIList Digest
********************
∂02-Dec-86 0450 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #276
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 2 Dec 86 04:50:34 PST
Date: Mon 1 Dec 1986 20:53-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #276
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 2 Dec 1986 Volume 4 : Issue 276
Today's Topics:
Administrivia - Proposed Split of This Group,
Philosophy - Searle, Turing, Symbols, Categories
----------------------------------------------------------------------
Date: 1 Dec 86 09:24:05 est
From: Walter Hamscher <hamscher@ht.ai.mit.edu>
Subject: Proposed: a split of this group
I empathize with the spirit of the motion. But is it really
necessary to split the list? I think Ken does a really good
thing by putting warning labels on the philosophical
discussions: they're easy to skip over if you're not interested.
As long as he's willing to put the time into doing that, there's
no need for a split.
------------------------------
Date: Mon 1 Dec 86 10:10:19-PST
From: Stephen Barnard <BARNARD@SRI-IU.ARPA>
Subject: One vote against splitting the list
I for one would not like to see the AI-list divided into two --- one
for "philosophising about" AI and one for "doing" AI. Even those of
us who do AI sometimes like to read and think about philosophical
issues. The problem, if there is one, is that certain people have
been abusing the free access to the list that Ken rightfully
encourages. Let's please keep our postings to a reasonable volume
(per contributor). The list is not supposed to be anyone's personal
soapbox.
------------------------------
Date: 1 Dec 86 18:48:31 GMT
From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: Proposed: a split of this group
Just suggested by jbn@glacier.UUCP (John Nagle):
> I would like to suggest that this group be split into two groups;
>one about "doing AI" and one on "philosophising about AI", the latter
>to contain the various discussions about Turing tests, sentient computers,
>and suchlike.
Good idea. I was beginning to think the discussions of "when is an
artifice intelligent" might belong in "talk.ai." I was looking for
articles about how to do AI, and not finding any. The trouble is,
"comp.ai.how-to" might have no traffic at all.
We seem to be trying to "create artificial intelligence," with the
intent that we can finally achieve success at some point (if only we
knew how to define success). Why don't we just try always to create
something more intelligent than we created before? That way we can not
only claim nearly instant success, but also continue to have further
successes without end.
Would the above question belong in "talk.ai" or "comp.ai.how-to"?
Marty
M. B. Brilliant (201)-949-1858
AT&T-BL HO 3D-520 houem!marty1
------------------------------
Date: Sun, 30 Nov 1986 22:27 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Searle, Turing, Symbols, Categories
Lambert Meertens asks:
If some things we experience do not leave a recallable trace,
then why should we say that they were experienced consciously?
I absolutely agree. In my book, "The Society of Mind", which will be
published in January, I argue, with Meertens, that the phenomena we
call consciousness are involved with our short term memories. This
explains why, as Meertens suggests, it makes little sense to
attribute consciousness to rocks. It also means that there are limits
to what consciousness can tell us about itself. In order to do
perfect self-experiments upon ourselves, we would need perfect records
of what happens inside our memory machinery. But any such machinery
must get confused by self-experiments that try to find out how it
works - since such experiments must change the very records that
they're trying to inspect! This doesn't mean that consciousness
cannot be understood, in principle. It only means that, to study it,
we'll have to use the methods of science, because we can't rely on
introspection.
Below are a few more extracts from the book that bear
on this issue. If you want to get the book itself, it is
being published by Simon and Schuster; it will be printed
around New Year but won't get to bookstores until mid-February.
If you want it sooner, send me your address and I should be
able to send copies early in January. (Price will be 18.95 or
less.) Or send name of your bookstore so I can get S&S to
lobby the bookstore. They don't seem very experienced at books
in the AI-Psychology-Philosophy area.
In Section 15.2 I argue that although people usually assume that
consciousness is knowing what is happening in our minds, right at the
present time, consciousness never is really concerned with the
present, but with how we think about the records of our recent
thoughts. This explains why our descriptions of consciousness are so
queer: whatever people mean to say, they just can't seem to make it
clear. We feel we know what's going on, but can't describe it
properly. How could anything seem so close, yet always keep beyond
our reach? I answer, simply because of how thinking about our short
term memories changes them!
Still, there is a sense in which thinking about a thought is like
thinking about an ordinary thing. Our brains have various agencies
that learn to recognize - and even name - various
patterns of external sensations. Similarly, there must be other
agencies that learn to recognize events *inside* the brain - for
example, the activities of the agencies that manage memories. And
those, I claim, are the bases of the awarenesses we recognize as
consciousness. There is nothing peculiar about the idea of sensing
events inside the brain; it is as easy for an agent (that is, a small
portion of the brain) to be wired to detect a *brain-caused
brain-event*, as to detect a world-caused brain-event. Indeed only a
small minority of our agents are connected directly to sensors in the
outer world, like those that sense the signals coming from the eye or
skin; most of the agents in the brain detect events inside of the
brain! In particular, I claim that to understand what we call
consciousness, we must understand the activities of the agents that
are engaged in using and changing our most recent memories.
Why, for example, do we become less conscious of some things when we
become more conscious of others? Surely this is because some resource
is approaching some limitation - and I'll argue that it is our limited
capacity to keep good records of our recent thoughts. Why, for
example, do thoughts so often seem to flow in serial streams? It is
because whenever we lack room for both, the records of our recent
thoughts must then displace the older ones. And why are we so unaware
of how we get our new ideas? Because whenever we solve hard problems,
our short term memories become so involved with doing *that* that they
have neither time nor space for keeping detailed records of what they,
themselves, have done.
To think about our most recent thoughts, we must examine our recent
memories. But these are exactly what we use for "thinking," in the
first place - and any self-inspecting probe is prone to change just
what it's looking at. Then the system is likely to break down. It is
hard enough to describe something with a stable shape; it is even
harder to describe something that changes its shape before your eyes;
and it is virtually impossible to speak of the shapes of things that
change into something else each time you try to think of them. And
that's what happens when you try to think about your present thoughts
- since each such thought must change your mental state! Would any
process not become confused, which alters what it's looking at?
What do we mean by words like "sentience," "consciousness," or
"self-awareness"? They all seem to refer to the sense of feeling one's
mind at work. When you say something like "I am conscious of what I'm
saying," your speaking agencies must use some records about the recent
activity of other agencies. But, what about all the other agents and
activities involved in causing everything you say and do? If you were
truly self-aware, why wouldn't you know those other things as well?
There is a common myth that what we view as consciousness is
measurelessly deep and powerful - yet, actually, we scarcely know a
thing about what happens in the great computers of our brains.
Why is it so hard to describe your present state of mind? One reason
is that the time-delays between the different parts of a mind mean
that the concept of a "present state" is not a psychologically sound
idea. Another reason is that each attempt to reflect upon your mental
state will change that state, and this means that trying to know your
state is like photographing something that is moving too fast: such
pictures will be always blurred. And in any case, our brains did not
evolve primarily to help us describe our mental states; we're more
engaged with practical things, like making plans and carrying them
out.
When people ask, "Could a machine ever be conscious?" I'm often
tempted to ask back, "Could a person ever be conscious?" I mean this
as a serious reply, because we seem so ill equipped to understand
ourselves. Long before we became concerned with understanding how we
work, our evolution had already constrained the architecture of our
brains. However, we can design our new machines as we wish, and
provide them with better ways to keep and examine records of their own
activities - and this means that machines are potentially capable of
far more consciousness than we are. To be sure, simply providing
machines with such information would not automatically enable them to
use it to promote their own development; and until we can design more
sensible machines, such knowledge might only help them find more ways
to fail: the easier to change themselves, the easier to wreck
themselves - until they learn to train themselves. Fortunately, we
can leave this problem to the designers of the future, who surely
would not build such things unless they found good reasons to.
(Section 25.4) Why do we have the sense that things proceed in
smooth, continuous ways? Is it because, as some mystics think, our
minds are part of some flowing stream? I think it's just the opposite:
our sense of constant steady change emerges from the parts of mind
that manage to insulate themselves against the continuous flow of
time! In other words, our sense of smooth progression from one mental
state to another emerges, not from the nature of that progression
itself, but from the descriptions we use to represent it. Nothing can
*seem* jerky, except what is *represented* as jerky. Paradoxically,
our sense of continuity comes not from any genuine perceptiveness, but
from our marvelous insensitivity to most kinds of changes. Existence
seems continuous to us, not because we continually experience what is
happening in the present, but because we hold to our memories of how
things were in the recent past. Without those short-term memories,
all would seem entirely new at every instant, and we would have no
sense at all of continuity, or of existence.
One might suppose that it would be wonderful to possess a faculty of
"continual awareness." But such an affliction would be worse than
useless because the more frequently your higher-level agencies change
their representations of reality, the harder it is for them to find
significance in what they sense. The power of consciousness comes not
from ceaseless change of state, but from having enough stability to
discern significant changes in your surroundings. To "notice" change
requires the ability to resist it, in order to sense what persists
through time, but one can do this only by being able to examine and
compare descriptions from the recent past. We notice change in spite
of change, and not because of it. Our sense of constant contact with
the world is not a genuine experience; instead, it is a form of what I
call the "Immanence illusion". We have the sense of actuality when
every question asked of our visual systems is answered so swiftly that
it seems as though those answers were already there. And that's what
frame-arrays provide us with: once any frame fills its terminals, this
also fills the terminals of the other frames in its array. When every
change of view engages frames whose terminals are already filled,
albeit only by default, then sight seems instantaneous.
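[Editorial illustration: the frame-array mechanism described above can be sketched in a few lines. This is a speculative toy rendering of the idea, not code from the book; every class and name here is invented. The point it shows is that when frames in an array share terminals, filling a terminal in one view fills it in all of them, so a change of viewpoint finds its answers "already there".]

```python
# Toy sketch of Minsky-style frame-arrays: several frames (views of a
# scene) share one set of terminals.  Unobserved terminals answer with
# defaults; an observation made in any view is instantly available to
# every other view.  All names here are illustrative.

class Terminal:
    def __init__(self, name, default=None):
        self.name = name
        self.default = default   # weak, by-default filler
        self.filler = None       # actual observation, if any

    def value(self):
        # A question asked of an unfilled terminal is answered
        # "instantly" by its default.
        return self.filler if self.filler is not None else self.default

class FrameArray:
    """A family of frames (views) sharing one set of terminals."""
    def __init__(self, views, terminals):
        self.views = views
        self.terminals = {t.name: t for t in terminals}

    def observe(self, view, terminal_name, filler):
        # Filling a terminal in one view fills it for every view.
        self.terminals[terminal_name].filler = filler

    def describe(self, view):
        # All views share the same terminals, so `view` only names
        # the vantage point; the answers are common to the array.
        return {name: t.value() for name, t in self.terminals.items()}

room = FrameArray(
    views=["from-door", "from-window"],
    terminals=[Terminal("chair", default="expected-chair"),
               Terminal("table", default="expected-table")])
room.observe("from-door", "chair", "red-chair")
# The other view already "knows" about the red chair:
assert room.describe("from-window")["chair"] == "red-chair"
assert room.describe("from-window")["table"] == "expected-table"
```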
------------------------------
End of AIList Digest
********************
∂04-Dec-86 0041 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #277
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 4 Dec 86 00:41:03 PST
Date: Wed 3 Dec 1986 22:22-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #277
To: AIList@SRI-STRIPE.ARPA
AIList Digest Thursday, 4 Dec 1986 Volume 4 : Issue 277
Today's Topics:
Seminars - Formal Properties of Version Spaces (Rutgers) &
Machine Learning and Discovery (UTexas) &
Nonmonotonic Inheritance Systems (CMU) &
Possible-World Semantics (CMU) &
Proofs, Deductions, Chains of Reasoning (Buffalo) &
A Higher-Order Logic for Programming (UPenn) &
Non-Strict Class Hierarchies in Modeling Languages (UPenn)
----------------------------------------------------------------------
Date: 1 Dec 86 10:34:26 EST
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - Formal Properties of Version Spaces (Rutgers)
This Thursday, December 4th, at 10 AM in Hill-250, Tony Vandermude will
present a ML talk entitled "Some Formal Properties of Version Spaces".
The abstract follows.
Some Formal Properties of Version Spaces
Tony Vandermude
(vandermu@topaz.rutgers.edu)
A general definition of problems and problem solving is presented and the
technique of Version Spaces is formally defined. Learning using Version
Spaces is compared to Identification in the Limit as found in the work on
Inductive Inference, and some properties of Version Spaces are defined. The
results given address the types of problems that Version Spaces are best
equipped to solve, what characteristics make it possible to apply this
technique and where problems may arise. It is found that when the standard
notion of a Version Space is considered, the learning process is reliable
and consistent with the input, and new versions added to the space must have
a superset-subset relationship to the previous models. It is shown that if
the finite sets and their complements are included as models in the space,
then a Version Space will learn any recursively enumerable class of
recursive sets. However, if the complements of the finite sets are removed,
then even simple classes cannot be learned reliably with a Version Space.
Mention is also made of the effects of error in data presentation - if there
is no a priori method of determining correctness of the data, convergence to
a correct model cannot be guaranteed.
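[Editorial illustration: the abstract's central object, a version space, is just the set of hypotheses consistent with the data seen so far. A minimal sketch over a tiny, enumerable hypothesis space (conjunctions with a '?' wildcard, in the style of Mitchell's original formulation; the attribute names are invented):]

```python
# A version space as the set of hypotheses consistent with the
# training examples.  The hypothesis space here is small enough to
# enumerate outright; '?' matches any attribute value.

from itertools import product

SIZES, COLORS = ["small", "large", "?"], ["red", "blue", "?"]
HYPOTHESES = list(product(SIZES, COLORS))   # e.g. ("small", "?")

def matches(h, x):
    # A hypothesis covers an instance if each attribute either
    # matches exactly or is the wildcard '?'.
    return all(hv in ("?", xv) for hv, xv in zip(h, x))

def version_space(examples):
    # Keep every hypothesis that labels all examples correctly.
    return [h for h in HYPOTHESES
            if all(matches(h, x) == label for x, label in examples)]

examples = [(("small", "red"), True),
            (("large", "red"), True),
            (("large", "blue"), False)]
# Only hypotheses covering both red objects but not the blue one
# survive:
assert version_space(examples) == [("?", "red")]
```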
------------------------------
Date: Mon 1 Dec 86 14:20:07-CST
From: Robert L. Causey <AI.CAUSEY@R20.UTEXAS.EDU>
Subject: Seminar - Machine Learning and Discovery (UTexas)
Philosophy Colloquy
University of Texas at Austin
A COMPUTER SYSTEM FOR LEARNING AND DISCOVERY
by
Arthur W. Burks, Professor Emeritus
University of Michigan, Ann Arbor
Friday, December 5, 3 - 5 p.m.
Philosophy Conference Room, WAG 316
This colloquy will discuss relationships between inductive reasoning, learning,
evolution, and computer designs. Professor Burks will discuss recent work on
classifier systems that he has done together with his colleague, John Holland.
Copies of a background paper, "A Radically Non-Von Architecture for Learning
and Discovery", are available in the Philosophy Department's Brogan Reading
Room.
------------------------------
Date: 1 December 1986 1020-EST
From: Elaine Atkinson@A.CS.CMU.EDU
Subject: Seminar - Nonmonotonic Inheritance Systems (CMU)
SPEAKER: Richmond Thomason, University of Pittsburgh
TITLE: "Issues in the design of nonmonotonic inheritance systems"
DATE: Thursday, December 4
TIME: 4:00 p.m.
PLACE: Adamson Wing, Baker Hall
ABSTRACT: Early attempts at combining multiple inheritance with
exceptions were based on straightforward extensions to tree-
structured inheritance systems, and were theoretically unsound.
Two well-known examples are FRL and NETL. In The Mathematics
of Inheritance Systems (TMOIS), Touretzky described two classes
of problems that these systems cannot handle. One involves
reasoning with true but redundant assertions; the other involves
ambiguity.
The substance of TMOIS was the definition and analysis of a
theoretically sound multiple inheritance system, along with
some inference algorithms based on parallel marker propagation.
Now, however, we find that there appear to be other definitions
for inheritance that are equally sound and intuitive, but which
do not always agree with the system defined in TMOIS. In this
presentation we lay out a partial design space for sound
inheritance systems and describe some interesting properties that
result from certain strategic choices of inheritance definitions.
The best way to define inheritance -- if there is one best way --
may lie somewhere in this space, but we are not yet ready to say
what it might be.
------------------------------
Date: 2 Dec 86 16:06:05 EST
From: Daniel.Leivant@theory.cs.cmu.edu
Subject: Seminar - Possible-World Semantics (CMU)
Professor Robert Tennent of Queen's University (Ontario) will
be visiting the Department from Wednesday (Dec 3rd) to Friday noon
(Dec 5th). People interested in meeting with him should contact
Theona Stefanis (@a, x3825).
======================================================================
LOGIC COLLOQUIUM (CMU/PITT)
Speaker: Robert D. Tennent (Queen's University)
Topic: Possible-World Semantics of Programming Languages and Logics
Time: Thursday, December 4, 3:30
Place: Wean 4605
A category-theoretic formulation of a form of
possible-world semantics allows elegant solutions to some
difficult problems in the modeling of (i) stack-oriented
storage management; (ii) Reynolds's "specification logic" (a
generalization of Hoare's logic for Algol 60-like languages
with procedures); and (iii) side-effect-free block expres-
sions. A recent development has been the realization that
it is possible and desirable to use a kind of generalized
domain theory in this framework. Some additional possible
applications of the approach to modeling abstract interpre-
tations and the polymorphic lambda calculus will also be
sketched.
------------------------------
Date: 2 Dec 86 19:51:19 GMT
From: rutgers!clyde!watmath!sunybcs!rapaport@think.com (William J.
Rapaport)
Subject: Seminar - Proofs, Deductions, Chains of Reasoning (Buffalo)
State University of New York at Buffalo
BUFFALO LOGIC COLLOQUIUM
1986-1987
Fifth Meeting
Tuesday, Dec. 9 4:00 p.m. Baldy 684, Amherst Campus
John Corcoran
Department of Philosophy
SUNY Buffalo
"Proofs, Deductions, Chains of Reasoning"
This talk begins with a brief review of the deductive and hypothetico-
deductive methods and then introduces the distinction between proofs and
deductions. The core of the paper is a discussion of the logical, his-
torical, epistemic, pragmatic, and heuristic ramifications of the dis-
tinction between proofs and deductions.
References:
J. Corcoran, "Conceptual Structure of Classical Logic,"
_Phil. & Phen. Res_ 33 (1972) 25-47.
A. Tarski, _Intro. to Logic_, Ch. 6 (1941).
For more information, contact John Corcoran, (716) 636-2438.
William J. Rapaport
Assistant Professor
Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260
(716) 636-3193, 3180
uucp:
.!{allegra,boulder,decvax,mit-ems,nike,rocksanne,sbcs,watmath}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet
------------------------------
Date: Wed, 3 Dec 86 13:13 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - A Higher-Order Logic for Programming (UPenn)
Dissertation Defense
Computer and Information Science
University of Pennsylvania
A HIGHER-ORDER LOGIC AS THE BASIS FOR LOGIC PROGRAMMING
GOPALAN NADATHUR
(gopalan@cis.upenn.edu)
The objective of this thesis is to provide a formal basis for higher-order
features in the paradigm of logic programming. Towards this end, a
non-extensional form of higher-order logic that is based on Church's simple
theory of types is used to provide a generalisation to the definite clauses of
first-order logic. Specifically, a class of formulas that are called
higher-order definite sentences is described. These formulas extend definite
clauses by replacing first-order terms by the terms of a typed lambda calculus
and by providing for quantification over predicate and function variables. It
is shown that these formulas together with the notion of a proof in the
higher-order logic provide an abstract description of computation that is akin
to the one in the first-order case. While the construction of a proof in a
higher-order logic is often complicated by the task of finding appropriate
substitutions for predicate variables, it is shown that the necessary
substitutions for predicate variables can be tightly constrained in the context
of higher-order definite sentences. This observation enables the description of
a complete theorem-proving procedure for these formulas. The procedure
constructs proofs essentially by interweaving higher-order unification with
backchaining on implication, and constitutes a generalisation to the
higher-order context of the well-known SLD-resolution procedure for definite
clauses. The results of these investigations are used to describe a logic
programming language called lambda Prolog. This language contains all the
features of a language such as Prolog, and, in addition, possesses certain
higher-order features. The uses of these additional features are illustrated,
and it is shown how the use of the terms of a (typed) lambda calculus as data
structures provides a source of richness to the logic programming paradigm.
2:30 pm December 5, 1986
Room 23, Moore School
University of Pennsylvania
Thesis Supervisor: Dale Miller
Committee: Tim Finin, Jean Gallier (Chairman), Andre Scedrov, Richard Statman
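[Editorial illustration: the first-order starting point that the thesis generalizes is backchaining over definite clauses, as in SLD-resolution. A toy propositional sketch (so "unification" degenerates to symbol equality; the thesis replaces it with higher-order unification over typed lambda terms). The example clauses are invented:]

```python
# Backchaining over propositional definite clauses, the degenerate
# case of SLD-resolution.  A clause "head :- b1, ..., bn" is encoded
# as (head, [b1, ..., bn]); a fact has an empty body.

PROGRAM = [
    ("mortal", ["human"]),   # mortal :- human.
    ("human", []),           # human.  (a fact)
]

def prove(goal, program, depth=10):
    """A goal holds if some clause head matches it and every subgoal
    in that clause's body can be proved in turn."""
    if depth == 0:           # crude guard against circular programs
        return False
    return any(head == goal and all(prove(g, program, depth - 1)
                                    for g in body)
               for head, body in program)

assert prove("human", PROGRAM)
assert prove("mortal", PROGRAM)        # via backchaining on the rule
assert not prove("immortal", PROGRAM)  # no clause concludes this
```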
------------------------------
Date: Wed, 3 Dec 86 23:29 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Seminar - Non-Strict Class Hierarchies in Modeling Languages
(UPenn)
DBIG Meeting
Computer and Information Science
University of Pennsylvania
10:30am; 12-5-86; 555 Moore
ON NON-STRICT CLASS HIERARCHIES IN CONCEPTUAL MODELING LANGUAGES.
Alexander Borgida
Rutgers University
One of the cornerstones of the conceptual modeling languages devised for the
specification and implementation of Information Systems is the idea of objects
grouped into classes. I begin by reviewing the various roles played by this
concept: specification of type constraints, repository of logical constraints
to be verified, and maintenance of an associated set of objects (the "extent").
I then consider a second feature of these languages -- the notion of class
hierarchies -- and after outlining its benefits, present arguments against a
strict interpretation of class specialization and the notion of inheritance.
Additional consideration of the concept of "default inheritance" leads to a
list of desirable features for a language mechanism supporting non-strict
taxonomies of classes: ones in which some class definitions may contradict
portions of their superclass definitions, albeit in a controlled way.
I conclude by presenting some preliminary thoughts on a type system and type
verification mechanism which would allow one to check that programs written in
the presence of exceptional types will not go wrong.
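[Editorial illustration: a "non-strict" taxonomy, in which a subclass contradicts part of its superclass definition in a controlled way, can be sketched with ordinary attribute lookup, where the most specific definition shadows the general one. The bird/penguin example is the classic one from the default-inheritance literature, not from the talk:]

```python
# Non-strict specialization via default inheritance: a subclass may
# override (contradict) a property asserted by its superclass, while
# remaining a member of it.

class Bird:
    flies = True           # default property of birds in general

class Penguin(Bird):
    flies = False          # controlled contradiction of the superclass

# Strict specialization would forbid Penguin; default inheritance
# simply lets the more specific definition win.
assert Bird.flies is True
assert Penguin.flies is False
assert issubclass(Penguin, Bird)   # a Penguin is still a Bird
```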
------------------------------
End of AIList Digest
********************
∂04-Dec-86 0234 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #278
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 4 Dec 86 02:34:09 PST
Date: Wed 3 Dec 1986 22:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #278
To: AIList@SRI-STRIPE.ARPA
AIList Digest Thursday, 4 Dec 1986 Volume 4 : Issue 278
Today's Topics:
Course - Parallel Architecture and AI (UPenn)
----------------------------------------------------------------------
Date: 1 Dec 86 19:31:49 EST
From: BORGIDA@RED.RUTGERS.EDU
Subject: Course - Parallel Architecture and AI (UPenn)
Posted-Date: Mon, 17 Nov 86 09:51 EST
From: Tim Finin <Tim@cis.upenn.edu>
Here is a description of a 1 and 1/2 day course we are putting on for
the Army Research Office. We are opening it up to some people from
other universities and nearby industry. We have set a modest fee of
$200 for non-academic attendees; attendance is free for academic colleagues.
Please forward this to anyone who might be interested.
SPECIAL ARO COURSE ANNOUNCEMENT
COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING IN AI APPLICATIONS
As a part of our collaboration with the Army Research Office, we are
presenting a three-day course on computer architectures for parallel
processing with an emphasis on their application to AI problems. Professor
Insup Lee has organized the course which will include lectures by
professors Hossam El Gindi, Vipin Kumar (from the University of Texas at
Austin), Insup Lee, Eva Ma, Michael Palis, and Lokendra Shastri.
Although the course is being sponsored by the ARO for researchers from
various Army research labs, we are making it available to colleagues from
within the University of Pennsylvania as well as some nearby universities
and research institutions.
If you are interested in attending this course, please contact Glenda Kent
at 898-3538 or send electronic mail to GLENDA@CIS.UPENN.EDU and indicate
your intention to attend. Attached is some additional information on the
course.
Tim Finin
TITLE Computer Architectures for Parallel Processing in AI
Applications
WHEN December 10-12, 1986 (from 9:00 a.m. 12/10 to 12:00 p.m.
12/12)
WHERE room 216, Moore School (33rd and Walnut), University of
Pennsylvania, Philadelphia, PA.
FEE $200. for non-academic attendees
PRESENTERS Hossam El Gindi, Vipin Kumar, Insup Lee, Eva Ma, Michael
Palis, Lokendra Shastri
POC Glenda Kent, 215-898-3538, glenda@cis.upenn.edu
Insup Lee, lee@cis.upenn.edu
INTENDED FOR Research and application programmers, technically oriented
managers.
DESCRIPTION This course will provide a tutorial on parallel
architectures, algorithms and programming languages, and
their applications to Artificial Intelligence problems.
PREREQUISITES Familiarity with basic computer architectures, high-level
programming languages, and symbolic logic, knowledge of
LISP and analysis of algorithms desirable.
COURSE CONTENTS This three day tutorial seminar will present an overview of
parallel computer architectures with an emphasis on their
applications to AI problems. It will also supply the
necessary background in parallel algorithms, complexity
analysis and programming languages. A tentative list of
topics is as follows:
- Introduction to Parallel Architectures - parallel
computer architectures such as SIMD, MIMD, and
pipeline; interconnection networks including
ring, mesh, tree, multi-stage, and cross-bar.
- Parallel Architectures for Logic Programming -
parallelism in logic programs; parallel execution
models; mapping of execution models to
architectures.
- Parallel Architectures for High Speed Symbolic
Processing - production system machines (e.g.,
DADO); tree machines (e.g., NON-VON); massively
parallel machines (e.g., Connection Machine,
FAIM).
- Massive Parallelism in AI - applications of the
connectionist model in the areas of computer
vision, knowledge representation, inference, and
natural language understanding.
- Introduction to Parallel Computational Complexity
- formal parallel computation models such as
Boolean circuits, alternating Turing machines,
parallel random-access machines; relations
between sequential and parallel models of
computation; parallel computational complexity of
AI problems such as tree, graph searches,
unification and natural language parsing.
- Parallel Algorithms and VLSI - interconnection
networks for VLSI layout; systolic algorithms and
their hardware implementations.
- Parallel Programming Languages - language
constructs for expressing parallelism and
synchronization; implementation issues.
COMPUTER ARCHITECTURES FOR PARALLEL PROCESSING
IN AI APPLICATIONS
COURSE OUTLINE
The course will consist of seven lectures, each lasting between two and
three hours.
The first lecture introduces the basic concepts of parallel computer
architectures. It explains the organization and applications of different
classes of parallel computer architectures such as SIMD, MIMD, and
pipeline. It then discusses the properties and design tradeoffs of various
types of interconnection networks for parallel computer architectures. In
particular, the ring, mesh, tree, multi-stage, and cross-bar will be
evaluated and compared.
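The tradeoffs the first lecture evaluates can be made concrete with a toy
calculation. The sketch below is not part of the course materials; the
formulas assume p processors, a square mesh without wraparound, and a
complete binary tree with p leaves, and compare one figure of merit, the
network diameter in hops:

```python
# Toy comparison of network diameter (worst-case hop count between two
# processors) for topologies named in the lecture. Formulas are the
# standard ones under the stated assumptions; a crossbar connects any
# input to any output in a single hop.
import math

def diameters(p):
    side = math.isqrt(p)                 # assumes p is a perfect square
    return {
        "ring": p // 2,                  # halfway around the ring
        "mesh": 2 * (side - 1),          # corner to opposite corner
        "tree": 2 * int(math.log2(p)),   # leaf to leaf through the root
        "crossbar": 1,                   # direct connection
    }

for p in (16, 64, 256):
    print(p, diameters(p))
```

A lower diameter generally trades against hardware cost: a crossbar needs
on the order of p^2 switches, while a ring needs only p links.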
The second and third lectures concentrate on parallel architectures for AI
applications. The second lecture overviews current research efforts to
develop parallel architectures for executing logic programs. Topics covered
will include potential for exploiting parallelism in logic programs,
parallel execution models, and mapping of execution models to
architectures. Progress made so far and problems yet to be solved in
developing such architectures will be discussed. The third lecture
overviews the state-of-the-art of architectures for performing high speed
symbolic processing. In particular, we will describe parallel architectures
for executing production systems such as DADO, tree machines (e.g.,
NON-VON), massively parallel machines (e.g., Connection Machine, FAIM).
The fourth lecture explains why the von Neumann architecture is
inappropriate for AI applications and motivates the need for pursuing the
connectionist approach. To justify the thesis, some specific applications
of the connectionist model in the areas of computer vision, knowledge
representation, inference, and natural language understanding will be
discussed. Although the discussions will vary in level of detail, we
plan to examine at least one effort in detail, namely the applicability and
usefulness of adopting a connectionist approach to knowledge representation
and limited inference.
The fifth lecture introduces the basic notions of parallel computational
complexity. Specifically, the notion of ``how difficult it is to solve a
problem in parallel'' is formalized. To formulate this notion precisely, we
will define various formal models of parallel computation such as Boolean
circuits, alternating Turing machines, and parallel random-access machines.
Then, the computational complexity of a problem is defined in terms of the
amount of resources such as parallel time and number of processors needed
to solve it. The relations between sequential and parallel models of
computation, as well as characterizations of ``efficiently parallelizable''
and ``inherently sequential'' problems are also given. Finally, the
parallel computational complexity of problems in AI (e.g., tree and graph
searches, unification and natural language parsing) are discussed.
The sixth lecture discusses how to bridge the gap between design of
parallel algorithms and their hardware implementations using the present
VLSI technology. This lecture will overview interconnection networks
suitable for VLSI layout. Then, different systolic algorithms and their
hardware implementations will be discussed. To evaluate their
effectiveness, we compare how important data structures and operations, such
as queues (FIFO), dictionaries, and matrix manipulation, can be implemented on various
systolic architectures.
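To make the systolic idea concrete, here is a minimal clock-stepped
simulation of a linear systolic array computing a matrix-vector product. It
is a sketch, not taken from the course; the cell timing follows the classic
scheme in which x_j enters cell 0 at step j and moves one cell to the right
per step:

```python
# Toy simulation of a linear systolic array computing y = A x.
# Cell i holds the accumulator for row i; each clock step, x values
# shift one cell to the right and every occupied cell does one
# multiply-accumulate, so all cells work in lockstep on local data.

def systolic_matvec(A, x):
    n = len(A)
    cells = [0] * n                  # accumulator in each cell
    pipe = [None] * n                # x value currently held by each cell
    for step in range(2 * n - 1):    # enough steps to drain the pipeline
        # shift x values one cell to the right
        for i in range(n - 1, 0, -1):
            pipe[i] = pipe[i - 1]
        pipe[0] = x[step] if step < n else None
        # each cell multiplies and accumulates the value it now holds
        for i in range(n):
            if pipe[i] is not None:
                j = step - i         # cell i sees x_j at step i + j
                cells[i] += A[i][j] * pipe[i]
    return cells

print(systolic_matvec([[1, 2], [3, 4]], [5, 6]))  # [17, 39]
```

The point of the arrangement is that each cell needs only nearest-neighbor
wiring and constant local storage, which is what makes such algorithms
attractive for direct VLSI implementation.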
The seventh lecture surveys various parallel programming languages. In
particular, the lecture will describe extensions made to sequential
procedural, functional, and logic programming languages for parallel
programming. Language constructs for expressing parallelism and
synchronization, either explicitly or implicitly, will be overviewed and
their implementation issues will be discussed.
------------------------------
End of AIList Digest
********************
∂04-Dec-86 0437 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #279
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 4 Dec 86 04:37:38 PST
Date: Wed 3 Dec 1986 22:38-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #279
To: AIList@SRI-STRIPE.ARPA
AIList Digest Thursday, 4 Dec 1986 Volume 4 : Issue 279
Today's Topics:
Policy - AI Bibliographic Format & Splitting the List,
Psychology - Subconscious,
Philosophy - Searle, Turing, Nagel
----------------------------------------------------------------------
Date: Wed, 3 Dec 86 16:08:08 est
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: AI Bibliographic format
Can something be done to minimize the idiosyncratic special character
usage in the bibliographies? I don't mind a format with tagged fields,
but one only designed to be read by one particular text formatting system
is a bit much for human readership. Is it really necessary to encode
font changes as something as odd as \s-1...\s0? And I don't even know
how to read entries such as the last one, for the chapters in the
Grishman and Kittredge Sublanguage book.
...
.TS
tab(~);
l l.
N.Sager~T{
Sublanguage: Linguistic Phenomenon, Computational Tool
T}
J. Lehrberger~Sublanguage Analysis
E. Fitzpatrick~T{
The Status of Telegraphic Sublanguages
T}
J. Bachenko
D. Hindle
J. R. Hobbs~Sublanguage and Knowledge
D. E. Walker~T{
The Use of Machine Readable Dictionaries in Sublanguage Analysis
T}
R. A. Amsler
C. Friedman~T{
Automatic Structuring of Sublanguage Information: Application to
Medical Narrative
T}
E. Marsh~T{
General Semantic Patterns in Different Sublanguages
T}
C. A. Montgomery~T{
A Sublanguage for Reporting and Analysis of Space Events
T}
B. C. Glover
T. W. Finin~T{
Constraining the Interpretation of Nominal Compounds in a Limited Context
T}
G. Dunham~T{
The Role of Syntax in the Sublanguage of Medical Diagnostic Statements
T}
J. Slocum~T{
How One Might Automatically Identify and Adapt to a Sublanguage
T}
L. Hirschman~T{
Discovering Sublanguage Structures
T}
.TE
Huh!!!
------------------------------
Date: Wed, 03 Dec 86 09:02:05 -0500
From: dchandra@ATHENA.MIT.EDU
Subject: Re: AIList Digest V4 #276
Hi,
Pls do NOT split the group. Several reasons:
* It is nice to know what is going on in all parts of AI
* One can always skip over stuff one does not want to read
* If I have something of interest to more than one group then
I will have to send info to all the groups
* MOST importantly, reading notesfiles takes time. If we introduce
more notesfiles, one will have to wade through many more mailing
lists.
Thanks
Navin CHandra
IESL
MIT
------------------------------
Date: 3 Dec 86 13:08 EST
From: SHAFFER%SCOVCB.decnet@ge-crd.arpa
Subject: the long debate of philosophical issues
In response to the idea of splitting the group, I think that
in the long run it would be a bad idea. But I do support the later
suggestion that the length of these dialogs must be limited. As we
at GE get things on a limited "bunch" basis, we look through the topics
first before reading all of the bulletins. Recently the list has become
overloaded with long-winded, one-sided, very, very long speeches.
Besides the pure waste of computer time and disk space, the people
arguing are not going to change their minds; they are just exercising
their fingers. I am not fluent in the language of this "Turing, Searle"
debate, but I can see that the points of interest are becoming a bit
on the "off on a tangent" side. Let's all encourage discussion, but
let's give everyone a chance to bring up interesting and beneficial topics.
Let's not spend the board's entire volume on whether a computer "feels".
Earl Shaffer, GE, Philadelphia
------------------------------
Date: Wed, 3 Dec 86 15:58:50 est
From: amsler@flash.bellcore.com (Robert Amsler)
Subject: Splitting the List
I don't think the idea of splitting the list is practical. The real
question is whether the philosophy discussion can sustain a whole
mailing list on its own. I doubt it could. This is a topic which
will eventually fade and to split the list doubles the work for the
moderator. Is someone offering to become the new moderator of the
AI Philosophy list?
[I should mention that there is a Metaphilosophers list at
MIT-OZ@MC.LCS.MIT.EDU, as well as the Psychnet Newsletter
from EPsynet%UHUMVM1.BITNET@WISCVM. The Phil-Sci list at MIT
used to carry much more of such philosophical discussion than
AIList has had recently. (Part of that was due to the quotations
being nested four levels deep, which obviously multiplies the
net traffic.) I am surprised -- but relieved -- that so few
AIList readers have participated in these exchanges. Perhaps
the philosophers dropped out long ago because AIList has had
so little discussion of AI foundations. My own bias is toward
computational techniques for coaxing more intelligent behavior
from computers, regardless of theoretical adequacy. -- KIL]
------------------------------
Date: Tue, 2 Dec 86 23:44:14 EST
From: "Keith F. Lynch" <KFL%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Subconscious
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes:
> Sometimes we are conscious of certain sensations. Do these
> sensations disappear if we are not conscious of them? Or do they
> go on on a subconscious level? ...
The following point is crucial to a coherent discussion of the
mind/body problem: The notion of an unconscious sensation (or, more
generally, an unconscious experience) is a contradiction in terms!
[Test it in the form: "unexperienced experience." Whatever might that
mean? Don't answer. The Viennese delegation (as Nabokov used to call
it) has already made almost a century's worth of hermeneutic hay with the
myth of the "subconscious" -- a manifest nonsolution to the mind/body
problem that simply consisted of multiplying the mystery by two.
There is plenty of evidence for the subconscious, i.e. something that
acts like a person but whose thoughts and actions one is not
conscious of.
One explanation is that the subconscious is a separate consciousness.
Split-brain experiments give convincing evidence that there can be at
least two separate consciousnesses in one individual. Does the brain-
splitting operation create a new consciousness? Or were there always
two?
...Keith
------------------------------
Date: 30 Nov 86 17:25:52 GMT
From: mcvax!ukc!rjf@seismo.css.gov (R.J.Faichney)
Subject: Re: Searle, Turing, Nagel
In article <230@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>On mod.ai, rjf@ukc.UUCP <8611071431.AA18436@mcvax.uucp>
>Rob Faichney (U of Kent at Canterbury, Canterbury, UK) made
>nonspecific reference ...
Sorry - the articles at issue were long gone, before I learned how to
use this thing.
>... I'm not altogther certain ... intended as a followup to ...
>"Searle, Turing, Categories, Symbols," but ...
>I am responding on the assumption that it was.
It was not. See below.
>... Whether consciousness is a necessary
>condition for intelligence is probably undecidable, and goes to the
>heart of the mind/body problem and its attendant uncertainties.
We have various ways of getting around the problems of inadequate definitions
in these discussions, but I think we've run right up against it here. In
psychological circles, as you know, intelligence is notorious for being
difficult to define.
>The converse proposition -- that intelligence is a necessary condition for
>consciousness is synonymous with the proposition that consciousness is
>a sufficient condition for intelligence, and this is indeed being
>claimed (e.g., by me).
The problem here is whether we define intelligence as implying consciousness.
I am simply suggesting that if we (re)define intelligence as *not* implying
consciousness, we will lose nothing in terms of the utility of the concept
of intelligence, and may gain a great deal regarding our understanding of the
possibilities of machine intelligence and/or consciousness.
>If the word
>"intelligence" has any meaning at all, over and above displaying ANY
>arbitrary performance at all...
I'm afraid that I don't think it has very much meaning, beyond the naive,
relative usage of 'graduates tend to be more intelligent than non-graduates'.
>...the Total Turing Test...amounts to equating
>intelligence with total performance capacities ...
>... also coincides with our only basis for inferring that
>anyone else but ourselves has a mind (i.e., is conscious).
>There is no contradiction between agreeing that intelligence admits
>of degrees and that mind is all-or-none.
But intelligence implies mind? Where do we draw the line? Should an IQ of
>= 40 mean that something is conscious, while < 40 denotes a mindless
automaton? You say your Test allows for cross-species and pathological
variants, but surely this relative/absolute contradiction remains.
>> Animals probably are conscious without being intelligent. Machines may
>> perhaps be intelligent without being conscious.
>Not too good to be true: Too easy.
Granted. I failed to make clear that I was proposing a (re)definition of
intelligence, which would retain the naive usage - including that animals are
(relatively) unintelligent - while dispensing with the theoretical problems.
>...the empirical question of what intelligence is cannot be settled by a
>definition...
Indeed, it cannot begin to be tackled without a definition, which is what
I am trying to provide. My proposition does not settle the empirical
question - it just makes it manageable.
>Nagel's point is that there is
>something it's "like" to have experience, i.e., to be conscious, and
>that it's only open to the 1st person point of view. It's hence radically
>unlike all other "objective" or "intersubjective" phenomena in science
>(e.g., meter-readings)...
Surely intersubjectivity is at least as close to subjectivity as to
objectivity. Instead of meter readings, take as an example the mother-
child relationship. Like any other, it requires responsive feedback, in
terms in this case of cuddling, cooing, crying, smiling, and it is where
the baby learns to relate and communicate with others. I say that its one
*essential* characteristic is intersubjectivity. Though the child does not
consciously identify with the adult, there is nevertheless an intrinsic
tendency to copy gestures, etc., which will be complemented and completed
at maturity by a (relatively) unselfish appreciation of the other person's
point of view. This tendency is so profound, and so bound to our origins,
both ontogenetic and phylogenetic, that to ascribe consciousness to something
man-made, no matter how perfect its performance, will always require an
effort of will. Nor could it ever be intellectually justified.
The ascription of consciousness says infinitely more about the ascriptor
than the ascriptee. It means 'I am willing and able to identify with this
thing - I really believe that it is like something to be this thing.' It
is inevitably, intrinsically spontaneous and subjective. You may be willing
to identify with something which can do anything you can. I am not. And,
though this is obviously sheer guesswork, I'm willing to bet a lot of money
that the vast majority of people (*not* of AIers) would be with me. And, if
you agree that it's subjective, why should anyone know better than the man
in the street? (I'm speaking here, of course, about what people would do,
not what they think they might do - I'm not suggesting that the problem
could be solved by an opinion poll!)
>> So what, really, is consciousness? According to Nagel...
>> This accords with Minsky (via Col. Sicherman):
>> 'consciousness is an illusion to itself but a genuine and observable
>> phenomenon to an outside observer...'
>The quote above (via the Colonel) is PRECISELY THE OPPOSITE of Nagel's
>point. The only aspect of conscious experience that involves direct
>observability is the subjective, 1st-person aspect...
>Let's call this private terrain Nagel-land.
>The part others "can identify" is Turing-land: Objective, observable
>performance (and its structural and functional substrates). Nagel's point
>is that Nagel-land is not reducible to Turing-land.
The part others "can identify with" is Nagel-land. People don't identify
structural and functional substrates, they just know what it's like to be
people. This fact does not belong to purely subjective Nagel-land or to
perfectly objective Turing-land. It has some features of each, and
transcends both. Consciousness as a fact is not directly observable - it
is direct observation. Consciousness as a concept is not directly observable
either, but it is observable in a very special way, which for *practical*
purposes is incorrigible, to the extent that it is not testable, but our
intuitions seem perfectly workable. It cannot examine itself ('...is an
illusion to itself...') but may quite validly be seen in others ('...a
genuine and observable fact to an outside observer...').
>... hardly amounts to an objective contribution to cognitive science.
I'm not interested in the Turing Test (see above) but surely to clarify
the limits of objectivity is an objective contribution.
>> It may perhaps be supposed that the concept of consciousness evolved
>> as part of a social adaptation...
>Except that Nagel would no doubt suggest (and I would agree) that
>there's no reason to believe that the asocial or minimally social
>animals are not conscious too.
I said the *concept* of consciousness...
>> ...When I suppose myself to be conscious, I am imagining myself
>> outside myself...
>When I feel a pain -- when I am in the qualitative state of
>knowing what it's like to be feeling a pain -- I am not "supposing"
>anything at all.
When I feel a pain I'm being conscious. When I suppose etc., I'm thinking
about being conscious. I'm talking here about thinking about it, because
in order to ascribe consciousness to a machine, we first have to think about
it, unlike our ascription of consciousness to each other. Unfortunately,
such intrinsically subjective ascriptions are much more easily made via
spontaneity than via rationalisation. I would say, in fact, that they may
only be spontaneous.
>Some crucial corrections that may set the whole matter in a rather different
>light: Subjectively (and I would say objectively too), we all know that
>OUR OWN consciousness is real.
Agreed.
>Objectively, we have no way of knowing
>that anyone else's consciousness is real.
Agreed.
>Because of the relationship
>between subjectivity and objectivity, direct knowledge of the kind we
>have in our own case is impossible in any other.
Agreed.
>The pragmatic
>compromise we practice every day with one another is called the Total
>Turing Test:
I call it natural, naive intersubjectivity.
>Ascertaining that others behave indistinguishably from our
>paradigmatic model for a creature with consciousness: ourselves.
They may behave indistinguishably from ourselves, but it's not only snobs
who ask 'What do we know about their background?'. That sort of information
is perfectly relevant. Why disallow it? And why believe that a laboratory-
constructed creature feels like I do, no matter how perfect it's social
behaviour? Where subjectivity is all, prejudice can be valid, even
necessary. What else do we have?
>...a predictive and explanatory causal theory of mind.
Is something that we can get by without.
>...if we follow Nagel, our inferences are not meaningless, but in some
>respects incomplete and undecidable.
I may be showing my ignorance, but to me if something is (inevitably?)
'incomplete and undecidable', it's pretty nearly meaningless for most
purposes.
To sum up: there is actually quite a substantial area of agreement between
us, but I don't think that you go quite far enough. While I cannot deny
that much may be learned from attempting computer and/or robot simulation
of human performance, there remains the fact that similar ends may be
achieved by different means; that a perfectly convincing robot might differ
radically from us in software as well as hardware. In short, I think that
the computer scientists have much more to gain from this than the
psychologists. As a former member of the latter category, and a present
member of the former (though not an AIer!), I am not complaining.
--
Robin Faichney
UUCP: ...mcvax!ukc!rjf Post: RJ Faichney,
Computing Laboratory,
JANET: rjf@uk.ac.ukc The University,
Canterbury,
Phone: 0227 66822 Ext 7681 Kent.
CT2 7NF
------------------------------
End of AIList Digest
********************
∂08-Dec-86 2358 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #280
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 8 Dec 86 23:58:25 PST
Date: Mon 8 Dec 1986 21:44-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #280
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 9 Dec 1986 Volume 4 : Issue 280
Today's Topics:
Correction - Parallel Architectures Course,
Seminars - Interval Temporal Logic for Parallel Programming (SRI) &
An EBG system which Learns from Failures (Rutgers) &
Parallelization of Alpha-Beta Search (CMU),
Conferences - Logical Solutions to the Frame Problem &
AI in Engineering
----------------------------------------------------------------------
Date: Thu, 4 Dec 86 15:27 EST
From: Tim Finin <Tim@cis.upenn.edu>
Subject: Correction - Parallel Architectures Course
***** IMPORTANT CORRECTION *****
Alex Borgida forwarded a message from me to AILIST that needs correcting.
This message concerns a short course on "Computer Architectures For Parallel
Processing In AI Applications" that we are giving here at Penn next week for
the Army Research Office. In this message I announced that the course would
be open to colleagues from nearby institutions. Unfortunately, since I sent
Alex the message, the status of the course has changed and it will no longer
be open to outside people other than those whom ARO is sponsoring. We're
sorry for this confusion.
Tim
[Actually, I was the one who forwarded the message from a
local bboard. I should have added a note to that effect. -- KIL]
------------------------------
Date: Fri 5 Dec 86 11:44:40-PST
From: Amy Lansky <LANSKY@SRI-VENICE.ARPA>
Subject: Seminar - Interval Temporal Logic for Parallel Programming
(SRI)
USING INTERVAL TEMPORAL LOGIC FOR PARALLEL PROGRAMMING
Roger Hale
Computer Laboratory
Cambridge University, England
4:15 PM, WEDNESDAY, December 10
SRI International, Building E, Room EJ228
Interval Temporal Logic (ITL) was originally proposed by Moszkowski
for reasoning about the behaviour of hardware devices. Since then it
has shown itself to have a much wider field of application, and has
been used to specify a variety of concurrent and time-dependent
systems at different levels of abstraction. Moreover, it has been
found that a useful subset of ITL specifications are executable in the
programming language TEMPURA. Experience gained from prototyping
temporal logic specifications in Tempura leads us to believe that this
is a practical (and enjoyable) way to produce formal specifications.
In the talk I will present some temporal logic specifications which
are also Tempura programs, and will indicate how these programs are
executed by the Tempura interpreter. I will give examples of both
high- and low-level specifications, and will describe a way to relate
different levels of abstraction. In conclusion, I will outline some
future plans, which include the provision of a decision support system
for Tempura.
VISITORS: Please arrive 5 minutes early so that you can be escorted up
from the E-building receptionist's desk. Thanks!
NOTICE CHANGE IN USUAL DAY AND TIME!! (Wednesday, 4:15)
------------------------------
Date: 8 Dec 86 13:11:42 EST
From: Tom Fawcett <FAWCETT@RED.RUTGERS.EDU>
Subject: Seminar - An EBG system which Learns from Failures (Rutgers)
On Thursday, December 11th in Hill-250 at 10 AM, Neeraj Bhatnagar will
present a talk on learning from failures. The abstract follows.
PLEASE BE PROMPT; we only have the room until 11:10.
AN EBG SYSTEM THAT LEARNS FROM ITS FAILURES
I shall discuss my implementation of a design system that learns from its
failures. The learning technique used is explanation-based
generalization, widely reported in the literature, with the modification that
our system tries to explain the failures that it encounters in its search
for a solution. These explanations give necessary conditions for success, which
are used for pruning out the unacceptable solutions. The implemented system
reported here acts as a Generate and Test (GT) problem solver in its general
problem solver mode. In its learning mode it tries to explain the reason why
a generated solution turned out to be unacceptable and generalizes this
explanation to prune out the failure paths in the future.
The test bed for experimenting with the suggested technique is a restricted
version of the floor planning domain. Due to the restrictions we impose
on the operators used for planning, the failures that can occur while
planning become monotonic in nature, which facilitates their
detection and explanation, and recovery from them.
Time permitting, I shall also discuss some of the future directions of my
research which include detection, proof and recovery from non-monotonic
failures, defining new terms and new operators in the context of explanation
based learning and a suggested method for making more effective use of the
knowledge learned by explanation based generalization.
------------------------------
Date: 5 Dec 86 16:29:55 EST
From: Feng-Hsiung.Hsu@unh.cs.cmu.edu
Subject: Seminar - Parallelization of Alpha-Beta Search (CMU)
Large Scale Parallelization of Alpha-beta Search:
An Algorithmic and Architectural Study
Feng-hsiung Hsu
Time: Thursday, 6:00 pm, Dec. 11
Place: WH 4605
Abstract
This proposal presents a class of new parallel alpha-beta algorithms that
give speedup arbitrarily close to linear when the game tree is
best-first ordered and sufficiently deep. It will also be shown that the
parallel algorithms strictly dominate the weaker form of alpha-beta
algorithm that does not use deep cutoff; that is, they never search a node
that is not explored by the weaker form of alpha-beta algorithm, and usually
search fewer nodes. Preliminary simulation results indicate that the
parallel algorithms are actually much better than the weak alpha-beta in
terms of the number of nodes searched. Moreover, unlike previous parallel
algorithms, the new parallel algorithms do not degrade drastically when
the number of processors exceeds a certain small number, typically around 6
to 8. In fact, based on simulation data, it appears that no serious
degradation of speedup would occur before technological considerations such
as system reliability limit the maximum speedup.
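For readers unfamiliar with the baseline, a minimal sequential alpha-beta
with deep cutoff (the stronger form the abstract contrasts with) can be
sketched as follows. The nested-list game tree is a hypothetical example,
not from the talk:

```python
# Minimal sequential alpha-beta with deep cutoff (the sequential
# baseline; the seminar's parallel algorithms build on this). The game
# tree is a nested-list representation: leaves are ints (static
# evaluations), internal nodes are lists of children.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, int):          # leaf: return static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cutoff: prune remaining children
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:          # alpha cutoff
                break
        return value

# 3-ply example; best-first ordering maximizes pruning.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree))  # 5
```

With best-first ordering, whole subtrees are cut off before being visited,
which is why parallel versions must be careful not to search nodes the
sequential algorithm would have pruned.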
As an example of the applications of the parallel algorithms, the
possibility and complications of applying the algorithms to computer chess
will be examined. A new design for special purpose chess processors that is
orders of magnitude smaller than existing designs is presented as the basis
for a proposed multi-processor chess machine. Based on the measured data
from a single-chip chess move generator that has already been fabricated, it
is estimated that, with a 3-micron CMOS process, a 3-chip chess processor
(two custom chips and one commercial SRAM) searching about one million
positions per second can be built. Some architectural considerations on how
to coordinate a vast number of such processors will be presented here. In the
case that the proposed multi-processor machine cannot be completed in time,
a small scale system will be built using off-the-shelf components and the
move generators.
------------------------------
Date: Wed, 3 Dec 86 13:20:02 CST
From: Glenn Veach <veach%ukans.csnet@RELAY.CS.NET>
Subject: Conference - Logical Solutions to the Frame Problem
FINAL CALL
FOR PARTICIPATION
WORKSHOP ON LOGICAL SOLUTIONS TO THE FRAME PROBLEM
The American Association for Artificial Intelligence (AAAI) is
sponsoring this workshop in Lawrence, Kansas, 13, 14, 15 April 1987.
The frame problem is one of the most fundamental problems in
Artificial Intelligence and essentially is the problem of describing in
a computationally reasonable manner what properties persist and what
properties change as actions are performed. The intrinsic problem lies in
the fact that we cannot expect to be able to exhaustively list for every
possible action (or combination of concurrent actions) and for every
possible state of the world how that action (or those concurrent actions) changes
the truth or falsity of each individual fact. We can only list the obvious
results of the action and hope that our basic inferential system will be
able to deduce the truth or falsity of the other less obvious facts.
In recent years there have been a number of approaches to constructing
new kinds of logical systems such as non-monotonic logics, default logics,
circumscription logics, modal reflexive logics, and persistence logics which
hopefully can be applied to solving the frame problem by allowing the missing
facts to be deduced. This workshop will attempt to bring together the
proponents of these various approaches.
Papers on logics applicable to the problem of reasoning about such
unintended consequences of actions are invited for consideration. Two
copies of a full length paper should be sent to the workshop chairman
before Dec. 19, 1986. Acceptance notices will be mailed by December 26,
1986 along with instructions for preparing the final versions of accepted
papers. The final versions are due February 1, 1987.
In order to encourage vigorous interaction and exchange of ideas
the workshop will be kept small -- about 25 participants. There will
be individual presentations and ample time for technical discussions.
An attempt will be made to define the current state of the art and future
research needs.
Partial financial support for participants is available.
Workshop Chairman:
Dr. Frank M. Brown
Dept. Computer Science
110 Strong Hall
The University of Kansas
Lawrence, Kansas
(913) 864-4482
mail net inquiries to: veach%ukans@csnet-relay.csnet
------------------------------
Date: Fri, 05 Dec 86 08:55:46 -0500
From: sriram@ATHENA.MIT.EDU
Subject: Conference - AI in Engineering
The call for papers for the SECOND INTERNATIONAL CONFERENCE ON
APPLICATIONS OF ARTIFICIAL INTELLIGENCE IN ENGINEERING appeared a
little late in the SIGART newsletter. A number of people requested
that we extend the deadline. In response to their request the last
date for submission of a 1000 word abstract is extended to Dec. 15th.
For more information on this conference contact Bob Adey at
617-933-7374 (The call for papers appeared in a previous issue of the
AILIST).
Sriram
------------------------------
End of AIList Digest
********************
∂09-Dec-86 0207 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #281
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Dec 86 02:07:15 PST
Date: Mon 8 Dec 1986 22:01-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #281
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 9 Dec 1986 Volume 4 : Issue 281
Today's Topics:
Administrivia - BITNET Distribution,
Queries - Lisp for the Mac & Lisp Lore & Little Lisper &
Lisp Performance Benchmarks & Bibliographic Formatter,
AI Tools - Object-Oriented Programming in AI,
Ethics - AI and the Arms Race,
Policy - Proposed Split
----------------------------------------------------------------------
Date: Mon 8 Dec 86 21:52:59-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Administrivia - BITNET Distribution
I have been told that some of the current Arpanet congestion could
be cleared up if AIList (and the other lists) find BITNET moderators
willing to maintain a mailing list and forward the digest to all
interested BITNET sites. I could provide the initial list and could
continue to send the welcome message to new participants, so the
effort in maintaining the address list would be minimal. Is there
someone willing to perform this service?
-- Ken Laws
------------------------------
Date: 5 Dec 86 20:33:33 GMT
From: rutgers!princeton!puvax2!6111231%PUCC.BITNET@lll-crg.arpa
(Peter Wisnovsky)
Subject: Lisp for the Mac
Can anyone recommend a good Lisp for the Macintosh?
Also, if someone has a working version of Mac XLisp
(1.4 or higher) I would appreciate it if they would
mail it to me: the copy on MacServe is defective
and the author has not answered the mail I sent him.
Peter Wisnovsky
Virtual Address: UUCP: ...ihnp4!psuvax1!6111231@pucc.bitnet
Physical Address: 179 Prospect Avenue
Princeton, New Jersey 08540
(609)-734-7852
------------------------------
Date: 1 Dec 86 21:50:55 GMT
From: ubc-vision!razzell@beaver.cs.washington.edu (Dan Razzell)
Subject: Book enquiry
Has anybody read:
Hank Bromley, "Lisp Lore: A Guide to Programming the Lisp Machine",
1986, Kluwer Academic Publishers, ISBN 0-89838-220-3
This purports to be a tutorial introduction to building programs on
the Symbolics Lisp machine. Unfortunately, it is said to focus on
details of Zetalisp, and seems a bit lightweight, judging by the
table of contents that the publisher puts out in its brochure.
--
______________________________________________________
.↑.↑. Dan Razzell <razzell@vision.ubc.cdn>
. o o . Laboratory for Computational Vision
. >v< . University of British Columbia
______mm.mm___________________________________________
------------------------------
Date: Sat, 6 Dec 86 22:56:03 PST
From: Thomas Eric Brunner <brunner@spam.istc.sri.com>
Subject: little (*fun*) lisper, title/author?
When I worked in Bracknell, someone there was kind enough to let me read a
little booklet called (I think) "THE LITTLE LISPER". I haven't found it in
my post-move-to-sunny-California boxes...Does this ring a bell to anyone?
I'd like to buy a copy - it was a "nice", and illustrated, text on lisp.
Thanks for the pointers!
Eric
------------------------------
Date: Mon, 8 Dec 86 22:19 EST
From: Bill Pase <Pase@DOCKMASTER.ARPA>
Subject: Lisp Performance Benchmarks
Does anyone know if the Lisp performance benchmarks used in the book by
Gabriel are available on the net somewhere?? /bill
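[The benchmarks in question are from Gabriel's "Performance and Evaluation of Lisp Systems" (1985). The best known of them is TAK; a transliteration into Python (the originals are, of course, Lisp) gives the flavor of what they measure:]

```python
# Gabriel's TAK benchmark, transliterated from Lisp for illustration;
# it stresses function-call and recursion overhead almost exclusively.

def tak(x, y, z):
    if not (y < x):
        return z
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

print(tak(18, 12, 6))   # the standard benchmark call; returns 7
```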
------------------------------
Date: Thu, 4 Dec 86 08:39 ???
From: "William E. Hamilton, Jr."
Subject: AI Bibliographic format
I emphatically agree with the following comment from Robert Amsler:
>Date: Wed, 3 Dec 86 16:08:08 est
>From: amsler@flash.bellcore.com (Robert Amsler)
>Subject: AI Bibliographic format
>Can something be done to minimize the idiosyncratic special character
>usage in the bibliographies? I don't mind a format with tagged fields,
>but one only designed to be read by one particular text formatting system
>is a bit much for human readership. Is it really necessary to encode
>font changes as something as odd as \s-1...\s0 and I don't even know
>how to read entries such as the last one for the chapters in the
>Grishman and Kittridge Sublanguage book.
For those of us who don't know how to interpret the bibliographic
entries, why not circulate a specification for interpreting them, or
tell us where we can get the text formatting software
Amsler mentions. If this formatter is another piece of unix
esoterica, are there versions which work under vms?
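[For what it's worth, \s-1...\s0 is a troff point-size escape -- drop one point, then restore -- often used to fake small capitals. A minimal, hypothetical sketch of stripping such escapes for human reading; it assumes only size (\s) and font (\f) codes occur in the entries:]

```python
import re

# Hypothetical sketch: strip troff size escapes such as \s-1...\s0
# (point-size change, often used to fake small caps) and font escapes
# such as \fI...\fP so bibliography entries read as plain text.

def strip_troff_escapes(entry):
    entry = re.sub(r"\\s[+-]?\d+", "", entry)   # \s-1, \s0, \s+2 ...
    entry = re.sub(r"\\f[A-Z]", "", entry)      # \fI, \fB, \fR, \fP ...
    return entry

print(strip_troff_escapes(r"\s-1AAAI\s0 Proceedings, \fIAI Magazine\fP"))
# AAAI Proceedings, AI Magazine
```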
Bill Hamilton
GM Research Labs
Computer Science Dept
313 986 1474
hamilton@gmr.com
------------------------------
Date: 4 Dec 86 10:19 PST
From: Stern.pasa@Xerox.COM
Subject: OOP in AI
The responses to my question of a few weeks ago, regarding publications
discussing OOP and AI:
Xerox PARC work in OOP has a long history, with a flurry of publishing
recently, including AI Magazine 6(4), 1986, for "Object-oriented Programming:
Themes and Variations" and Science, 28 Feb, Vol 231 "Perspectives on AI
Programming", both by D. Bobrow and M. Stefik. Other responses referred
me to our (note - I work for Xerox) LOOPS knowledge programming system.
Some responses referred me to KEE, ART (?) and Flavors. A couple of
people described their own work in progress on OOP languages or systems.
I had completely missed the SIGPlan issue (V21, #10, Oct 86) on the OOP
Workshop at IBM Watson Research Center organized by them and Peter
Wegner of Brown University, who includes his own excellent paper in the
proceedings.
Josh
------------------------------
Date: 4 Dec 86 00:16:44 GMT
From: sdcrdcf!burdvax!blenko@OBERON.USC.EDU (Tom Blenko)
Subject: Re: AI and the Arms Race
In article <863@tekchips.UUCP> willc@tekchips.UUCP (Will Clinger) writes:
|In article <2862@burdvax.UUCP> blenko@burdvax.UUCP (Tom Blenko) writes:
|>If Weizenbaum or anyone else thinks he or she can succeed in weighing
|>possible good and bad applications, I think he is mistaken. Wildly
|>mistaken.
|>
|>Why does Weizenbaum think technologists are, even within the bounds of
|>conventional wisdom, competent to make such judgements in the first
|>place?
|
|Is this supposed to mean that professors of moral philosophy are the only
|people who should make moral judgments? Or is it supposed to mean that
|we should trust the theologians to choose for us? Or that we should leave
|all such matters to the politicians?
Not at all. You and I apparently agree that everyone does, willingly or
not, decide what they will do (not everyone would agree with even
that). I claim that they are simply unable to decide on the basis of
knowing what the good and bad consequences of introducing a technology
will be. And I am claiming that technologists, by and large, are less
competent than they might be by virtue of their ignorance of the
criteria professors of moral philosophy, theologians, nuclear plant
designers, and politicians bring to bear on such decisions.
I propose that most technologists decide, explicitly or implicitly,
that they will ride with the status quo, believing that
1) there are processes by which errant behavior on the part of
political or military leaders is corrected;
2) they may subsequently have the option of taking a
different role in deciding how the technology will be used;
3) the status quo is what they are most knowledgeable about,
and other options are difficult to evaluate;
4) there is always a finite likelihood that a decision may,
in retrospect, prove wrong, even though it was the best
choice available to them as decision-maker.
Such a decision is not that some set of consequences is, on balance,
good or bad, but that there is a process by which one may hope to
minimize catastrophic consequences of an imperfect, forced-choice
decision-making process.
|Representative democracy imposes upon citizens a responsibility for
|judging moral choices made by the leaders they elect. It seems to me
|that anyone presumed to be capable of judging others' moral choices
|should be presumed capable of making their own.
|
|It also seems to me that responsibility for judging the likely outcome
|of one's actions is not a thing that humans can evade, and I applaud
|Weizenbaum for pointing out that scientists and engineers bear this
|responsibility as much as anyone else.
I think the exhortations attributed to Weizenbaum are shallow and
simplistic. If one persuades oneself that one is doing what Weizenbaum
proposes, one simply defers the more difficult task of modifying one's
decision-making as further information/experience becomes available
(e.g., by revising a belief set such as that above).
Tom
------------------------------
Date: 4 Dec 86 19:05:55 GMT
From: adobe!greid@decwrl.dec.com (Glenn Reid)
Subject: Re: Proposed: a split of this group
>> I would like to suggest that this group be split into two groups;
>>one about "doing AI" and one on "philosophising about AI", the latter
>>to contain the various discussions about Turing tests, sentient computers,
>>and suchlike.
>
>Good idea. I was beginning to think the discussions of "when is an
>artifice intelligent" might belong in "talk.ai." I was looking for
>articles about how to do AI, and not finding any. The trouble is,
>"comp.ai.how-to" might have no traffic at all.
How do you "do" AI without talking about what it is that you are
trying to do?
Seems to me that discussions about cognitive modeling and Turing
tests and whatever else are perfectly acceptable here, if not
needed. But I could live without the "sentient computers" book
lists.
But you're right. Maybe we should post data structures or
something. Doesn't it always come down to data structures?
------------------------------
Date: Tue, 2 Dec 86 21:00:34 cst
From: Girish Kumthekar <kumthek%lsu.csnet@RELAY.CS.NET>
Subject: Proposed Split
I do support the idea of splitting the group (especially till people stop
abusing it by sending volumes on Searle & ugh .........). However, I think
it may put more workload on Ken, and also may sometimes put him in a quandary
as to which group a message might belong.
Hope we can come up with a decent solution.
Girish Kumthekar
kumthek%lsu@CSNET-RELAY.csnet
Tel # (504)-388-1495
[Actually, routing messages to appropriate lists has seldom been
a problem -- but thanks for the thought. As a theoretical issue,
I agree with those who like to keep the digest flexible so that we
can be stimulated by ideas outside our own subfields. In practice,
though, the digest has gotten a bit large for a volunteer moderator
to handle (in addition to professional and familial duties). I am
worried that the Arpanet side of the list may collapse if I have to
give up this hobby. Perhaps the rate of mailer problems and other
administrative matters will decrease as the network adjusts to all
the new conventions and hosts that have been added lately. -- KIL]
------------------------------
Date: 5 Dec 86 16:55:54 GMT
From: ihnp4!houxm!houem!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: Proposed: a split of this group
In <1991@adobe.UUCP>, greid@adobe.UUCP (Glenn Reid) replies to a
suggestion by jbn@glacier.UUCP (John Nagle)...
"that this group be split into two groups; one about 'doing AI' and one
on 'philosophising about AI', the latter to contain the various
discussions about Turing tests, sentient computers, and suchlike."
... with the question: "How do you 'do' AI without talking about what
it is that you are trying to do?"
Maybe we ought to split on the basis of what we are trying to do. I
suggested in my own response <720@houem.UUCP> that "we just try always
to create something more intelligent than we created before... That
way we can not only claim nearly instant success, but also continue to
have further successes without end."
That joke has a serious component. What some of us are trying to do
is imitate known intelligence, and particularly human intelligence.
Others (including myself) are just trying to do artificially as much as
possible of the work for which we now depend on human intelligence.
Actually, I am looking at an application, not inventing methods.
Those of us who are not trying to imitate human intelligence may
ultimately surpass human intelligence. But we can pursue our goal
without knowing how to measure or test artificial intelligence. My
main problem is that I don't know how the people who do it think about
their methods, so I want to hear about methods.
Marty
M. B. Brilliant (201)-949-1858
AT&T-BL HO 3D-520 houem!marty1
------------------------------
Date: 5 Dec 86 16:16:00 GMT
From: bsmith@p.cs.uiuc.edu
Subject: Re: Proposed: a split of this group
There is a serious problem with having any notesfile with "philosophy"
in its name--just look at talk.philosophy.misc. There, an endless
number of people who think philosophy consists of no more than just
spewing forth unsubstantiated opinions conduct what are laughably
called discussions but are really nothing other than name-calling
sessions (interlaced with ample supplies of vulgarities). Steven
Harnad has inspired discussions on this net which, perhaps, ought to
be in a separate notesfile, but I shudder to think what such a
notesfile would be like. One suggestion--given the ugliness of
talk.philosophy.misc, I think this new notesfile ought to be
moderated.
------------------------------
Date: 8 Dec 86 23:16:25 GMT
From: ladkin@kestrel.arpa (Peter Ladkin)
Subject: Re: Proposed: a split of this group
In article <603@ubc-cs.UUCP>, andrews@ubc-cs.UUCP (Jamie Andrews) writes:
> I should note at this point that, theoretically at least,
> there is already a newsgroup that is perfect for the
> philosophy of mind/intelligence/AI discussion. It's called
> talk.philosophy.tech, and has been talked about as an official
> newsgroup for some time.
I am the `moderator' of this group, which is dormant pending
submissions. There was some trouble starting it up, and so
I maintained a mailing list for a while. I no longer do so.
If there is interest, we can try to start it up again. The
interested parties just went back to their old groups when
we had so much trouble propagating it.
peter ladkin
ladkin@kestrel.arpa
------------------------------
End of AIList Digest
********************
∂09-Dec-86 0429 LAWS@SRI-STRIPE.ARPA AIList Digest V4 #282
Received: from SRI-STRIPE.ARPA by SAIL.STANFORD.EDU with TCP; 9 Dec 86 04:29:25 PST
Date: Mon 8 Dec 1986 22:31-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-STRIPE.ARPA>
Reply-to: AIList@SRI-STRIPE.ARPA
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA 94025
Phone: (415) 859-6467
Subject: AIList Digest V4 #282
To: AIList@SRI-STRIPE.ARPA
AIList Digest Tuesday, 9 Dec 1986 Volume 4 : Issue 282
Today's Topics:
Philosophy - Conscious Computers & Dijkstra Quote &
Brains vs. TTT as Criteria for Mind/Consciousness
----------------------------------------------------------------------
Date: Thu, 4 Dec 86 21:48:55 EST
From: "Keith F. Lynch" <KFL%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU>
Subject: Conscious computers
From: mcvax!ukc!rjf@seismo.css.gov (R.J.Faichney)
... to ascribe consciousness to something man-made, no matter how perfect
its performance, will always require an effort of will. Nor could it
ever be intellectually justified. ... You may be willing to identify
with something which can do anything you can. I am not. And, though this
is obviously sheer guesswork, I'm willing to bet a lot of money that the
vast majority of people (*not* of AIers) would be with me.
Don't forget that "performance" doesn't just mean that it can play
chess or build a radio as well as you can. It also means it could write
one of these net messages, claiming that it is conscious but that it has
no way to be sure that anyone else is, etc.
The net is an excellent medium for Turing tests. Other than our
knowledge of the current state of the art, we have no evidence that any
given contributor is human rather than a machine.
Let me play the Turing game in reverse for a moment, and ask if you
would bet a lot of money that nobody would regard a computer as
conscious if it were to have written this message?
...Keith
------------------------------
Date: 4 Dec 86 15:16:24 EST
From: David.Harel@theory.cs.cmu.edu
Subject: another dijkstra quote
[Forwarded from the CMU bboard by Laws@SRI-STRIPE.]
I need a reference to another dijkstra quote:
"The question of whether computers can think is just like
the question of whether submarines can swim."
(this is a really nice one, I think...)
Thanks in advance
David Harel x3742, harel@theory
------------------------------
Date: 4 Dec 86 07:55:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms>
Subject: brains vs. TTT as criteria for mind/consciousness
*** WARNING *** WARNING *** WARNING *** WARNING *** WARNING ***
***
*** Philosophobes (Sophophobes?) beware, industrial-strength
*** metaphysics dead ahead. The faint of heart should skip
*** forward about 350 lines...
***
*************************************************************************
Recall that the main issue here is how important a criterion
brainedness (as opposed to performance/the TTT) is for mindedness.
My main reason for asserting its importance is that I take "mind" to
mean, roughly, "conscious intelligence", where consciousness is
epitomized by such things as seeing colors, feeling pain, and
intelligence by playing chess, catching mice. No one has objected
strenuously to this definition, so I'll assume we kind of agree.
While performance/TTT can be decisive evidence for intelligence, it
doesn't seem to me to be nearly as strong evidence for consciousness
out of context, ie when applied to non-brained entities. So in the
following I will try to assess in exactly what manner brains and/or
performance provide evidence for consciousness.
I had earlier written that one naively knows that his mind causes his
performance and scientifically knows that his brain causes his mind,
and that *both* of these provide justifiable bases for induction to
other entities.
S. Harnad, in reply, writes:
> Now on to the substance of your criticism. I think the crucial points
> will turn on the difference between what you call "naively know" and
> "scientifically know." It will also involve (like it or not) the issue
> of radical scepticism, uncertainty and the intersubjectivity and validity of
> inferences and correlations. ...
>
> Scientific knowing is indirect and inferential. It is based on
> inference to the best explanation, the weight of the evidence, probability,
> Popperian (testability, falsifiability) considerations, etc. It is the
> paradigm for all empirical inquiry, and it is open to a kind of
> radical scepticism (scepticism about induction) that we all reasonably
> agree not to worry about...
>
> What you call "naive knowing," on the other hand (and about which you
> ask "*how* do I know this?") is the special preserve of 1st-hand,
> 1st-person subjective experience. It is "privileged" (no one has
> access to it but me), direct (I do not INFER from evidence that I am
> in pain, I know it directly), and it has been described as
> "incorrigible" (can I be wrong that I am feeling pain?). ..
>
> You say that I "naively know" that my performance
> is caused by my mind and I "scientifically know" that my mind is caused
> by my brain. ...Let me translate that: I know directly that my
> performance is caused by my mind, and I infer that my
> mind is caused by my brain. I'll go even further (now that we're
> steeped in phenomenology): It is part of my EXPERIENCE of my behavior
> that it is caused by my mind. [I happen to believe (inferentially) that
> "free will" is an illusion, but I admit it's a phenomenological fact
> that free will sure doesn't FEEL like an illusion.] We do not experience our
> performance in the passive way that we experience sensory input. We
> experience it AS something we (our minds) are CAUSING. (In fact, that's
> probably the source of our intuitions about what causation IS. I'll
> return to this later.)
>
> So there is a very big difference between my direct knowledge that my
> mind causes my behavior and my inference (say, in the dentist's chair)
> that my brain causes my mind. ...So, to put it briefly,
> what I've called the "informal component" of the Total Turing Test --
> does the candidate act as if it had a mind (i.e., roughly as I would)? --
> appeals to precisely those intuitions, and not the inferential kind, about
> brains, etc.
>
> In summary: There is a vast difference between knowing causes
> directly and inferring them; subjective phenomena are unique and
> radically different from other phenomena in that they confer this
> direct certainty; and inferences about other minds (i.e., about
> subjective phenomena in others) are parasitic on these direct
> experiences of causation, rather than on ordinary causal inference,
> which carries little or no intuitive force in the case of mental
> phenomena, in ourselves or others. And rightly not, because mind is a
> private, direct, subjective matter, not something that can be
> ascertained -- even in the normal inductive sense -- by public,
> indirect, objective correlations.
Completely agreed that one's knowledge about one's own consciousness
is attained in a very different way than is "ordinary" knowledge.
The issue is how the provenance of this knowledge bears upon its
application to the inductive process for deciding who else has a
mind. Rather than answer point-by-point, here is a scenario
which I think illustrates the issues:
Assume the following sequence of events:
A1. a rock falls on your foot (public external event)
B1. certain neural events occur within you (public internal event)
C1. you experience a pain "in your foot" (private)
D1. you get angry (private)
E1. some more neural events occur (public internal)
F1. you emit a stream of particularly evocative profanity (public
external)
(a more AI-oriented account would be:
A1'. someone asks you what 57+62 is
B1'. neural events
C1'. you "mentally" add the 7 and 2, etc..
D1'. you decide to respond
E1'. neural events
F1'. you emit "119"
)
Now, how much do you know, and how do you know it? Regarding the
mere existence and, to some level of detail, the quality, of these
events (ignoring any causal connections for the moment):
You know about A1 and F1 through "normal sensory means" of
finding out about the world.
You know about C1 and D1 through "direct incorrigible(?)
awareness" of your own consciousness (if you're not aware of
your own consciousness, who is?)
You know about B1 and E1 (upon reflection) only inferentially/
scientifically, via textbooks, microscopes, undergraduate courses...
Now, even though we know about these things in different ways,
they are all perfectly respectable cases of knowledge (not
necessarily certain, of course). It's not clear why we should
be shy about extrapolating *any* of these chunks of knowledge
in other cases...but let's go on.
What do we know about the causal connections among these events?
Well, if you're an epiphenomenalist, you probably believe something
like:
         C1,D1
        /
A1 -> B1 -> E1 -> F1
the point being that mental events may be effects, but not causes,
especially of non-mental events. If you're an interactionist:
A1 -> B1 -> C1 -> D1 -> E1 -> F1
(Identity theorists believe B1=C1, E1=D1. Let's ignore them
for now. Although, for what it's worth, since they *identify*
neural and mental events, I assume that for them brainedness
would be, literally, the definitive criterion for mentality.)
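[The difference between the two pictures can be stated mechanically -- a toy editorial sketch, not part of the original post: under epiphenomenalism there is no directed path from the mental events C1, D1 to the performance F1, while under interactionism there is.]

```python
# Toy sketch (an editorial illustration): represent each causal picture
# as a directed graph and test whether the mental events C1/D1 can
# reach the performance event F1.

def reaches(graph, start, goal):
    """Depth-first search: is there a directed path from start to goal?"""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

# Epiphenomenalism: mental events hang off the chain as effects only.
epi = {"A1": ["B1"], "B1": ["C1", "D1", "E1"], "E1": ["F1"]}

# Interactionism: the chain runs through the mental events.
inter = {"A1": ["B1"], "B1": ["C1"], "C1": ["D1"], "D1": ["E1"], "E1": ["F1"]}

print(reaches(epi, "C1", "F1"))    # False: mind causes nothing downstream
print(reaches(inter, "C1", "F1"))  # True: mind lies on the path to behavior
```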
Now, in either case, what is the basis for our belief in causation,
especially causation of and by C1 and D1? This raises tricky
questions - what, in general, is the rational basis for belief in
causation? Does it always involve an implicit appeal to a kind of
"scientific method" of experimentation, etc.? Can we ever detect
causation in a single instance, without any knowledge of similar
types of events? Does our feeling that we are causing some external
event have any value as evidence?
Fortunately, I think that we need to determine *neither* just what are
the rational grounds for belief in causation, *nor* whether the
epiphenomenal or interactionist picture is true. It's enough just to
agree (don't we?) that B1 is a proximate (more than A1, anyway) cause
of C1, and that we know this. Of course A1 is also a cause of C1,
via B1.
Now the only "fishy" thing about one's knowledge that B1 causes C1
is that C1 is a private event. But again, so what? If you're lying
on the operating table, and every time the neurosurgeon pokes you
at site X, you see a yellow patch, your inference about causal
connections is just as sound as if you walked in a room and
repeatedly flicked a switch to make the lights go on and off.
It's too bad that in the first case the "lights" are private, but
that in no way disbars the causation knowledge from being used
freely. The main point here is that our knowledge that Bx's cause
Cx's is entirely untainted and projectible. The mere fact that it is
ultimately grounded in our direct knowledge of our own experience in
no way disqualifies it (after all, isn't *all* knowledge ultimately
so grounded?). [more below on this]
Now then, suppose you see Mr. X undergoing a similar ordeal - A2, B2,
??, ??, E2, F2. You can see, with normal sensory means, that A2 is
like A1, and that F2 is like F1 (perhaps somewhat less evocative, but
similar). You can find out, with some trouble, that B2 is like B1 and
E2 is like E1. On the basis of these observations, you fearlessly
induce that Mr. X probably had a C2 and D2 similar to your C1 and D1,
ie that he too is conscious, even though you can never observe C2 and
D2, either through the normal means you used for A2, B2.. or the
"privileged" means you used for C1 and D1.
Absent any one of these visible similarities, the induction is
weakened. Suppose, for instance he had B2 but not A2 - well OK,
he was hallucinating a pain, maybe, but we're not as sure.
Suppose he had A2, but not B2 - gee, the thing dropped on his foot
and he yelled, but we didn't see the characteristic nerve firings..
hmmm (but at least he has a brain).
But now suppose we observe an AI-system:
A3. a rock falls on its foot
BB3. certain electronic events occur within it
C3. ??
D3. ??
EE3. some more electronic events occur
F3. it emits a stream of particularly evocative profanity
Granted A3 and F3 are similar to A1 and F1 - but you know that
BB3 is, in many ways, not similar to B1, nor EE3 to E1. Of course,
in some structural ways, they may be similar/isomorphic/whatever
to B1 and E1, but not nearly as similar as B2 and E2 are (Mr. X's
neural events). Surely your reasons for believing that C3, D3
exist/are similar to C1 and D1 are much weaker than for C2, D2,
especially given that we agree at least that B1 *caused* C1, and that
causation operates among relevantly similar events. Surely it's a
much safer bet that B2 is relevantly similar to B1 than is BB3, no?
(even given the decidedly imperfect state of current brain science.
We needn't know exactly WHAT our brain events are like before we
rationally conclude THAT they are similar. Eg, in 1700, people, if
you asked them, probably believed that stars were somewhat similar in
their internal structure, the way they worked, even though they
didn't have any idea what that structure was.) The point being that
brainedness supplies strong additional support to the hypothesis of
consciousness.
In fact, I'd be inclined to argue that brainedness is probably
stronger evidence (for a conscious entity who knows himself to be
brained) for consciousness than performance:
1. Proximate causation is more impressive than mediated causation.
Consider briefly what we would say about someone (a brained someone)
who lacked A and F, but had B and E, ie no outward stimulus or
response, but in whom we observed neural patterns very similar to
those normally characteristic of people feeling a sharp pain in their
foot (never mind the grammar). If I were told that he or the
AI-system (however sophisticated its performance) was in pain, and I
had to bet which one, I'd bet on him, because of the *proximate
causation* presumed to hold between B's and C's, but not established
at all between BB's and C's.
2. Causation between B's and C's is more firmly established than
between D's and F's. No one seriously doubts that brain events
affect one's state of consciousness. Whether one's consciousness
counts as a cause of performance is an open question. It certainly
feels as if it's true, but I know of no knock-down refutation of
epiphenomenalism. You seem to equivocate, sometimes simply saying
we KNOW that our intentions cause performance, other times doubting.
But the TTT criterion depends by analogy on questionable D-F causation;
the brain criterion depends on the less problematic B-C causation.
3. Induction is more firmly based on analogy from causes than effects.
If you believe in the scientific method, you believe "same cause ergo
same effect". The same effect *suggests* the same cause, but doesn't
strictly imply it, especially when the effect is not proximate.
But the TTT criterion is based on the latter (weaker) kind of
induction, the brain criterion on the former.
> Consider ordinary scientific knowledge about "unobservables," say,
> about quarks ...Were you to subtract this inferred entity from the
> (complete) theory, the theory would lose its capacity to account for
> all the (objective) data. That's the only reason we infer
> unobservables in the first place, in ordinary science: to help
> predict and causally explain all the observables. A complete, utopian
> scientific theory of the "mind," in radical contrast with this, will
> always be just as capable of accounting for all the (objective) data
> (i.e., all the observable data on what organisms and brains do) WITH
> or WITHOUT positing the existence of mind(s)!
Well, not so fast... I agree that others' minds are unobservable in a
way rather different from quarks - more on this below. The utopian
theory explains all the objective data, as you say, but of course
this is NOT all the data. Quite right, if I discount my own
consciousness, I have no reason whatever to believe in that of
others, but I decline the antecedent, thank you. All *my* data
includes subjective data and I feel perfectly serene concocting a
belief system which takes my own consciousness into account. If the
objective-utopian theory does not, then I simply conclude that it is
incomplete wrt reality, even if not wrt, say, physics.
> In other words, the complete explanatory/predictive theory of organisms
> (and devices) WITH minds will be turing-indistinguishable from the
> complete explanatory/predictive theory of organisms (and devices)
> WITHOUT minds, that simply behave in every observable way AS IF they
> had minds.
So the TTT is in principle incapable of distinguishing between minded
and unminded entities? Even I didn't accuse it of that.
If this theory does not explain the contents of my own consciousness,
it does not completely explain to me every thing observable to me.
Look, you agree, I believe, that "events in the world" include a
large set S, publicly observable, and a lot of little sets P1, P2,
... each of which is observable only by one individual. An
epistemological pain in the neck, I agree, but there it is. If
utopian theory explains S, but not P1, P2, why shouldn't I hazard a
slightly more ambitious formulation (eg, whenever you poke an x-like
site in someone's brain, they will experience a yellow patch...) ?
Don't we, in fact, all justly believe statements exactly like this?
> That kind of inferential indeterminacy is a lot more serious than the
> underdetermination of ordinary scientific inferences about
> unobservables like quarks, gravitons or strings. And I believe that this
> amounts to a demonstration that all ordinary inferential bets (about
> brain-correlates, etc.) are off when it comes to the mind.
I don't get this at all ...
> The mind (subjectivity, consciousness, the capacity to have
> qualitative experience) is NEITHER an ordinary, intersubjectively
> verifiable objectively observable datum, as in normal science, NOR is
> it an ordinary unobservable inferred entity, forced upon us so that
> we can give a successful explanatory/predictive account of the
> objective data. Yet the mind is undoubtedly real. We know that,
> noninferentially, for one case: our own.
I couldn't agree more.
> Perhaps I should emphasize that in the two "correlations" we are
> talking about -- performance/mind and brain/mind -- the basis for the
> causal inference is radically different. The causal connection between
> my mind and my performance is something I know directly from being the
> performer. There is no corresponding intuition about causation from
> being the possessor of my brain. That's just a correlation, depending
> for its causal interpretation (if any), on what theory or metatheory I
> happen to subscribe to. That's why nothing compelling follows from
> being told what my insides are made of.
Addressing the latter point first: I think there's nothing wrong
with pre-theoretic beliefs about causation. If, every time I flip
the switch on the wall, the lights come on, I will develop a true
justified belief (=knowledge) about the causal links between the
switch and the light, even in the absence of any knowledge on my
part (or anyone else's for that matter) of how the thing works.
But the main issue here is the difference in the way we know about
the correlations. I think this difference is just incidental. We
are familiar with A and F type events, not so much with B and E
types, and so we develop intuitions regarding the former and not the
latter. If you had your brain poked by a neurosurgeon every day,
you'd quickly develop intuitions about brain-pokes and yellow
patches. Conversely, if you were strapped down or paralyzed from
birth, you would not develop intuitions about your mind's causal
powers.
Further, one may *scientifically* investigate the causal connections
among B1, C1, D1, and E1, and among A1 and F1 as well, as long as
you're willing to take people's word for it that they're in pain,
etc. (and why not?). Just because we usually find out about some
correlations in certain ways doesn't mean we can't find out about
them in others as well.
And even if the difference weren't incidental, it is unclear why
mysterious Cartesian-type intuitions about causation between Ds and
Fs are to be preferred to scientific inferential knowledge about Bs
and Cs as a basis for induction.
"It may be nonsense, but at least it's clever nonsense" - Tom Stoppard
John Cugini <Cugini@NBS-VMS>
------------------------------
End of AIList Digest
********************